Since 1995, GAO has focused on the chemical and biological defense area, which has resulted in a series of reports and testimonies before Congress on DOD’s efforts to prepare troops to survive and operate in a chemically and biologically contaminated environment. Major problem areas have included shortfalls in equipment, training, and reporting and weaknesses in coordinating program research and development activities. Although DOD has taken significant actions to improve the program and has increased its funding, serious problems still persist. Our first major report, issued in March 1996, discussed the overall capability of U.S. forces to fight and survive chemical and biological warfare and is the centerpiece for much of the work we have performed since then. We reported that DOD was slow in responding to the lessons learned during the Gulf War. Specifically, early deploying units lacked required equipment such as chemical detector paper, decontamination kits, and sufficient quantities of protective clothing; Army and Marine forces remained inadequately trained for effective chemical and biological defense; joint exercises included little chemical or biological defense training; Army medical units often lacked chemical and biological defense equipment and training; biological agent vaccine stocks and immunization plans remained inadequate; and research and development progress was slower than planned. We also reported that the Joint Chiefs of Staff’s Status of Resources and Training System (SORTS) — DOD’s system for reporting the overall readiness of units — was of limited value in determining the readiness of units to operate in a chemically or biologically contaminated environment. The system was established to provide the current status of specific elements considered essential to readiness assessments, such as personnel and equipment on hand, equipment condition, and training. 
However, we found that this system allowed commanders to report their unit’s overall readiness subjectively regardless of the unit’s actual readiness to operate in a chemically or biologically contaminated environment. We concluded that chemical and biological defense equipment, training, and medical problems were persisting and, if not addressed, were likely to result in needless casualties and a degradation of U.S. war fighting capability. We noted that despite DOD’s increased emphasis on chemical and biological defense, it continued to receive a lower priority than traditional mission tasks at all levels of command. Many field commanders accepted a level of chemical and biological defense unpreparedness and told us that the resources devoted to that area were appropriate, given other threat concerns and budgetary constraints. When we looked again in 2000 at the readiness of early deploying U.S. forces to operate in a chemically or biologically contaminated environment, we found the situation generally improved. Units we reviewed included three Army divisions, two Air Force fighter wings, and one Marine Corps expeditionary force. Military units are generally expected to have at least 70 percent of their equipment requirements on hand. The units we visited had all their required individual protective equipment (such as suits, boots, and gloves) and most chemical and biological medical supplies and detection and decontamination equipment needed to operate in a chemically or biologically contaminated environment. In the medical arena, the Army divisions had all their needed medical supplies. The Air Force wings had most of their medical supplies, but we noted shortages of some critical items. For example, one wing had only 25 percent of the protective masks required to treat contaminated patients and only 48 percent of required patient decontamination kits. 
The units we visited had shortages in detection and decontamination equipment, but these shortages varied both across and within the services. For example, one Marine Corps unit and one Air Force unit had 31 percent and 50 percent, respectively, of their chemical agent monitors, whereas the other Air Force unit had 100 percent of its monitors. The three Army units we reviewed had between 88 and 103 percent of their requirements for the same item. Officials at the units with shortages of equipment said that when the units deploy, the shortages would be filled from stocks held by later deployers or from war reserves. However, the units had not determined whether this solution would meet their equipment requirements or what impact this action might have on the later deploying units’ capabilities or on war reserves. The medical readiness of some units to conduct operations in a contaminated environment therefore remained questionable. Chemical and biological defense training continues to be a problem area. We reported in 1996 that commanders were not integrating chemical and biological defense into unit exercises and that the training was not always realistic in terms of how units would operate in wartime. For example, Marine Corps commanders did not fully integrate chemical and biological defense into unit exercises, as required by Marine Corps policies, because operating in protective equipment is difficult and time consuming and this (1) decreases the number of combat essential tasks that can be performed during an exercise and (2) limits offensive combat operations. Officials stated that chemical and biological defense training is still being adversely impacted by (1) a shortage of chemical and biological defense specialists and (2) the fact that these specialists are often assigned multiple responsibilities unrelated to their specialties. 
For example, Army units we reviewed had from 76 to 102 percent of their authorized enlisted chemical personnel and from 75 to 88 percent of their chemical officers. The Marine Corps unit we visited had 84 percent of its authorized enlisted chemical specialists and 80 percent of its chemical officers. We also reported that DOD’s monitoring of chemical and biological defense readiness has improved since our 1996 report. In April 2000, the Joint Chiefs of Staff directed changes to the Status of Resources and Training System that would require units to report more clearly on the quantity of chemical and biological equipment on hand and on training readiness. However, we noted the changes do not require that units report on the condition of their chemical and biological defense equipment. Thus, these reports could indicate that a unit had its chemical and biological equipment, but they would not show whether this equipment was serviceable. We have issued a series of reports that address DOD’s coordination of chemical and biological defense research and development programs. For example, in September 1998 we reported on DOD’s approach to addressing U.S. troop exposures to low levels of chemical warfare agents. Low-level exposure is a concern because it may cause or contribute to health problems that may not become evident for years after exposure. Specifically, we reported that DOD did not have an integrated strategy to address exposure to low levels of chemical warfare agents. Past research by DOD and others indicated that single and repeated low-level exposures to some chemical warfare agents could result in adverse psychological, physiological, behavioral, and performance effects that may have military implications. We also highlighted limitations of the current research. 
DOD had allocated nearly $10 million (about 1.5 percent) of its chemical and biological defense research, development, testing, and evaluation program to fund projects on low-level chemical warfare agent exposures. In August 1999 we reported on the coordination of federal research and development efforts to develop nonmedical technology related to chemical and biological defense, an issue that DOD had not addressed until recently. We identified four programs engaged in activities ranging from applied research to prototype development: two of these programs developed technologies primarily for military war fighting applications, and two others developed technologies primarily to assist civilians responding to terrorist incidents. We concluded that the formal and informal program coordination mechanisms may not ensure that potential overlaps, gaps, and opportunities for collaboration would be addressed. We highlighted that agency officials were aware of the deficiencies in the existing coordination mechanisms and that some had initiated additional informal contacts. We are currently reviewing the effectiveness of DOD’s research and testing activities in providing the scientific information needed to address doctrinal, policy, and procedural shortcomings affecting DOD’s ability to operate in a chemically contaminated environment, as well as DOD’s approach to ensuring the survivability of mission-essential systems in the case of a chemical or biological attack. DOD’s work in this area is crucial for developing the means to ensure the restoration of operations in the event of chemical and biological attacks on U.S. forces at critical overseas depots, ports, and airfields. Individual protection is a critically important component of the overall chemical and biological defense program. DOD has recognized that military service members may not be able to avoid exposure to chemical and biological agents and has consequently provided U.S. 
forces with individual protective equipment, including clothing ensembles. We have conducted several recent reviews on this subject and are continuing to focus on DOD’s acquisition and management of this equipment because of the potential for increased risks in this area. Specifically, our primary concerns involve DOD’s (1) process for assessing the risk of wartime protective equipment shortages, (2) plans for addressing projected suit shortages due to the expiration by 2007 of most of the existing inventory, and (3) related inventory management and business practices. After updating equipment status and trends, we will discuss our recent reports and ongoing work in this area. Until recently, DOD calculated its chemical and biological defense equipment needs in one of two ways: by assessing either how much would be needed to prevail in two nearly simultaneous major theater wars (often referred to as the “2-MTW” requirement) or how much would be needed to fight two MTWs as well as maintain supplies for peacetime and training use, the “total service requirement.” In its most recent Annual Report to Congress, for example, DOD reported both inventory and these requirements for each item as of the end of fiscal year 2001. The report shows that several items, particularly in Navy stocks, qualify as “high-risk”; that is, less than 70 percent of needed equipment is on hand. Other items, such as masks, are “low-risk”; that is, the services have more than 85 percent of the needed equipment on hand. (We have been able to update some of the data, in which we generally found only modest changes from the data we show here.) Figure 1 shows these inventory levels, by service, for key components of the protective clothing ensemble. We found, though, that the raw data may understate the real risk because the method that DOD has used to calculate risk may be flawed. 
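Why item-by-item tallies can understate risk: a service member needs every component of the ensemble at once, so the number of complete ensembles is bounded by the scarcest item. The sketch below illustrates this with entirely hypothetical quantities and the report's risk thresholds (less than 70 percent on hand is high risk; more than 85 percent is low risk); it is not DOD data or DOD's actual methodology.

```python
# Hypothetical illustration: per-item risk ratings vs. ensemble-level risk.
# All quantities are invented for illustration only.
requirement = 100_000  # service members to be equipped (assumed)

on_hand = {            # assumed on-hand counts per ensemble component
    "suits":   90_000,
    "masks":   95_000,
    "filters": 88_000,
    "gloves":  60_000,  # the scarcest component
    "boots":   87_000,
    "hoods":   92_000,
}

def risk(count, required):
    """Thresholds taken from the report: <70% on hand is high risk,
    >85% is low risk; anything in between is treated here as moderate."""
    pct = 100 * count / required
    return "high" if pct < 70 else "low" if pct > 85 else "moderate"

# Item-by-item view: five of the six components rate "low" risk.
item_risks = {item: risk(n, requirement) for item, n in on_hand.items()}

# Ensemble view: only as many complete ensembles as the scarcest item allows.
complete_ensembles = min(on_hand.values())
ensemble_risk = risk(complete_ensembles, requirement)

print(item_risks)       # most items look fine in isolation
print(ensemble_risk)    # but the ensemble-level rating is "high"
```

Under these assumed numbers, every component except gloves rates "low" risk in isolation, yet only 60 percent of members could be fully equipped, a "high" risk at the ensemble level.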
In September 2001, we reported that DOD’s criteria for assessing the risk of wartime shortages for protective clothing are unreliable. At that time we found that DOD had inaccurately reported the risk in most cases as “low.” We reported that the process for determining risk is fundamentally flawed because (1) DOD determines requirements by individual pieces of protective equipment — suits, masks, breathing filters, gloves, boots, and hoods — rather than by the number of complete protective ensembles that can be provided to deploying service members, and (2) the process for determining risk combines individual service requirements and reported inventory data into general categories, masking specific critical shortages that affect individual service readiness. Had DOD assessed the risk on the basis of the number of complete ensembles it had available, by service, the risk would have risen to “high” for all of the services. As a result of the September 2001 Quadrennial Defense Review, DOD has begun to reexamine its requirements. At present, there are several requirements levels against which inventory is measured. Official reports have commonly used the “2 Major Theater War” and the “Total Service Requirement” standards. New interim guidance indicates that DOD should be able to fully meet conflict equipment needs in one theater, while meeting only partial requirements in another. This requirement, which is expected to be finalized when DOD publishes the Illustrative Planning Scenario annex to its Defense Planning Guidance, is referred to as the “150 percent of an MTW” option. Whatever the official requirement, the risk to U.S. forces may be increasing for two reasons. First, DOD has not yet revised its risk assessment process to consider ensemble needs and service imbalances. 
Second, suit shortages are projected to escalate in the next few years because (1) the majority of suits in the current inventory will reach the end of their useful life and expire by 2007, and (2) new Joint Service Lightweight Integrated Suit Technology (JSLIST) suits, along with other new-generation protective ensemble components such as gloves and boots, are not entering the inventory as quickly as originally planned. Consequently, the old suits are expiring faster than they are being replaced. We are concerned that some ensemble components, particularly suits, may not be available in adequate numbers to meet near-term minimum requirements. As of August 30, 2002, DOD had procured about 1.5 million of the new JSLIST suits, the majority of which were issued to the military services. (Others are held in Defense Logistics Agency reserves, provided to foreign governments under the Foreign Military Sales program, or allocated to domestic uses.) Together with the existing inventory of earlier-generation suits, we estimate that DOD has a total of 4.5 million suits. This level is now barely sufficient to meet the new requirement to supply 150 percent of an MTW. It is far below the Army-chaired Joint Nuclear, Biological, and Chemical Defense Board requirement, called the Joint Acquisition Objective, which combines elements of DOD and service calculations. If new suit funding and production do not increase sufficiently to replace the expiring suits, the inventory will drop below even the minimal needs of the 150 percent of an MTW requirement until at least 2007. The risk of protective clothing shortages may therefore increase dramatically during this period. Figure 2 illustrates this trend. Inadequate inventory management is an additional risk factor because readiness can be compromised by DOD’s inventory management practices, which prevent an accurate accounting of the availability or adequacy of DOD’s protective equipment. 
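The expiration dynamic described above, with old suits leaving the inventory faster than new JSLIST suits arrive, can be sketched as a simple year-by-year model. All rates and quantities below are invented for illustration; they are not DOD figures.

```python
# Hypothetical projection of total protective-suit inventory, showing how
# stock can dip below a requirement when expirations outpace procurement.
old_suits = 3_000_000        # earlier-generation suits (assumed)
jslist_suits = 1_500_000     # new-generation suits on hand (assumed)
requirement = 4_000_000      # e.g. a "150 percent of an MTW" level (assumed)

expirations_per_year = 600_000   # old suits reaching end of useful life (assumed)
procurements_per_year = 300_000  # new suits entering inventory (assumed)

totals = {}
for year in range(2003, 2008):
    old_suits = max(0, old_suits - expirations_per_year)
    jslist_suits += procurements_per_year
    totals[year] = old_suits + jslist_suits
    status = "meets requirement" if totals[year] >= requirement else "SHORTFALL"
    print(year, f"{totals[year]:,}", status)
```

Under these assumed rates the stock falls below the requirement from 2004 onward and keeps declining through 2007, when the old suits are fully expired, the same trend the report attributes to figure 2.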
The practices we identified regarding inventories of chemical and biological equipment contribute to the development of erroneous inventory data that in turn affect the accuracy of the risk assessment. Specifically, we reported the following:

- DOD could not monitor the status of the entire inventory of protective equipment because the services and the Defense Logistics Agency use at least nine different inventory management systems with differing data fields to manage suit inventories. The systems’ records contain data that cannot be easily linked.
- DOD could not determine whether its older suits would adequately protect service members because some of the systems’ records omit essential data on suit expiration.
- DOD could not easily identify, track, and locate defective suits because inventory records do not always include contract and lot numbers. In May 2000, DOD directed units and depots to locate 778,924 defective suits produced by a single manufacturer; as of July 2002, as many as 250,000 of these suits remained unaccounted for.
- DOD counted new suits as on hand before they had been delivered and consequently overstated the actual inventory. In response to one of our report recommendations, DOD now reports “on hand” and “due-in” suits separately in its Annual Program Report to the Congress.

We have also testified before this Committee as part of our work on the need for DOD to reform its business operations. We noted that inventory management procedures related to JSLIST suits, systems, and processes result in DOD, the military services, and military units not knowing how many items they have and where they are located. DOD’s business processes for procuring, controlling, and paying for JSLIST suits rely on manual data transmission and entry into nonintegrated data systems. We identified 128 processing steps performed by 11 DOD components, such as the Defense Logistics Agency, Defense Finance and Accounting Service, and the military services. 
Of the 128 steps, 100, or 78 percent, involved manual entry or re-entry of data into one or more of the 13 nonintegrated data systems supporting the JSLIST processes. This complex, nonintegrated, error-prone process precludes DOD from being able to quickly and accurately identify the suits’ location and condition. Further, at the military units that GAO visited, the methods used to control and maintain visibility over the JSLIST suits issued to them ranged from automated information systems to spreadsheet applications, paper records, dry-erase boards, and, in some cases, nothing at all. The data maintained also varied: some units maintained specific data, including manufacturer, manufacture date, and production lot number, while other units maintained little or no data. DOD is now taking steps to correct this problem and improve asset visibility at all levels. As recently as 2000, for example, there was no single office that tracked all JSLIST suit production and fielding DOD-wide, and the annual report to Congress was compiled through data calls to each individual service and major command within the services. Now there is such an office: the Marine Corps, in its role as commodity area manager for individual protection, can report new production of JSLIST ensemble items (suits, boots, and gloves) and the services to which they have been fielded. Our work to date has found that the Marine Corps program office has established an effective system for managing this information. We are currently reviewing factors related to JSLIST production and the implications of the removal of the expiring suits from the inventory. 
Our work will (1) evaluate whether DOD’s requirements and activities for acquiring and sustaining chemical protective equipment provide the military with sufficient usable chemical and biological protective clothing ensembles; (2) assess DOD’s current risk assessment, testing, development, and production procedures; and (3) evaluate the effectiveness of DOD’s actions to mitigate any shortfalls. We plan to report our results early next year. Our body of work over 7 years highlights a serious gap between the priority DOD assigns to chemical and biological defense and the program’s actual implementation. Both the 1997 and 2001 Quadrennial Defense Reviews identified chemical and biological defense as a key priority of the Department of Defense. Although the program overall is clearly improved and better funded than in 1995, many of the problems we previously reported still have not been resolved. We are concerned that DOD’s efforts to implement this program are not consistent with the emphasis given to it in overall department guidance. Organizational complexity, vacancies in key positions, and priority conflicts have all contributed to program difficulties and, if not resolved, will continue to weaken DOD’s management of this program. The management of the Chemical and Biological Defense Program is diffuse, with numerous offices and activities responsible for separate aspects, notwithstanding the attempt of the National Defense Authorization Act for Fiscal Year 1994 (P.L. 103-160) to bring oversight under one organizational authority. Concurrence on program direction is therefore sometimes difficult to achieve. 
This act required the Secretary of Defense to assign responsibility for overall coordination and integration of the Chemical and Biological Defense Program to a single office within the Office of the Secretary of Defense (OSD) and to designate the Army as executive agent to coordinate and integrate the chemical and biological research, development, test and evaluation, and acquisition requirements of the military departments. Although this office was established shortly thereafter, many aspects of DOD’s management of chemical and biological defense remain spread among this office, the military services, and other DOD organizations. Furthermore, each individual service also has numerous offices devoted to various aspects of chemical and biological defense, including planning, logistics, and acquisition. The services purchase their own consumable items, such as protective suit replacements, with their own operations and maintenance funds, a process over which OSD has limited visibility. Figure 3 depicts the current organization for DOD’s management of its Chemical and Biological Defense Program (CBDP), as well as some of the changes now being implemented or under consideration. The OSD office at the Assistant Secretary level that is charged with overall coordination of the Chemical and Biological Defense Program also went through upheaval during the latter part of the 1990s. The position was initially slated for elimination under the terms of the 1997 Defense Reform Initiative (DRI). As a result of the DRI, OSD oversight functions were transferred to a different staff office within the Office of the Secretary of Defense (Director, Defense Research and Engineering), while management and most staffing of the program were transferred to a directorate within the Defense Threat Reduction Agency (DTRA). This directorate, in turn, has had five directors in less than 4 years. 
We also believe that the emphasis DOD placed on the Chemical and Biological Defense Program was adversely affected by the absence of leadership at the Assistant Secretary level for nearly 4 years. In accordance with P.L. 103-160, the Secretary designated the Assistant to the Secretary of Defense for Nuclear, Chemical, and Biological Defense (ATSD) as the principal officer responsible for oversight and coordination of the program. However, this position was vacant from 1998 through late 2001. The Deputy ATSD position, which carries day-to-day oversight of the program, was also vacant for more than a year during that period. We believe these OSD vacancies diminished the high-level attention the program received, as well as its ability to compete for funding against other defense needs, thereby sending a message throughout the Department about the relative priority and importance attached to the program. DOD has requested almost $1.4 billion for the Chemical and Biological Defense Program in fiscal year 2003 — more than three times the fiscal year 1994 amount. Nevertheless, the program has consistently had difficulty competing against other service priorities, such as those associated with traditional mission tasks. Despite the emphasis placed on this program by the Quadrennial Defense Review, spending on chemical and biological defense represents about a third of a percent of the entire $369 billion DOD budget request. DOD officials and field commanders alike have repeatedly stressed that they must balance chemical and biological defense requirements against all other defense needs, and do so within a constrained budget environment. For example, as we reported in 1996, officers have cited other-than-war deployments, quality-of-life considerations, and peacetime medical care as higher priorities than chemical and biological defense. 
We have previously recommended that chemical and biological defense receive direct representation by a general officer on the Joint Staff in order to obtain appropriate program emphasis and support. DOD has recently implemented this change; it remains to be seen what its effect will be. Figure 4 shows the growth in Chemical and Biological Defense Program funding since fiscal year 1994. There is also competition within the program between the main categories of research and development and procurement. At present, some components of the clothing ensemble, such as the JSLIST glove and next-generation mask, are in the developmental phase; others, like the JSLIST suit, are in procurement. In deciding how much money to allocate to each of the various categories and specific projects, DOD relies on the Joint Priority List, which integrates and rank-orders the preferences of combatant commanders for all chemical and biological equipment needs. On this year’s Joint Priority List, for example, the JSLIST suit ranked 35 out of 72 items. Biodetection capabilities occupied the first spaces on that list. In fiscal year 2003, $96 million is earmarked for the procurement of JSLIST suits. Conflicts over internal program priorities thus can also affect issues such as shortages of JSLIST suits.
The Department of Defense (DOD) believes it is increasingly likely that an adversary of the United States will use chemical or biological weapons against U.S. forces to degrade superior U.S. conventional warfare capabilities, placing service members' lives and effective military operations at risk. During the past 6 years, GAO has identified many problems with DOD's capabilities to defend against chemical and biological weapons and sustain operations in the midst of their use. Although GAO has found that DOD has made some improvements--in equipment, training, and reporting, and in the coordination of research and development activities--it has continuing concerns in each of these areas. One particular issue is the supply of chemical protective clothing and the way associated risk is assessed. Due to the upcoming expiration of existing protective suits, the slower rate at which new suits are entering the inventory, and DOD's method of assessing risk for individual items rather than complete protective ensembles, GAO believes that the risk for protective clothing shortages may increase dramatically from now through 2007. GAO is also concerned that certain management weaknesses, such as program organizational complexity and prolonged vacancies in key leadership positions, may have sent a message throughout the department about the relative priority and importance of the Chemical and Biological Defense Program.
America’s interests in space, according to the National Space Policy, are to support a strong, stable, and balanced national space program that serves our goals in national security, foreign policy, economic growth, environmental stewardship, and scientific excellence. DOD policy states that space—like land, sea, and air—is a medium within which military activities shall be conducted to achieve national security objectives. The national security space sector consists primarily of military and intelligence activities. The Air Force is DOD’s primary procurer and operator of space systems and spends the largest share of defense space funds, averaging about 85 percent annually. The Army controls a defense satellite communications system and operates ground mobile terminals. The Navy operates several space systems that contribute to surveillance and warning and is responsible for acquiring the Mobile User Operations System, the next generation Ultra High Frequency satellite communication system. The U.S. Strategic Command is responsible for establishing overall operational requirements while the services are responsible for satisfying these requirements to the maximum extent practicable through their individual planning, programming, and budgeting systems. The Air Force Space Command is the major component providing space forces for the U.S. Strategic Command. The NRO designs, procures, and operates space systems dedicated to intelligence activities. The National Security Space Architect develops and coordinates space architectures for future military and intelligence activities. The Office of the Secretary of Defense, the Marine Corps, and other DOD agencies also participate in national security space activities. The Office of National Security Space Integration, which reports to the Under Secretary of the Air Force and Director, NRO, facilitates integration of military and intelligence activities and coordinates implementation of best practices among agencies. 
The management and organization of national security space programs and activities has received continual congressional attention since the early 1990s. In 1995, DOD responded to congressional concerns about the lack of a coherent national security space management structure by consolidating certain space management functions within a new Office of the Deputy Under Secretary of Defense for Space. However, in 1998, under a defense reform initiative, DOD abolished this office and dispersed the management functions among other DOD offices, primarily the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence and the Under Secretary of Defense for Acquisition, Technology, and Logistics. The Space Commission noted that the United States has an urgent interest in protecting the access to space and developing the technologies and capabilities to support long-term military objectives. It stressed the need to elevate space on the national security agenda and examine the long-term goals of national security space activities. The Space Commission provided a total of 16 recommendations, including a call for presidential leadership to set space as a national security priority and provide direction to senior officials. However, 13 of the Space Commission’s recommendations were directed at DOD and focused on near- and mid-term management and organizational changes that would merge disparate activities, improve communication channels, establish clear priorities, and achieve greater accountability. The Secretary of Defense directed a number of organizational changes to improve leadership, responsibility, and accountability for space activities within DOD in response to the Space Commission’s report. After some delays, most are complete or nearing completion, although it is too early to assess the effects of these changes. 
The Space Commission found that DOD’s organization for space was complicated with various responsibilities delegated to different offices within the department. For example, the Space Commission determined that it was not possible for senior officials outside DOD to identify a single, high-level individual who had the authority to represent DOD on space-related matters. Further, the commission noted that no single service had been assigned statutory responsibility to “organize, train, and equip” for space operations. The commission provided 13 recommendations to DOD intended to improve the focus and accountability within the national security space organization and management. As we reported in our June 2002 assessment, the Secretary of Defense decided to implement 10 of the Space Commission’s 13 recommendations while opting to take alternative actions for the remaining 3. In a May 8, 2001, letter to the defense and intelligence oversight committees, the Secretary stated that the department would not implement the Space Commission’s recommendation to create an Under Secretary of Defense for Space, Intelligence, and Information. DOD also did not seek legislation to give the Air Force statutory responsibility to organize, train, and equip space forces, as recommended. Rather, the Secretary said the department would address these organizational and leadership issues with alternative actions. For example, DOD elected not to create a new office to integrate military and intelligence research efforts, deciding instead to increase coordination among existing offices. At the time of our last report, DOD had completed action to implement six of the recommendations, and four were in the process of being implemented. DOD has now completed action on three more, with actions on the remaining recommendation still in progress. See appendix I for information on the status of each of the Space Commission’s 13 DOD-specific recommendations. 
To address some of the Space Commission’s specific recommendations as well as additional opportunities that the department identified for improving the organization and management of its space activities, the Secretary of Defense issued a memorandum in October 2001 that directed actions to: 
assign the Under Secretary of the Air Force as Director, NRO; 
designate the Under Secretary of the Air Force as the Air Force Acquisition Executive for Space; 
delegate program milestone decision authority for DOD space major defense acquisition programs and designated space programs to the Under Secretary through the Secretary of the Air Force; 
realign the Office of the National Security Space Architect to report to the Director, NRO (who is also the Under Secretary of the Air Force), and make the Architect responsible for ensuring that military and intelligence funding for space is consistent with policy, planning guidance, and architectural decisions; 
designate the Secretary of the Air Force as DOD executive agent for space, with redelegation to the Under Secretary of the Air Force; 
assign the Air Force the responsibility for organizing, training, equipping, and providing forces as necessary for the effective prosecution of offensive and defensive military operations in space; 
realign Air Force headquarters and field commands to more closely integrate space acquisition and operations functions; and 
assign responsibility for the Air Force Space Command to a four-star officer other than the Commander of the U.S. Space Command (now merged with U.S. Strategic Command) and North American Aerospace Defense Command to provide dedicated leadership to space activities. 
By appointing the Under Secretary of the Air Force as the Director, NRO, and the Air Force acquisition executive for space, as well as designating the Under Secretary as DOD’s executive agent for space, the Secretary of Defense provided a focal point for DOD space activities. 
The Space Commission recommended the designation of a single person as Under Secretary of the Air Force; Director, NRO; and Air Force acquisition executive for space to create a senior-level advocate for space within DOD and the Air Force and to represent space in Air Force, NRO, and DOD planning, programming, and budgeting processes. In addition, consolidating the authority to acquire space systems for both the Air Force and NRO is intended to better align military and intelligence space acquisition processes. In explaining the rationale for this change, senior DOD officials told us that the barriers between military and intelligence space activities are diminishing because of the current need to support the warfighter with useful information from all sources. In an effort to improve space acquisitions and operations, joint Air Force and NRO teams have been working to identify the best practices of each organization that might be shared, according to Air Force and NRO officials. These teams have recommended to the Under Secretary of the Air Force 37 practices they believe to be best practices in the areas of acquisition, operations, launch, science and technology, security, planning, and programming. Joint efforts to identify best practices are continuing in the areas of requirements, concepts of operation, personnel management, financial management, and test and evaluation. The Space Commission recommended formal designation of the Air Force as executive agent for space with departmentwide responsibility for planning, programming, and acquisition of space systems, and the Secretary of Defense stated in his October 2001 memorandum that the Air Force would be named DOD executive agent for space within 60 days. However, the directive formally delineating the Air Force’s new roles and responsibilities in this area, and those of the other services, has not been finalized. Air Force officials said they hoped it would be finalized in early 2003. 
Until the directive designating the Air Force as executive agent for DOD space is signed, the Air Force cannot formally assume the executive agent duties that the Space Commission envisioned. In the meantime, the Air Force has begun to perform more planning and programming duties. During the delay in the formal delegation of authority, the Air Force and other services and defense agencies have begun collaborating on space issues in accordance with the Secretary’s intent. After the directive is released, the executive agent for space expects to be tasked to develop an implementation plan that will articulate processes and procedures to accomplish DOD’s space mission. The Air Force has realigned its headquarters to support the Air Force Under Secretary’s efforts to integrate national security space activities and perform new duties as the executive agent for DOD space. The Under Secretary of the Air Force has established an Office of National Security Space Integration to implement the executive agent duties across DOD, coordinate the integration of service and intelligence processes and programs, develop streamlined national security space acquisition processes, and lead the development of a management framework for space activities. Although this office is located within the Air Force and NRO, it will consist of members from all the services and some defense agencies. Figure 1 shows DOD’s and the Air Force’s new organization for supporting national security space activities. Also in response to a Space Commission recommendation, the Air Force reorganized its field commands to consolidate the full range of space activities—from concept and development, to employment and sustainment of space forces—within the Air Force Space Command. To consolidate the acquisition and operations functions, the Air Force Space and Missile Systems Center was separated from the Air Force Materiel Command and became part of the Air Force Space Command. 
According to the Commander, Air Force Space Command, the consolidation of these functions in the same command is unique and should improve communications while exposing personnel to both acquisition and operations. According to Air Force officials, this new arrangement will enable space system program managers who have been responsible for acquiring space systems—such as the Global Positioning System—to help generate new concepts of operations. Conversely, the arrangement will also enable space system operators to develop a better understanding of the acquisitions processes and acquire new skills in this area. To provide better visibility of DOD’s and the Intelligence Community’s level and distribution of fiscal and personnel resources, as the Space Commission recommended, DOD and the Intelligence Community developed a crosscutting or “virtual” major force program by aggregating budget elements for space activities across DOD and the Intelligence Community. This virtual space major force program identifies and aggregates space-related budget elements within DOD’s 11 existing major force programs. According to DOD officials, having a crosscutting major force program for space activities is logical because space activities span multiple program areas, such as strategic forces and research and development. The space major force program covers spending on development, operation, and sustainment of space, launch, ground, and user systems, and associated organizations and infrastructure whose primary or secondary missions are space-related. DOD included the space major force program in its Future Years Defense Program for fiscal years 2003 to 2007 and identified $144 billion in space spending planned for this period. The Under Secretary of the Air Force said he used the virtual major force program to facilitate examination of the services’ space program plans and budgets. 
The Secretary of Defense tasked the National Security Space Architect with reporting on the consistency of space programs with policy, planning, and architecture decisions. During the spring and summer of 2002, the Architect led the first annual assessment of the programs included in the space virtual major force program and some related programs. Teams of subject matter experts from DOD, the Intelligence Community, and civilian agencies involved in space programs reviewed the services’ and Intelligence Community’s proposed budgets for future space spending to identify capability gaps and redundancies while evaluating whether budget requests adhered to departmental policy and guidance. The Architect provided the classified assessment results to the Under Secretary, as well as the Secretary of Defense, the Director of Central Intelligence, and other senior DOD and Intelligence Community leaders, to support decision-making on space programs during the fiscal year 2004 budget review. It is too early to assess the effects of DOD’s organizational changes for its space programs because new institutional roles, processes, and procedures are still evolving, and key documents are not yet finalized. According to DOD officials, some delays in implementing the recommendations can be attributed to the time needed to select and confirm the pivotal senior leadership for national security space and for the new leaders to direct changes in processes and procedures. For example, the Senate confirmed the Under Secretary of the Air Force on December 7, 2001, and new directorates within his office were established on April 15, 2002, to begin national security space integration and acquisition activities. Similarly, DOD created a separate four-star position of Commander, Air Force Space Command, separating command of the Air Force Space Command from the Commander, U.S. Space Command/North American Aerospace Defense Command. 
However, the new Commander, Air Force Space Command, did not assume command until April 19, 2002. Developing policy and guidance to implement organizational changes took longer than the 30 to 120 days specified in the Secretary of Defense’s memorandum of October 18, 2001 (see app. II for a time line of major events in the reorganization). For example, the directive that would designate the Air Force as executive agent for DOD space is still in draft more than a year after the memorandum was issued. As DOD’s efforts to build a more coherent organizational structure for managing national security space activities near completion, the department’s progress in addressing long-term management challenges has varied. DOD increased funding for space science and technology activities in fiscal year 2004 and plans future increases. Also, the department is drafting a new acquisition process for space systems that is intended to reduce the time needed to develop and acquire such systems, but the process has not been fully tested and validated. Finally, DOD has not established a human capital strategy to develop and maintain a cadre of space professionals who will guide the space program in the future, and none of the services has developed and implemented its own space cadre plan or established time frames for completing such a plan. Between fiscal years 2003 and 2007, DOD plans to increase its budget for space science and technology by almost 25 percent, from about $975 million in 2003 to over $1.2 billion in 2007. In addition, DOD plans by 2009 to spend over $1.8 billion for space science and technology, or almost two times the fiscal year 2003 budget. 
According to the Director of the Defense Advanced Research Projects Agency (DARPA), the Space Commission report’s emphasis on increased investment in space-based technology was the impetus for significant increases in space research and development funding over the next 5 years—from $235 million in fiscal year 2003 to $385 million by fiscal year 2007, as shown in the fiscal year 2004 President’s budget request. Under current plans, DARPA will receive most of these funds. The Director said that over the years the agency’s concentration on space-based technologies has varied and that just prior to the Space Commission report, ongoing space efforts were at a low point. The Director also said that investments in space are consistent with the agency’s charter to solve national-level technology problems, foster high-risk/high-payoff military technologies to enable operational dominance, and avoid technological surprise. Innovative space technology studies currently underway, including the “Responsive Access, Small Cargo, Affordable Launch” and “Orbital Express” efforts, are a direct result of the Space Commission report. The Air Force is the next largest recipient of increased funding for space research and engineering, with an expected budget increase of more than $89 million between 2003 and 2007. The Army and the Navy have smaller shares of space-related research funding and, according to service officials, project small budget increases. DOD recently completed a departmentwide assessment of space science and technology that it intends to use to direct the priorities of future research. However, whether planned funding increases will materialize in view of other departmental priorities is uncertain. DOD is taking steps it hopes will streamline the acquisition process and reduce the time it takes to acquire the space-based systems required by the national security space community. 
The Air Force has developed a new space system acquisition decision process designed to shorten time frames for technical assessments and facilitate faster decision-making. This approach will establish key decision points based on program maturity and provide more oversight earlier in the development of complex satellite technology. It will also reduce the number of independent cost estimates performed at each key decision point from two to one and employ a full-time, dedicated independent assessment team to perform technical reviews in less time at each decision point. As the milestone decision authority, the Under Secretary of the Air Force determines whether major space systems should proceed to the next phase of development. The Under Secretary serves as chair of the Defense Space Acquisitions Board, which oversees the new acquisition process. However, the guidance for executing acquisition procedures is still in draft, and the draft acquisition process is still being validated. DOD has used the new process for milestone decisions on three space systems—the National Polar-orbiting Operational Environmental Satellite System, the Mobile User Objective System, and the latest generation of Global Positioning System satellite vehicles—that had been started under the previous acquisition system. Officials said that the process had been successful in that it enabled the Air Force to make better and faster decisions by identifying problems early that needed to be resolved before a system proceeded into the next development phase. The Space Based Radar is expected to be the first system to begin the acquisition process under the new approach. Early identification of potential problems is essential in the acquisition process, particularly with regard to issues such as design stability, sufficient funding, requirement stability, realistic schedules, and mature technology. 
As we have previously reported, DOD programs, including some space programs, have experienced problems when these elements have not been sufficiently addressed. For example, the Advanced Extremely High Frequency satellite program continued to move through the acquisition process despite frequent changes to its requirements and experienced cost overruns and schedule delays. The Space Based Infrared System also experienced cost increases and schedule delays. Congress has repeatedly expressed concerns about the cost overruns and schedule delays of these defense space programs and expected that any changes underway to reduce decision cycle time for space programs should not detract from the ability of the Office of the Secretary of Defense and the Joint Requirements Oversight Council to provide meaningful oversight of space programs. Consequently, in the National Defense Authorization Act for Fiscal Year 2003 (section 911(b)), Congress directed the Office of the Secretary of Defense to maintain oversight of space acquisitions and submit a detailed oversight plan to Congress by March 15, 2003. DOD does not have a strategic approach for defense space personnel that could better guide the development of the individual services’ space cadre plans to support the department’s strategic goals. The Space Commission noted that from its inception the defense space program has benefited from world-class scientists, engineers, and operators, but that many experienced personnel are now retiring and the recruitment and retention of qualified space personnel is a problem. A workforce that is not balanced by age or experience puts the orderly transfer of institutional knowledge at risk. 
Further, the commission concluded that DOD does not have the strong military space culture—including focused career development, education, and training—it needs to create and maintain a highly trained and experienced cadre of space professionals who can master highly complex technology as well as develop new concepts of operation for offensive and defensive space operations. In October 2001, the Secretary of Defense directed the military services to draft specific guidance and plans for developing, maintaining, and managing a cadre of space professionals to provide expertise within their services and joint organizations. However, the Secretary did not direct development of a departmentwide space human capital strategy to ensure that national security space human capital goals, roles, responsibilities, and priorities are clearly articulated so that the services’ implementation plans are coordinated to meet overall stated requirements. The Army, Navy, and Air Force have each produced initial guidance on developing and managing their own space professionals. However, none of these documents provides details about how the individual services will proceed with developing and implementing plans to address service and joint force requirements in future years, or time frames for implementing space cadre management plans. The services’ plans are still being developed, and we were not afforded access to the draft plans to assess their completeness and viability, nor were we given firm estimates of when they might be completed and implemented. However, service officials told us that planning to date has focused on the military officer corps and has not included the enlisted or civilian personnel who also support space operations. In conjunction with space cadre planning, the services outlined some initiatives to increase space education for all military personnel, but these have not been fully implemented. 
While each service has separately begun planning to build and maintain a service space cadre, the services have not yet begun to coordinate their plans across DOD to ensure a shared direction and common time frames. The Under Secretary of the Air Force said that other areas of space operations, such as acquisitions, have taken priority but that he plans to devote more attention to this area to achieve greater progress. The Department of Defense has produced some policies and guidance to implement its space program, but it has not completed a comprehensive strategy or an implementation plan to guide the program and monitor its results. DOD is in the process of developing some elements of a results-oriented management framework, such as a national security space strategy, an annual national security space plan, and a directive formalizing the Air Force’s role as executive agent for space. According to officials in the Office of National Security Space Integration responsible for developing the strategy and plan, these documents, along with the annual assessment of the services’ space budget proposals, will enable the executive agent for DOD space to track the extent to which resources are supporting national security space priorities. Officials also said that, as executive agent for space, the Air Force plans to report on its progress to officials in the Office of the Secretary of Defense, although the content and process that will be used are still being developed. However, DOD did not provide us drafts of the national security space strategy and plan or the executive agent directive; therefore, we could not assess whether these documents constitute a results-oriented management framework or specifically how DOD will provide department-level oversight of the Air Force’s activities as executive agent for space. 
Management principles embraced in the Government Performance and Results Act of 1993 provide agencies at all levels with a framework for effectively implementing and managing programs and shift the program management focus from measuring program activities and processes to measuring program outcomes. Table 1 more fully describes these principles and their critical elements. These principles and critical elements, when combined with effective leadership, can provide a results-oriented management framework to guide programs and activities at all levels. These management tools are designed to give agencies, Congress, and other decisionmakers a means to understand a program’s evolution and implementation as well as to determine whether initiatives are achieving their desired results. DOD has established some elements of a results-oriented management framework for space programs that are embedded in various directives, guidance, and instructions. For example, the September 30, 2001, Quadrennial Defense Review forms the backbone for the development and integration of DOD’s missions and strategic priorities and details six operational goals, including one to enhance the capability and survivability of U.S. space systems. DOD views the review as its strategic plan, in compliance with Government Performance and Results Act requirements, and, as such, the review forms the foundation from which DOD’s results-oriented performance goals are identified and progress is measured. Additionally, the September 1996 National Space Policy prepared by the White House National Science and Technology Council provides broad guidance for civil, commercial, national security, and other space sectors. Although DOD’s space goals are linked to overall national military policies, DOD has not developed all elements of a management framework to effectively manage its space operations or measure their progress. 
The Office of National Security Space Integration is in the process of developing a national security space strategy and plan that will set out priorities to guide planning and budgeting across the department and better integrate military and intelligence space activities. The strategy and plan will form a roadmap for achieving space goals in the near and mid term, according to an official developing these documents. These documents will be key to setting research, development, and operational goals and integrating future space operations in the military and intelligence communities. According to National Security Space Integration Office officials, the national security space strategic plan will be linked to the overarching National Space Policy and to existing long-range space strategies and plans, such as those of the NRO, the National Security Space Architect, and the military services. These officials told us that the national security space strategy and plan, together with the National Security Space Architect’s annual assessment of whether the services’ budgets are consistent with policy, planning guidance, and architectural decisions, will be key components of their space management approach. However, officials said that they have not yet determined performance goals and measures to assess program implementation progress and ascertain whether program initiatives are achieving their desired results. Until such plans are finalized, DOD cannot be sure that it is investing its resources in the best way possible to support current and future requirements for space operations. National Security Space Integration Office officials said they hope to release the national security space strategy and plan in early 2003, but they did not provide us a copy of the draft strategy or plan. Therefore, we could not determine the extent to which these documents contain all the key elements of a results-oriented management framework. 
A framework to lead and manage a space program effectively requires a program-specific strategy and performance plan to implement actions. To date, however, DOD has not established specific space objectives that are linked to overall program goals and resource requirements, nor has it established specific performance goals or other mechanisms to measure program outcomes. In its 2000 Annual Report to the President and Congress, DOD provided a performance plan for achieving its annual performance goals, but it did not include performance goals and measures for space activities in that report. Without a results-oriented management plan linked to higher-level strategies, the services do not have clearly defined space objectives and milestones to guide their initiatives, nor does DOD have a mechanism to ensure successful accomplishment of integrated efforts without gaps and duplication. For example, lacking an integrated national security space strategy and plan, the services developed their fiscal year 2004-2009 program budget plans without clearly defined objectives and milestones for space activities. In addition, the National Security Space Architect’s assessment of defense and intelligence space programs’ planned budgets for fiscal years 2004-2009 was complicated by the lack of an integrated overall strategy with performance measures. Instead, the Architect relied on multiple policies, studies, architectures, and guidance documents to identify overall effectiveness goals. Without an overall space strategy, including results-oriented goals and performance measures, DOD cannot fully gauge its progress toward increasing the effectiveness of national security space activities. Moreover, it is not clear which DOD office will be responsible for assessing the efficacy of the Air Force as executive agent for space or for evaluating progress in achieving performance goals, once they are established. 
Witnesses before the Space Commission expressed concerns about how the Air Force would treat space activities and the extent to which it would fully address the requirement that it provide space capabilities to the other services. Several organizations within the Office of the Secretary of Defense participate in ongoing oversight of space activities, including the Offices of the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence); the Under Secretary of Defense (Comptroller); the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Under Secretary of Defense (Policy); and the Director (Program Analysis and Evaluation). While each office has oversight responsibilities for different aspects of space activities, no one office is charged with ensuring that the Air Force’s space program is having the desired results. DOD’s guidance on executive agents specifies that the principal assistant(s) in the Office of the Secretary should assess executive agents’ performance no less frequently than every 3 years, although it does not specify the mechanism to be used for the assessment. According to DOD officials, the principal assistants for the executive agent for space—the Air Force—are the offices named above. The issue of how the Air Force’s progress as executive agent should be assessed is still being discussed, and DOD has not decided on the process and content by which the national security space program will be independently evaluated or whether one office will be designated to lead such an evaluation. 
In commenting on a draft of this report, DOD said that currently the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence has responsibility to establish policy and provide direction to the DOD components on command, control, communications, and intelligence-related space systems and serves as the primary focal point for staff coordination within DOD and with other government agencies. However, it is not clear from the comments whether this office will be tasked with oversight of the Air Force’s activities as executive agent for DOD space. DOD has charged the Air Force with leadership responsibilities for space activities and has taken some actions that have the potential to improve its management ability. While DOD plans to increase investment in technology, has developed a new acquisition strategy, and has directed the services to begin some initial planning on the national security space cadre issue, more remains to be done to meet these long-term management challenges critical to success in national security space activities. In the area of creating a space cadre, however, DOD lacks an overall strategic approach to human capital for managing its space forces, leaving the services at risk of developing human capital plans that do not meet the department’s overall national security space needs. Moreover, no time frames have been established for developing coordinated plans. Furthermore, the department does not have a complete results-oriented management framework to assess the results of the changes in its organization and processes and to gauge its progress toward achieving its long-term goals. As a result, the services and Intelligence Community continue to develop national security space programs based on their own requirements, without the benefit of overarching guidance on national security space goals, objectives, and priorities. 
Also, in its fiscal year 2000 performance report that accompanied its budget, the department did not include performance goals and measures for space activities, which would be a mechanism to highlight program progress and signal the relative importance of national security space activities. Although the Under Secretary of the Air Force, as DOD’s focal point for space, is responsible for leading the implementation of the national security space strategy and plan, questions have been raised about the extent to which the Air Force will fairly address the needs of the other services and defense agencies. Furthermore, DOD has not specified an oversight mechanism at the Secretary of Defense level to periodically assess the progress of the Air Force in achieving the department’s goals for space activities and in addressing the requirements of the other services and defense agencies. Without such oversight, it will be difficult for DOD to know whether the changes made are having the desired results of strengthening national security space activities. 
To improve the management of national security space activities, we recommend that the Secretary of Defense take the following actions: 
require the executive agent for DOD space, in conjunction with the services, to establish a departmentwide space human capital strategy that includes goals and time lines to develop and maintain a cadre of military and civilian space professionals; 
require the executive agent for DOD space to develop a comprehensive management framework for space activities that includes a results-oriented national security space strategy tied to overall department-level space goals, time lines, and performance measures to assess space activities’ progress in achieving national security space goals; 
include performance goals and measures for space activities in DOD’s next departmentwide performance report; and 
designate an oversight entity in the Office of the Secretary of Defense to periodically assess the progress of DOD’s executive agent in achieving goals for space activities. 
We further recommend that the Secretary of Defense direct the Secretaries of the Army, the Navy, and the Air Force to review, and as necessary adjust, service cadre plans to ensure they are linked to the department’s space human capital strategy when completed. In its comments on our draft report, DOD agreed with our recommendations to establish a departmentwide space human capital strategy; develop a management framework for space activities that includes a results-oriented national security space strategy tied to overall department-level space goals, time lines, and performance measures; include goals and measures for space activities in the department’s next performance report; and designate an oversight entity in the Office of the Secretary of Defense to assess the progress of DOD’s executive agent in achieving goals for space activities. 
In its comments, DOD stated that it is already in the process of developing strategies and plans to address the issues of strategic planning—including goals, time lines, and performance measures—and developing space professional personnel. DOD partially agreed with our recommendation that the military services’ space cadre plans be linked to the department’s space human capital strategy when completed, stating that the services are already drafting separate plans that will be synchronized and linked to an overall national security space plan, and that the services should not wait to complete their own plans. We agree that development of an overall plan can logically take place concurrently with service planning and have reworded our recommendation accordingly. The intent of our recommendation to develop an overall human capital strategy and service plans that are appropriately linked to the overall strategy is to ensure that the services and defense agencies provide adequate training to meet service and defensewide requirements. Furthermore, with an integrated approach, the service plans should offer training programs that minimize duplication of effort and reduce critical gaps of coverage to effectively create and maintain a capable space cadre across the department. DOD’s comments are included in this report in appendix III. DOD also provided technical clarifications, which we incorporated as appropriate. Our scope and methodology are detailed in appendix IV. We performed our work from June 2002 to February 2003 in accordance with generally accepted government auditing standards. Contacts and staff acknowledgements are listed in appendix V. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Chairman of the Joint Chiefs of Staff; the Commander, U.S. 
Strategic Command; the Director, Defense Advanced Research Projects Agency; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-6020 if you or your staff have any questions concerning this report. The Secretary of Defense agreed with the Space Commission’s finding that the Department of Defense (DOD) needed a new and comprehensive national security space management approach to promote and protect U.S. interests in space. In a May 8, 2001, letter to the leaders of the defense and intelligence oversight committees, the Secretary informed Congress that he would take actions to improve DOD’s management structure and organization for national security space activities. These actions largely represented organizational and management changes the Space Commission recommended to improve DOD’s focus on national security space activities and better coordinate military and intelligence space activities. We reported in June 2002 that DOD had implemented or was in the process of implementing 10 of the 13 recommendations the Space Commission directed to it. At that time, DOD had completed action on six recommendations and was in the process of implementing four others. The Secretary of Defense chose not to implement three of the commission’s recommendations and instead opted to (1) establish a focal point for space within the Air Force rather than create an Under Secretary of Defense for Space, Information, and Intelligence; (2) increase the Air Force’s responsibilities by department directive rather than requesting legislative change; and (3) direct existing organizations to conduct innovative space research and development rather than create a new organization to do so. As table 2 shows, DOD has implemented or is nearing implementation of these 10 recommendations. 
DOD has completed actions to implement three recommendations that were categorized as “in progress” in our June 2002 report, as designated by the arrows in the table. Only the recommendation that the Air Force be named executive agent for DOD space remains to be finalized. However, the Air Force has taken on more leadership responsibilities over the last year based on a memorandum that expressed the Secretary’s intent to have the Air Force become the DOD executive agent for space. Key events included the following:

- Space Commission report published.
- Secretary of Defense sent letter to Congress detailing intended actions.
- Air Force Space and Missile Systems Center realigned from Air Force Materiel Command to Air Force Space Command.
- Secretary of Defense issued memorandum directing actions and time lines for implementing selected Space Commission recommendations.
- December 13, 2001: Under Secretary of the Air Force sworn in, after confirmation by the Senate, and appointed Director, National Reconnaissance Office, by the Secretary of Defense and the Director of Central Intelligence.
- Under Secretary of Defense (Acquisition, Technology and Logistics) promulgated policy memorandum directing the DOD research community to undertake research and demonstration of innovative space technologies and systems.
- Under Secretary of the Air Force designated to be Air Force Acquisition Executive for space.
- February 14, 2002: Under Secretary of Defense (Acquisition, Technology and Logistics) delegated milestone decision authority for DOD major space programs to the Secretary of the Air Force with authority to redelegate to the Under Secretary of the Air Force.
- “Virtual” major force program for space included in DOD’s Future Years Defense Program.
- Commanding general assumed command of the Air Force Space Command separate from U.S. Space Command and North American Aerospace Defense Command.
- GAO interim assessment of the status of DOD’s reorganization of space activities.
- National Security Space Architect space program assessment.

To update the status of actions the Department of Defense (DOD) has taken to implement the Space Commission’s recommendations, we identified and monitored changes in DOD’s organization and management of space by reviewing DOD and service briefings and internal department directives and memoranda that identified issues and directed initiatives for improving management of space activities. We held discussions with officials from the Offices of the Assistant Secretary of Defense (Command, Control, Communications and Intelligence), the Under Secretary of Defense (Acquisition, Technology, and Logistics), and the Under Secretary of Defense (Comptroller/Chief Financial Officer) to discuss department guidance on implementing the recommendations and implementation activities. To identify actions the services took to improve management of space activities, we reviewed documentation of implementation actions and held discussions with Army, Navy, Air Force, and Marine Corps officials. Offices represented were the Under Secretary of the Air Force; the National Security Space Architect; the Air Force Space Command; the Air Force Space and Missile Systems Center; the 14th Air Force; the Army Space and Missile Defense Command; the Naval Network and Warfare Command; and Headquarters Marine Corps. Sites visited included the Pentagon, Washington, D.C.; Peterson Air Force Base and Schriever Air Force Base, Colorado Springs, Colorado; Los Angeles Air Force Base, Los Angeles, California; and Vandenberg Air Force Base, Lompoc, California. The National Reconnaissance Office provided written answers to questions we submitted. 
To determine progress in addressing some of the long-term space management challenges, we discussed challenges DOD, the Space Commission, other experts, and our previous reports have identified with officials from the Office of the Secretary of Defense; the Army; the Air Force; the Navy; the National Security Space Architect; the U.S. Strategic Command; the U.S. Northern Command; the Joint Staff; and outside experts. Given time and resource limitations, we focused our work on three of the many long-term management challenges to DOD’s space program—investing in science and technology, improving the timeliness and quality of space acquisitions, and building and maintaining a cadre of space professionals. To assess progress in investing in technology, we reviewed documentation and held discussions with officials from the Defense Advanced Research Projects Agency; the Office of the Director, Defense Research and Engineering; the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Naval Network and Warfare Command; the Naval Research Laboratory; and the Air Force Research Laboratory. To assess progress in implementing its acquisition initiatives, we reviewed documentation and discussed the initiatives with officials representing the Office of the Under Secretary of the Air Force and the Air Force Space Command. In addition, we discussed education and training initiatives with officials from the Air Force Space Command; Air University; the Air Force Academy; the Army Space and Missile Defense Command; the Army Command and General Staff College; the Office of the Chief of Naval Operations; the Naval Academy; the Naval Postgraduate School; and Headquarters Marine Corps. 
To assess whether DOD had a management framework that will foster the success of its improvement efforts, we reviewed departmental plans and strategies that set organizational goals and discussed oversight and management activities—including setting strategic goals, developing measures of progress, and planning time lines—with senior DOD and service officials from offices that have major responsibilities for managing space activities, including the Offices of the Assistant Secretary of Defense (Command, Control, Communications and Intelligence), the Under Secretary of the Air Force, and the Air Force Space Command. We used the principles embodied in the Government Performance and Results Act of 1993 as criteria for assessing the adequacy of DOD’s management framework to effectively manage and oversee the space program. In addition to the names above, Margaret Morgan, MaeWanda Micheal-Jackson, Robert Poetta, and R.K. Wild made key contributions to this report.
In January 2001, the congressionally chartered Commission to Assess United States National Security Space Management and Organization--known as the Space Commission--reported that the Department of Defense (DOD) lacked the senior-level focus and accountability to provide guidance and oversight for national security space operations. Congress mandated that GAO provide an assessment of DOD's actions to implement the Space Commission's recommendations. Thus, GAO (1) updated its June 2002 assessment of DOD's actions to address the Space Commission's recommendations, (2) ascertained progress in addressing other long-term management concerns, and (3) assessed the extent to which DOD has developed a results-oriented management framework for space activities. Since June 2002 when we reported that DOD intended to implement 10 of the Space Commission's 13 recommendations to improve the management and organization of space activities and had completed implementation of 6, DOD has completed action on 3 more recommendations. The only action intended but not completed at the conclusion of our work is designation of the Air Force as the executive agent for DOD space programs. Most of the changes represent organizational actions to improve DOD's ability to manage space. For example, DOD has: (1) created a focal point for integrating DOD space activities by appointing the Under Secretary of the Air Force also as Director, National Reconnaissance Office; (2) realigned Air Force space activities under one command; and (3) created a separate position of Commander, Air Force Space Command, to provide increased attention to the organization, training, and equipping for space operations. It is too early to assess the effects of these organizational changes because new institutional roles, processes, and procedures are still evolving. 
DOD still faces challenges in addressing long-term management problems, such as increasing its investment in innovative space technologies, improving the timeliness and quality of acquisitions, and developing a cadre of space professionals. DOD has initiated some actions to address these concerns, such as increasing resources for research on space technology and developing a new acquisition process, and the services have begun some plans for developing space professionals. However, most planned actions are not fully developed or implemented. Further, DOD has not developed an overarching human capital strategy for space that would guide service plans to ensure all requirements for space professionals are met. DOD does not have a comprehensive, results-oriented management framework for space activities. The Air Force is developing some policies and guidance that could be part of a management framework for space activities. However, we did not have access to the draft documents to determine whether they will contain results-oriented elements--such as a strategy, performance goals and measures, and timelines--that will enable DOD to better focus its efforts and assess its progress in attaining its space goals. Further, no single department-level entity has been charged with providing oversight of the Air Force's management of its executive agent for space responsibilities to assess its progress in achieving space goals while ensuring that all services' requirements for space capabilities are fairly considered.
When a veteran submits a claim for disability benefits to a VBA regional office, Veterans Service Center staff process the claim in accordance with VBA regulations, policies, procedures, and guidance. A Veterans Service Representative (VSR) in a Pre-Determination Team develops the claim; that is, assists the claimant in obtaining sufficient evidence to decide the claim. The claim then goes to a Rating Team, where a Rating Veterans Service Representative (also known as a Rating Specialist) makes a decision on the claim, based on the available evidence and VBA’s criteria for benefit entitlement. VSRs also perform a number of other duties, including establishing claims files, authorizing payments to beneficiaries and generating notification letters to claimants, conducting in-person and telephone contacts with veterans and other claimants, and assisting in the processing of appeals of claims decisions. VBA’s administrative costs, including personnel costs, are funded through VA’s General Operating Expenses account. VBA, as part of VA’s annual budget justification, asks for specific amounts for each of its programs, including compensation and pension programs. Funding is requested to support an estimated full-time equivalent (FTE) employment level. In fiscal year 2003, VBA spent about $878 million to administer its compensation and pension programs. This funding included support for about 9,350 FTEs. From fiscal year 1998 through 2003, staffing levels for VBA’s compensation and pension programs increased significantly, particularly for staff who process compensation and pension claims at VBA’s 57 regional offices, as shown in figure 1 below. In fiscal year 1998, VBA had 6,770 compensation and pension FTEs; by fiscal year 2003, employment had increased by about 38 percent to 9,352 FTEs. Compensation and pension FTE levels rose by about 900 in fiscal years 2001 and 2002. 
Staffing levels increased because VBA hired hundreds of new rating specialists and VSRs in anticipation of a large number of future retirements. Also, these additional staff helped VBA respond to a sharp drop in the production of rating-related claims decisions in fiscal year 2001. In fiscal year 2002, these decisions rose from about 481,000 to about 797,000, and to about 827,000 in fiscal year 2003. In fiscal year 2003, VBA’s 57 regional offices received about 735,000 rating-related claims from veterans and their families for disability benefits. This included about 167,000 original claims for compensation of service-connected disabilities (injuries or diseases incurred or aggravated while on active military duty) and about 434,000 reopened compensation claims. In addition, about 90,000 original and reopened claims were filed for pensions for wartime veterans who have low incomes and are permanently and totally disabled for reasons not service-connected and for their survivors. In addition, VBA received about 28,000 original claims for dependency and indemnity compensation filed by deceased veterans’ spouses, children, and parents and by survivors of service members who died on active duty. VBA officials stated that productivity improvements, workload changes, and attrition of experienced claims processing staff are considered throughout the annual budget process. However, VBA’s budget justification did not clearly explain how these factors affected its request. Early in this process, the Compensation and Pension Service makes a budget request that is reviewed by VBA’s Office of Resource Management, under the direction of VBA’s Chief Financial Officer, and becomes part of VBA’s total request. VBA’s request eventually becomes part of VA’s overall budget request, which is submitted to the Office of Management and Budget (OMB) for review. VBA’s fiscal year 2005 budget justification identified a number of initiatives and projections that could affect its staffing levels. 
For example, implementing specialized claims processing teams in VBA’s regional offices and consolidating pension maintenance work at three regional offices could affect staffing levels. Also, VBA projected it would receive more disability compensation claims than in previous years, based on such factors as the enactment of concurrent receipt legislation in 2003. Specifically, the fiscal year 2005 budget justification stated that VBA expected to receive about 65,000 claims because of the enactment of legislation that allows military retirees with service-connected disabilities rated at 50 percent or higher to receive both VA disability compensation and military retirement pay. VBA officials said that this estimate was included in their negotiations with OMB. Further, VBA noted that it expects many experienced claims processing staff to leave VBA over the next several years. Despite identifying these factors in its 2005 budget justification, VBA does not specify how such initiatives and projections will affect the number of employees it needs to meet its claims processing performance goals. For example, VBA projected that in fiscal year 2005, the number of original and reopened compensation claims receipts would increase by about 15 and 10 percent respectively from its fiscal year 2004 estimates, and that original and reopened pension receipts would decrease by about 2 percent. However, VBA did not specifically identify how these anticipated workload trends had affected its requested staffing levels or its expected improvements in productivity. VBA’s reduced staffing request was consistent with OMB guidance to agencies to assume increased productivity in their budget requests—for example, to do the same amount of work with fewer employees. However, the budget justification does not describe how its FTE staffing requirements are linked to the specific initiatives and projections that could affect these needs. 
Also, VBA’s fiscal year 2005 budget justification provides no specific information on its compensation and pension claims processing productivity or on its planned improvements in productivity. VBA expressed confidence that it can improve productivity enough to meet its claims processing goals for fiscal year 2005 with fewer employees, despite a projected increase in the workload of compensation claims. To achieve expected improvements in timeliness and accuracy with fewer employees, while receiving more claims for disability compensation, VBA’s claims processing operations will need to become more productive. However, the budget justification included no measurement of productivity, nor did it identify how it planned to achieve the needed productivity improvements. Finally, VBA’s fiscal year 2005 budget justification does not explicitly show the impact of budget decisions to shift funding away from initiatives that could improve productivity; such decisions were based on VBA’s emphasis on meeting the Secretary’s 100-day timeliness goal for deciding rating-related claims. According to VBA officials, in fiscal years 2002 and 2003, nonpayroll funds were shifted to help fund increased FTE employment levels in VBA regional office Veterans Service Centers, which are responsible for processing compensation and pension claims. This was done to increase the number of rating-related claims being decided and to meet the Secretary’s fiscal year 2003 goals for improving timeliness and reducing the backlog of undecided claims. For example, VBA used nonpayroll funds to help support about 300 more FTEs than it had originally requested for fiscal year 2002, and about 400 more FTEs than it had originally requested for fiscal year 2003. Specifically, in fiscal year 2002, VBA requested funding for 7,351 compensation and pension FTEs but reported that it actually used 7,663, and in fiscal year 2003, VBA originally requested funding for 7,532 FTEs but reported that it actually used 7,936. 
According to VBA officials, nonpayroll funds were shifted to help pay increased payroll costs associated with this higher FTE level. In addition, the fiscal year 2003 budget request assumed a 2003 pay raise of 2.6 percent, but the actual pay raise was 3.1 percent. VBA’s fiscal year 2004 and 2005 budgets reflect continued efforts to support as many FTEs as possible through reductions in nonpayroll funding to continue to support improvements in claims processing timeliness. VBA officials identified training and information technology initiatives that have been delayed because of these cuts in nonpayroll funds. These include delays in developing new Training and Performance Support Systems (TPSS) modules and in updating existing TPSS modules to reflect changes in laws, regulations, and procedures. According to its fiscal year 2005 budget justification, VBA is relying on TPSS to improve productivity by helping new claims processing employees develop needed proficiency more quickly and by helping experienced employees maintain their proficiency. Delays in the progress of TPSS implementation could affect VBA’s productivity, because existing modules may not be as useful as revised modules could be, and advanced modules may continue to be unavailable. VBA requested about $2.6 million for TPSS implementation in fiscal year 2005, including funding to update some existing modules. However, VBA did not explain the impact of delays in developing new training modules and updating existing modules. Another delayed initiative that could improve productivity is Virtual VA. This initiative involves the scanning of paper records into electronic claims folders. VBA expects efficiency and timeliness to improve when Virtual VA is fully implemented, in part because electronic claims folders could be transferred among regional offices more quickly. VBA has implemented Virtual VA at its three Pension Maintenance Centers. 
However, VBA requested fiscal year 2005 funding only to maintain the existing Virtual VA program and anticipates that funding will not be available to expand the program beyond the Pension Maintenance Centers. VBA’s justification stated that full implementation of Virtual VA would help improve claim processing and identified the need for additional staff to convert existing paper claims files to electronic format, such as for document preparation and scanning. However, VBA did not request these additional staffing resources and did not explain why. The budget justification stated that VBA expected no improvements in performance because of implementation of Virtual VA at regional offices in fiscal year 2005, but it did not identify how much productivity would be forgone because of VBA’s decision to delay Virtual VA implementation. The Congress relies on the budget justification as VBA’s statement of how it plans to spend the funds it requested. The House and Senate Appropriations Committees have noted that VA’s budget justification represents the agency’s budget plan. VA’s authorizing committees also rely on VBA’s budget justification in conducting their oversight. In February 2004, both the Senate and House Veterans’ Affairs Committees held hearings on VA’s fiscal year 2005 budget request. Each committee then recommended funding levels to its respective Budget Committee. The Appropriations Committees also conduct oversight of VA through the annual budget process. Congressional oversight could be enhanced if VBA’s budget justifications were more transparent. VBA estimated the number of rating-related claims it would receive in fiscal year 2005 based on historical trends and judgments about the likely impacts of various factors on receipts, but it did not project claims complexity, such as average disabilities per claim. 
For example, VBA expected an increase in the number of claims received based on the enactment of legislation allowing some military retirees to receive both military retirement pay and VA disability compensation. Also, VBA officials stated that they factored in the return of veterans from operations in Iraq and Afghanistan, but they were unclear as to how many claims VBA expected to receive from these veterans. Previous VBA projections have been mixed in their accuracy. For fiscal years 2000 through 2004, VBA’s projections of rating-related claims receipts varied from an underprojection of about 11 percent to an overprojection of about 19 percent, as shown in table 1. In its fiscal year 2004 budget justification, VBA projected that it would receive an average of about 57,300 rating-related claims per month. For its fiscal year 2005 budget justification, VBA revised its fiscal year 2004 projection to an average of about 63,900 receipts, based on actual receipts for October and November 2003. VBA’s revised projection underprojected by only about 0.5 percent; actual fiscal year 2004 receipts averaged about 64,300 per month. VBA is working to improve its ability to project its rating claims workload by more accurately estimating the number of such claims it will receive. In June 2000, VBA received the first version of a model for forecasting original and reopened compensation claims receipts, developed under contract by the Institute for Defense Analyses. This model factored into its projections the changing size and demographics of the veteran population. Specifically, the model used historical claim submission data and projections of the veteran population to project VBA’s future workload. Although the model was updated in June 2002, its usefulness is limited by several factors. For example, it projects only original and reopened compensation claims and relies on outdated veteran population data. 
According to a VBA official, VBA’s workload projections for its budget justifications were not based on this model, but the results of the model were used to check VBA’s projections. An expanded model with more recent information is scheduled to be delivered in December 2004. The expanded model will project workload for more types of claims, including all rating-related claims, and will be updated to reflect the 2000 Census. VBA did not project the complexity of its rating-related claims in its fiscal year 2005 budget submission and did not explain the impact of complexity on productivity and requested staffing levels. VBA has noted that disability compensation claims have become more complex because veterans are claiming more service-connected disabilities per claim, and VBA must make a decision whether each disability is service-connected. Meanwhile, the Congress and VA have established presumptions of compensation and pension eligibility that can make some claims less complex. For example, the Congress and VA have identified several types of disabilities (such as type II diabetes) as service-connected based on the presumption that veterans who served in Vietnam were exposed to Agent Orange. Claims based on these disabilities can be simpler to decide because less evidence is needed to prove service connection. VBA did not specifically explain the impact of claims complexity on productivity and staff requirements. VBA provided some data on average number of disabilities for completed compensation claims in its fiscal year 2005 budget justification. However, these data were based on incomplete information. The average number of disabilities per claim was based on calendar years 1998 through 2001 data on completed claims from VBA’s software application for preparing rating decisions, Rating Board Automation (RBA). 
According to a VBA official, the RBA data were incomplete because data on many rating decisions were not transmitted to VBA’s central database for analysis. For example, according to a VBA official, employees who were working from home did not always upload rating information from computer disks into RBA and send the data to VBA’s central database. Also, because making corrections to a rating once it had been entered into the central database was cumbersome, corrections were not always made to the incorrect information that had been entered in the database. VBA began implementing a new rating decision preparation package (RBA 2000) in October 2000. While VBA officials stated that RBA 2000 provides more complete data on rating decisions, it cannot provide data by the end product code, which VBA uses to identify types of claims (for example, original and reopened compensation claims). VBA officials suggested that, in the future, it could measure issues per claim through its new claims development software application, MAP-D. VBA is not planning to provide information on disabilities per claim in its fiscal year 2006 budget justification. It is difficult to determine whether VBA’s confidence that it can meet its key fiscal year 2005 claims processing goals is well founded because its budget justification lacks sufficient information to make such an assessment. VBA set ambitious goals for providing veterans and their families with more timely decisions. At the same time, VBA expects the volume of incoming rating-related claims to increase and to lose experienced claims processing staff to attrition. Nonetheless, VBA requested a reduction in claims processing staff in fiscal year 2005, on top of a decrease in fiscal year 2004. VBA’s budget justification does not clearly explain how its estimated staffing requirements will be affected by its proposed initiatives to improve efficiency and accuracy, projected increases in compensation claims, and staff attrition. 
To achieve its goals in the face of increasing workloads and decreased staffing, VBA will have to rely on productivity improvements. However, its budget justification does not provide information on VBA’s claims processing productivity or how much VBA expects to improve productivity. Consequently, it is difficult to determine if VBA can achieve the productivity improvements it needs or determine how these improvements will be achieved. While VBA’s budget assumes improved productivity, the agency has made budget decisions to delay initiatives that could help improve productivity, in order to protect funding for claims processing staff to help meet its top short-term priority—improving timeliness. Its budget justification could have provided more information on the impacts of decisions to delay these initiatives. Further, VBA’s budget justification did not clearly explain the effects on productivity of claims complexity, such as changes in the average number of disabilities per claim. Consequently, the effect of complexity on VBA’s workload and staffing requirements is unclear. A more transparent budget justification would better inform the Congress’ oversight of VBA, by making it easier to evaluate whether the agency’s administrative budget requests adequately reflect the resources, particularly staff, needed to achieve expected performance. 
To assist the Congress in its oversight of VBA’s compensation and pension claims processing operations, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Benefits to prepare the following information and work with the Committees on Veterans’ Affairs and the Appropriations Subcommittees on Veterans Affairs, Housing and Urban Development, and Independent Agencies on how best to make it available for their use: an explanation of the expected impact of specific initiatives and changes in incoming claims workload on requested staffing levels; information on claims processing productivity, including how VBA plans to improve productivity; and an explanation of how claims complexity is expected to change and the impact of these changes on productivity and requested staffing levels. In its written comments on a draft of this report (see app. II), VA concurred with our recommendation. VA noted that VBA will work closely with VA's Office of Budget, OMB, and congressional authorizing and appropriating committees and subcommittees to ensure that appropriate supporting information is included in its future budget justifications. We will send copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. The report will also be available at GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please call me at (202) 512-7215 or Irene Chu, Assistant Director, at (202) 512-7102. In addition to those named above, Amy Buck, Denise Fantone, Martin Scire, Greg Whitney, and Gregory Wilmoth made key contributions to this report. To assess how the Veterans Benefits Administration (VBA) determined and justified its staffing requests, we focused on VBA’s fiscal year 2005 budget justification to the Congress: specifically, its requests for discretionary administrative funding for VBA’s compensation and pension programs. 
We also reviewed VBA’s budget justifications for fiscal years 2000 through 2004 to identify funding and staffing trends and obtain background information on specific management initiatives and workload trends. We reviewed Office of Management and Budget guidance to agencies on how to prepare their fiscal year 2005 budget requests. In particular, we reviewed guidance on information to be included in budget requests, estimating staffing levels, and the budget formulation process. In addition, we interviewed VBA officials to identify the role of productivity and workload factors in VBA’s internal budget process and to discuss the fiscal year 2005 request. Specifically, we interviewed VBA officials responsible for compensation and pension programs, resource management, and field operations. In some instances, we relied on testimonial evidence from our interviews, along with written responses to detailed questions. To review VBA’s fiscal year 2005 receipts projections, we interviewed Compensation and Pension Service officials responsible for these estimates. We obtained records showing the workload data used to estimate receipts for fiscal years 2004 and 2005 as well as the adjustments VBA made to historical trends in developing its estimates. To assess the accuracy of receipts estimates for fiscal years 2000 through 2004, we reviewed VBA’s budget justifications for those fiscal years. For fiscal years 2000 through 2003 we compared initial estimates of rating-related claims for each fiscal year with actual VBA-wide receipts reported in VBA’s budget justifications. We focused on rating-related claims because they represent the types of claims VBA uses to develop key performance measures, such as timeliness (average days to complete rating-related claims). For fiscal year 2004, we compared VBA’s estimate in its budget justification with VBA’s Distribution of Operational Resources (DOOR) report of receipts for the fiscal year. 
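The projection-accuracy comparison described above amounts to a simple percent-error calculation between projected and actual receipts. A minimal sketch (the claim counts below are hypothetical illustrations, not actual VBA figures):

```python
def projection_error(projected, actual):
    """Percent by which a receipts projection over- or under-shot
    actual receipts: positive means overprojection, negative means
    underprojection (error expressed relative to actuals)."""
    return (projected - actual) / actual * 100

# Hypothetical figures: projecting 445,000 rating-related claims when
# 500,000 actually arrive is an 11 percent underprojection, the kind
# of variance discussed in the report.
error = projection_error(445_000, 500_000)
```

Expressing the error relative to actual receipts is one convention; it could equally be expressed relative to the projection, which would shift the percentages slightly.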
In addition, we reviewed documentation of the Institute for Defense Analyses (IDA) receipt estimation model and discussed the model with IDA and Compensation and Pension Service officials. Because VBA did not use this model to develop the compensation claims receipt estimates in its fiscal year 2004 and 2005 budget justifications, we did not conduct a detailed analysis of the model. In our discussions, IDA officials identified improvements in the model, such as projecting receipts for additional types of claims and using updated population data. We assessed the reliability of end product data in VBA’s Benefits Delivery Network (BDN). The end product code is a key data element because it identifies the type of claim. VBA’s DOOR reports aggregate workload data from the BDN by end product. We reviewed 1997 and 1998 Department of Veterans Affairs (VA) Inspector General reports that identified significant control deficiencies in BDN, leading to questionable reliability of workload and timeliness data. We reviewed VBA’s system for identifying potentially erroneous end product transactions that might lead to inaccurate workload data. VBA adopted this system in response to the Inspector General’s findings. We interviewed the VBA official responsible for sampling transactions to identify questionable end product instances—where a regional office may have improperly taken credit for completing a claim or for completing a claim in less time than was actually required. For example, this sample is designed to identify when a regional office has taken credit for more than one decision on the same claim, leading to overcounting of decision production. We also reviewed sample data from fiscal year 1999 through the second quarter of 2004. We also reviewed how VBA’s Systematic Technical Accuracy Review (STAR) program identifies questionable and erroneous end product codes. 
If a STAR reviewer determines that the end product code for a randomly sampled claim file is questionable or erroneous, the claim will be removed from the STAR sample and be replaced by another claim with the same end product code. For example, if a claim is identified as completed in BDN but no decision has been made on the claim, the claim is removed from the STAR sample. We interviewed a VBA official responsible for STAR and reviewed data on claims removed from the STAR sample in fiscal year 2003 and the first half of fiscal year 2004. We determined that VBA’s end product data are sufficiently reliable for the purposes of this report, which focuses on VBA-wide data. For example, VBA’s sampling shows a decline in questionable end product codes from the second quarter of fiscal year 2003 to the second quarter of fiscal year 2004, from about 5.2 percent to 2.8 percent. However, we are aware that BDN is an aging information system. In its October 2001 report, VA’s Claims Processing Task Force noted this and recommended that VBA maintain BDN until the replacement VETSNET system is fully implemented. However, VBA officials stated that VBA was not planning to make significant investments in maintaining BDN because it will be replaced. We interviewed VBA officials about the reliability of its Rating Board Automation (RBA) system as a source of data on average disabilities per claim. These officials noted that many rating decisions were not included in the RBA data used in VBA’s fiscal year 2005 budget justification. On the basis of this, we determined that the data on average disabilities per compensation claim in VBA’s budget justification were not reliable, and we do not use the data in our report. Finally, we did not assess the reliability of the full-time equivalent data VA reported in its budget submissions.
The Chairman and Ranking Minority Member, Senate Committee on Veterans' Affairs, asked GAO to assist the committee in its oversight of the Veterans Benefits Administration's (VBA) disability compensation and pension programs. This report examines (1) VBA's determination and justification of claims processing staffing levels, and the role of productivity in such determinations, and (2) VBA's projections of future claims workload and complexity. VBA's fiscal year 2005 budget justification did not clearly explain how the agency would achieve the productivity improvements needed to meet its compensation and pension claims processing performance goals with fewer employees. According to VBA officials, productivity improvements, workload changes, and employee attrition were considered in developing its fiscal year 2005 budget request. While some of these factors were identified in VBA's budget justification, they were not linked to the requested full-time equivalent (FTE) employment levels. Also, VBA's justification did not specifically address its claims processing productivity or how much VBA planned to improve productivity. Finally, the justification did not explain the impact of VBA's budgetary decisions on long-term productivity. VBA officials identified information technology improvements and training programs that could help improve productivity but have been delayed because VBA shifted funding from these initiatives to support higher staffing levels. This was done to help meet VBA's shorter-term goal to improve claims decision timeliness, in particular the Secretary of Veterans Affairs' goal to reduce decision time for rating-related claims to an average of 100 days. More transparent budget justifications would better inform congressional oversight of VBA by making it easier to evaluate whether the agency's budget requests reflect the resources, particularly staffing, needed to achieve expected performance. 
VBA estimated the number of claims it expects to receive (receipts) in fiscal year 2005 based on historical workload trends, with adjustments for factors that could affect future receipts, notably the impact of legislation allowing some military retirees to concurrently receive Department of Veterans Affairs (VA) disability compensation and military retirement pay. The accuracy of VBA's projections of rating-related receipts for fiscal years 2000 through 2004 was mixed, varying from underprojecting by about 11 percent to overprojecting by about 19 percent. Actual receipts in fiscal year 2004 exceeded VBA's projections. Meanwhile, VBA did not project claims complexity in its fiscal year 2005 budget justification and did not explain how it expected claims complexity to affect its productivity and requested staffing levels. A claim's complexity can be affected by many factors, such as the number and types of disabilities claimed. VBA's budget justification could be improved if the agency explained how changes in complexity affect workload and staffing requirements.
The FDLP legislation was enacted in August 1993 as part of a broader reform of the federal student loan programs. The first direct loans were made in fiscal year 1994. FDLP makes it possible for students and their families to borrow directly from the federal government through the colleges or other postsecondary institutions the students attend. As of September 30, 2001, about 3.6 million borrowers were repaying more than $45 billion in direct loans. Education services FDLP loans through a contract with Affiliated Computer Services, Inc. (ACS), an information technology systems and services company. As prime contractor, ACS has overall responsibility for FDLP loan servicing. ACS has a subcontract with Academic Financial Services Association Data Corporation (AFSA), under which AFSA has the main responsibility for FDLP loan-servicing operations. Education has an interagency agreement with Treasury for processing direct loan payments. Treasury, in turn, has agreements with and compensates certain commercial banks for processing both paper and electronic payments made by the public to federal agencies. Treasury bills federal agencies only for those services that it considers outside the basic level of service negotiated with the designated commercial banks. In fiscal year 2000, Treasury charged Education $26,353 for these ancillary services. Specifically, this amount was charged for the cost of shipping reports and other material by overnight mail to Education. In February 1998, Education implemented EDA to allow FDLP borrowers to have their loan payments automatically withdrawn from a bank account each month. Then, in November 1999, Education began offering a 0.25 percentage point reduction in the interest rate to borrowers who agreed to repay their loans this way. The number of borrowers who made their loan payment through EDA went from 40,023 in October 1999, before the discount went into effect, to 364,704 in September 2001. 
The cost justification model Education developed used eight key assumptions. These assumptions included such things as the interest rate charged to borrowers, the number of outstanding loans, and two assumptions concerning borrower behavior—estimates of how many borrowers would likely enroll in the program once it was established and the likelihood that borrowers would continue to prepay their loans after enrollment. As the basis for developing these assumptions, Education relied on a variety of factors, including prevailing Treasury interest rates, private sector experiences with electronic debit repayments, conventions economists generally use in the absence of data, and analysis of its student loan portfolio. Table 1 shows the eight key assumptions used in Education’s cost justification model and the basis of each assumption. Borrowers have the right to prepay their loans. If a borrower repays any amount in excess of the amount due, the excess amount is a prepayment. Loan repayments, including prepayments, are credited first to any accrued charges or collection costs and then to outstanding interest and principal. Because prepayments generally reduce a borrower’s principal balance outstanding, the amount of interest that accrues in subsequent months is also reduced, decreasing the amount of interest the borrower pays over the life of the loan. Borrowers who pay by check can easily make repayments in excess of the amount due, for example, by rounding up their repayment, but EDA borrowers have to take extra steps to prepay because only the scheduled repayment amounts are withdrawn from EDA borrowers’ accounts. In practice, the scheduled amount due, calculated without regard to the 0.25 percentage point discount, is withdrawn. Therefore, EDA borrowers do not receive a reduction in the amount they repay each month, but more of each repayment is applied to the principal balance and they will repay their loans faster as a result. 
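The crediting order described above—accrued charges first, then interest, then principal—is what makes prepayment reduce lifetime interest. A minimal amortization sketch, using hypothetical loan terms (not FDLP data) and assuming collection charges of zero:

```python
def amortize(principal, annual_rate, payment, extra=0.0):
    """Simulate monthly repayment in which each payment is credited to
    accrued interest first and then to principal (collection charges
    assumed zero). Returns (months_to_payoff, total_interest_paid)."""
    rate = annual_rate / 12
    months = 0
    total_interest = 0.0
    while principal > 0:
        interest = principal * rate
        total_interest += interest
        principal -= (payment + extra) - interest
        months += 1
    return months, round(total_interest, 2)

# Hypothetical $10,000 direct loan at 6.8 percent with a $115
# scheduled monthly payment, with and without a $25 monthly prepayment.
scheduled_only = amortize(10_000, 0.068, 115)
with_prepayment = amortize(10_000, 0.068, 115, extra=25)
# Prepaying retires the loan in fewer months and, because the principal
# balance falls faster, less interest accrues overall.
```

This is only an illustration of the mechanism; actual FDLP accounts also handle accrued charges and variable rates.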
A variety of factors can affect a borrower’s decision about whether to prepay a loan. Given that FDLP interest rates for direct loans cannot exceed 8.25 percent and the interest paid is tax deductible for borrowers who do not exceed certain income limits, prepaying may not be the best option for all borrowers. For instance, borrowers who have recently entered the workforce when they begin repaying their loans may not have sufficient resources to prepay their loans. Rather than prepay their direct student loans, some borrowers may decide instead to repay higher-interest debt they have accumulated, such as credit card debt, for which interest paid is not tax deductible. While more than twice as many borrowers have enrolled in the EDA program as originally assumed, the percentage of EDA borrowers who have continued to make prepayments remains unknown. In developing its cost justification of the EDA program, Education assumed that a certain percentage of borrowers would likely enroll in the program and that a certain percentage of these borrowers would continue to prepay their loans. Education based these assumptions on reported private sector experiences with electronic debit repayments and on conventions economists use in the absence of data. Education lacks data showing borrowers’ prepayment patterns before and after enrolling in the program, and thus it cannot determine the extent to which its assumption has materialized. Education’s assumption that 5 percent of direct loan borrowers would enroll in the EDA program was an estimate based on the experiences of large, national private sector guaranteed loan lenders’ programs similar to EDA. As of September 2001, the actual percentage of EDA enrollees was closer to 12 percent, which, according to Education’s cost justification, would increase the savings to the government to over $19 million. 
Table 2 shows government savings at the originally assumed 5 percent enrollment rate and our estimates of the savings that Education’s cost justification model would project if higher EDA enrollment rates were to materialize, keeping the prepayment assumption constant. Education also obtained information from private sector lenders on their experiences with continued prepayment by borrowers after enrolling in EDA-like programs. Private lenders reported mixed results. For example, one lender reported that 20 percent of borrowers who previously prepaid continued to prepay after enrolling in such a program while another lender reported that 80 percent continued prepaying, according to Education officials. Given the wide variance in reported experience, Education officials concluded that they could not make an assumption based on these data. Therefore, Education assumed a random distribution of borrowers likely to continue to prepay, with 50 percent of those who had prepaid continuing to do so and 50 percent discontinuing prepayment. While we were able to determine the extent to which Education’s assumption about EDA enrollment materialized, we were unable to determine the extent to which its assumption for continued borrower prepayment materialized. Limitations in Education’s Direct Loan Servicing System prevented us from obtaining data on borrower history of repayment activity. Education can identify borrowers who are paying their loans ahead of schedule and, therefore, likely to be prepaying. However, it cannot identify EDA participants from this data and it lacks trend data showing how frequently and by how much borrowers prepay their loans. Individual borrower payment activity data are available for only the most recent 2 months. Given that borrowers change their prepayment patterns at their convenience throughout their loan repayment period, these data would not have covered a long enough time period to determine how prepayment patterns have changed. 
Consequently, we could not compare the overall patterns of borrowers’ prepayment behavior before or after enrolling in the EDA program. Education has not informed borrowers of the possible cost implications of EDA participation nor has it systematically informed borrowers of their prepayment options. Education has not told borrowers that because repayment through EDA may take longer, they may incur more interest cost over the life of the loan than if they previously prepaid without EDA. While Education has made some information available to borrowers online about where to send supplemental repayments, it has not systematically informed all borrowers of their prepayment options. Further, Education has not updated its borrower publications to inform borrowers of the option and benefits of repaying their loans through EDA. Education has not taken steps to inform EDA borrowers that—even with a reduced interest rate—they could pay more interest over the life of the loan. This could happen if prior to enrolling in EDA, they made repayments that exceeded the scheduled amount due, but after enrolling paid only the amount due. When borrowers establish an EDA, there is no place on the application form to designate an amount in addition to the scheduled payment to be withdrawn each month. To continue their prepayments, such borrowers would have to send a check for any prepayment or make arrangements to continue making prepayments through EDA. The Higher Education Act of 1965, as amended, requires that student loan borrowers be informed that they may prepay all or part of their loans at any time without penalty, but it does not require the disclosure of specific prepayment options. In documents such as the master promissory note and borrower publications, Education informs borrowers that they may prepay their loans. 
In May 2001, after we began our work, Education added information to the direct loan servicing Web site indicating where EDA borrowers wishing to prepay their loans could send supplemental payments. While this information may help borrowers with Internet access, Education has not disclosed this information in EDA brochures, the EDA application, or the confirmation notice sent to borrowers who establish EDAs. Further, Education does not inform EDA borrowers that they may make routine prepayments, by contacting the direct loan servicer at any time and increasing the amount withdrawn from their bank account each month. In addition to not disclosing prepayment options, Education had not updated two of its borrower publications to fully reflect the option borrowers have to repay through EDA. One publication, Exit Counseling Guide for Borrowers, does not provide details about how EDA works, the advantages of EDA for making loan payments, or the reduced interest rate EDA borrowers receive. The other publication, Repayment Book, which is available to help borrowers understand and select from the available repayment plans, makes no reference to EDA. Education and Treasury achieved administrative cost savings because EDAs reduced the costs associated with billing and processing payments. Education saved an estimated $1.5 million in fiscal year 2001 as a result of generating and mailing fewer bills to EDA borrowers. Additional savings are also possible with respect to costs associated with servicing past due accounts. Treasury, which processes direct loan payments and incurs most of the associated processing costs, saved an estimated $1.2 million in fiscal year 2001. As a result of EDA, Education reduces administrative costs associated with generating and mailing billing statements to borrowers. According to our review of Education cost data, in fiscal year 2001, Education saved about $1.5 million or $0.39 per month for each borrower who used EDA. 
These savings include the cost of items such as the paper billing statement, the mailing envelope, and postage. Through EDA, Education avoided sending out more than 3.6 million billing statements over the course of fiscal year 2001. The other administrative costs Education incurs for servicing direct loan accounts are the same for all borrowers, regardless of their payment method. Table 3 shows the specific costs Education incurs for routine servicing of FDLP accounts. EDA should result in additional administrative cost savings by reducing the potential for late payments and accompanying collection efforts, according to an Education official. Some of the administrative savings Education achieves with EDA are offset by expenses that Education incurs at Treasury. Education pays Treasury for processing EDA applications. In fiscal year 2000, Treasury charged Education about $128,900 for processing 253,000 EDA applications. In the course of doing our work, we identified a potential opportunity for additional administrative cost savings unrelated to EDA. Education adheres to a price structure for servicing delinquent accounts that may not be appropriate. Currently, the direct loan servicer assesses Education a separate fee for each day a borrower’s account is at least 1 day past due. This fee applies to all late direct loan payments, but because EDA payments are credited on the due date—provided sufficient funds are available in the borrower’s bank account—this fee would generally not apply to EDA borrowers. The late fee Education is assessed for past due accounts covers additional work the direct loan servicer performs, such as sending second billing statements to borrowers, and making reminder phone calls. These collection activities occur at regularly scheduled intervals as part of Education’s default prevention initiatives. As previously stated, Education is assessed a fee for each day a borrower’s account is at least 1 day past due. 
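The billing-cost figures above can be cross-checked: one statement is avoided per EDA borrower per month, so the per-statement cost and the per-borrower-per-month saving are the same number. A sketch using the report's rounded figures:

```python
# Rounded figures from the report for fiscal year 2001.
statements_avoided = 3_600_000   # billing statements not mailed
cost_per_statement = 0.39        # dollars: paper statement, envelope, postage

estimated_savings = statements_avoided * cost_per_statement
# About $1.4 million, consistent with the reported savings of roughly
# $1.5 million (the report says "more than" 3.6 million statements,
# which accounts for most of the gap).
```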
Education officials stated that this contract provision has been in place since FDLP implementation. In the past, the direct loan servicer sent late payment notices to borrowers as soon as payments were one day late. However, according to Education officials, borrowers who had already mailed their payments found these notices confusing. As a result, Education decided to delay late payment notification to allow additional time to receive those payments made by borrowers close to or on the due date. Presently, the direct loan servicer’s first collection activity—sending a second billing statement—does not take place until a payment is 7 days late. However, Education is still assessed fees on payments that arrive 1 to 6 days late. In fiscal year 2001, Education paid $12.2 million or about $0.05 per day for each account that was at least 1 day past due. Because of data limitations in the Direct Loan Servicing System (DLSS), Education is unable to determine the extent to which it is paying this fee each month for payments received between 1 and 6 days late. In fiscal year 2001, we estimate that Treasury, which has an interagency agreement with Education to process direct loan payments, saved about $1.2 million as a result of EDA. These savings are based on the dollar volume of payments received. Treasury estimates that processing payments electronically costs less than 1 percent of the cost of processing paper payments. For example, it costs about $16 to process $1 million through EDA; processing the same amount in paper payments costs about $1,897. According to officials from Treasury’s Financial Management Service, Treasury processes payments for federal agencies to ensure efficient and timely processing of payments, and because Treasury can achieve economies of scale by providing this service throughout the federal government. 
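The Treasury processing figures above support the "less than 1 percent" claim directly, and they also let us back out a rough implied payment volume. The implied-volume step is our own inference, assuming all of the $1.2 million saving comes from the per-million cost differential:

```python
ELECTRONIC_COST_PER_MILLION = 16.0   # dollars, per the report
PAPER_COST_PER_MILLION = 1_897.0     # dollars, per the report

# Electronic processing costs under 1 percent of paper processing.
cost_ratio = ELECTRONIC_COST_PER_MILLION / PAPER_COST_PER_MILLION

# Rough inference (not a figure from the report): the reported
# $1.2 million saving implies roughly $640 million of payments
# shifted from paper to electronic processing.
reported_savings = 1_200_000.0
implied_volume_millions = reported_savings / (
    PAPER_COST_PER_MILLION - ELECTRONIC_COST_PER_MILLION
)
```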
Regardless of the conclusions Education reached in its cost justification, borrowers who enroll in EDA will benefit from paying a reduced interest rate on their loans and the federal government will achieve administrative cost savings. Data limitations make it difficult to assess whether borrowers have changed their prepayment behavior as Education assumed in its cost justification, and thus, the extent of the benefit for both borrowers and the federal government is unknown. Even if data were available and showed borrowers had changed their behavior, it would not tell us that this behavior changed as a result of entering EDA. Rather, borrowers could be making sound economic decisions such as choosing to prepay a higher-rate loan rather than their federal student loan. By fully informing borrowers of the consequences of paying through EDA as well as their prepayment options, Education could ensure that borrowers have all the information they need to make sound economic choices. However, the limited disclosures Education currently makes to borrowers concerning their prepayment options under EDA are not sufficient to ensure that borrowers have all essential information to make informed decisions. Education does not make clear that, in spite of the 0.25 percentage-point interest rate reduction, borrowers might incur more interest cost over the life of their loans under EDA than they would if they continued to sometimes make payments in excess of the scheduled amount due. Although Education did not include estimated administrative cost savings associated with EDA in conducting its cost justification, clearly, these savings would help offset the expense of offering borrowers a reduced interest rate. EDA can further reduce administrative costs associated with loan processing if more borrowers use it. Education has not promoted the benefits of EDA to borrowers as much as possible to maximize administrative cost savings to the federal government. 
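The central tradeoff in this conclusion can be sketched numerically: a borrower who keeps prepaying at the undiscounted rate can pay less lifetime interest than one who takes the 0.25 percentage-point EDA discount but stops prepaying. The loan terms below are hypothetical, chosen only to illustrate the effect:

```python
def total_interest(principal, annual_rate, payment):
    """Total interest over the life of a loan, with each fixed monthly
    payment credited to accrued interest first, then principal."""
    rate = annual_rate / 12
    interest_paid = 0.0
    while principal > 0:
        interest = principal * rate
        interest_paid += interest
        principal -= payment - interest
    return interest_paid

# Hypothetical $10,000 loan with a $122 scheduled monthly payment.
# Without EDA (8.25 percent), the borrower prepays an extra $25 a month;
# with EDA (8.00 percent after the discount), only the scheduled
# amount is withdrawn.
prepay_no_discount = total_interest(10_000, 0.0825, 122 + 25)
eda_no_prepay = total_interest(10_000, 0.0800, 122)
# Despite the higher rate, the prepaying borrower pays less total
# interest over the life of the loan, the scenario the report
# cautions EDA enrollees about.
```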
Promoting the benefits of EDA to borrowers when they are considering their repayment options could achieve even greater administrative cost savings if more borrowers were to participate in EDA as a result. Moreover, EDA should reduce the fees that Education incurs for servicing past due accounts, because EDA payments are generally credited on time. Although not related to EDA, Education may be able to achieve additional administrative cost savings. At present, Education is paying a fee for servicing EDA and non-EDA accounts that are at least 1 day past due. We believe that those fees may be unjustified because no action is taken to collect late payments until they are 7 days past due. To help make the EDA program more useful and understandable to borrowers and take greater advantage of its potential savings to the taxpayer, we are making several recommendations to the secretary of education. To better publicize EDA and help Education achieve additional administrative cost savings, we recommend updating the Exit Counseling Guide for Borrowers to reflect the repayment incentives for direct loan borrowers who repay their loans through EDA as well as borrowers’ prepayment options. To address concerns that borrowers may unknowingly pay more total interest over the life of their loans by not making prepayments if they make their loan payments through EDA, we recommend that Education take steps to inform EDA borrowers about steps they can take to prepay their loans. Such steps could include modifying EDA applications to allow borrowers interested in prepaying their loans to designate withdrawal amounts in excess of their scheduled payments when they initially complete the EDA application. 
To ensure that the fees Education pays for servicing delinquent accounts appropriately reflect current collection activity practices, we recommend Education consider renegotiating the fee provision in its contract with the direct loan servicer to eliminate the servicing fee for accounts with payments less than 7 days late. In comments we obtained, Education generally agreed with the information presented in the report. In response to our recommendation, Education said that it would explore updating the Exit Counseling Guide for Borrowers and explore taking other steps to better inform borrowers of their prepayment options. In addition, Education said it would consider renegotiating the direct loan servicing contract to move in the direction of paying for results rather than processes. Education also provided technical comments, which we incorporated where appropriate. We are sending copies to the secretary of education, the secretary of the treasury, and the director of the Office of Management and Budget and will also make copies available to others on request. This report is also available on GAO’s home page at http://www.gao.gov. If you or your staff have any questions or wish to discuss this material further, please call me at (202) 512-8403 or Jeff Appel at (202) 512-9915. Other staff who made key contributions to this report include Barbara Alsip, Joel Marus, Scott McNabb, and Debra Prescott.
Since 1999, the Department of Education (Education) has offered a 0.25 percentage point interest rate reduction to borrowers who agree to repay their loans through an electronic debit account (EDA). Borrowers pay a lower interest rate, while the federal government receives fewer late payments. Education's cost justification concluded that any revenue loss to the federal government from the reduced interest rate would be more than offset by a gain in revenue because some EDA borrowers who had previously paid by check would stop making periodic payments in excess of their scheduled amount due. By ceasing to make these prepayments, these borrowers would not pay off their loans as soon as they would have without signing up for EDA and would therefore incur additional interest costs over the life of their loans. Although actual EDA enrollments have exceeded original estimates, Education lacks data on prepayment patterns after borrowers enroll in the program. Education has not informed borrowers of the cost implications of EDA participation, nor has it systematically informed borrowers of their prepayment options. GAO estimates that Education saved $1.5 million in administrative costs in fiscal year 2001 because it did not have to mail bills to EDA borrowers.
DOD faces five key challenges that significantly affect the department's ability to accomplish its mission—specifically the need for DOD to (1) rebalance forces and rebuild readiness in an evolving global security environment; (2) mitigate threats in cyberspace and expand cyber capabilities; (3) control the escalating costs of programs, such as certain weapon systems acquisitions and military health care, and manage its finances; (4) strategically manage its human capital; and (5) achieve greater efficiencies in defense business operations. DOD has demonstrated progress in addressing each of these challenges, but significant work remains. The military services are generally smaller and less combat ready today than they have been in many years, and each military service has been forced to cut funding for critical needs in areas such as training, maintenance, and modernization due to budget constraints, according to DOD. Officials said that, given the current state of readiness, military forces are not strong enough to protect vital U.S. national security interests from worldwide threats. After more than a decade combating violent extremists and conducting contingency operations in Afghanistan, Iraq, and most recently Syria, DOD has prioritized the rebalancing of its forces in recent budget requests to build and sustain the capabilities necessary to prevail across a full range of potential contingencies. However, DOD has acknowledged that unrelenting demands from geographic commanders for particular types of forces are disrupting manning, training, and equipping cycles. As a result, the military departments remain hard pressed to sustain high levels of operational demand while concurrently rebuilding readiness. See table 1 for a summary of readiness challenges faced by the military services. DOD has sought to rebalance and rebuild the readiness of its forces while also meeting emerging geopolitical challenges that threaten U.S. 
national security interests and regional stability, such as Russian aggression in Europe and North Korea's provocative threats in the Asia-Pacific. However, the demand for military forces has created significant gaps in training and maintenance and reduced the margin of error in responding to a shifting security environment. To execute the defense strategy, DOD must balance the risks and costs of preparing for current conflicts and traditional threats with the need to modernize its capabilities and adapt for future ones. As DOD rebuilds and modernizes its forces, it must make difficult affordability choices related to upgrading its aging equipment and weapon systems while simultaneously sustaining the legacy systems and force structure needed to meet current operational needs. Our work has found that DOD must address several key challenges to rebalance and modernize the capabilities of U.S. military forces and rebuild these forces to a higher level of readiness. DOD has taken some positive steps to rebalance and rebuild the capabilities of its forces to provide a responsive and versatile military that can meet global needs across the full spectrum of operations. For example, DOD increased and maintained the size of its special operations forces—who are specially organized, trained, and equipped to conduct operations in hostile or politically sensitive environments—even as the department reduced the size of conventional military forces. DOD also invested resources to organize and train conventional military forces to be ready to respond to intense and stressful combat operations in the future and to engage with and strengthen partner nation security forces to meet current operational needs. DOD further pursued modernization plans, including the "third offset strategy," a set of initiatives aimed at strengthening the military services' competitive edge, maintaining DOD's capabilities, and offsetting the technological advances of U.S. adversaries. 
For example, DOD and the Air Force have prioritized investing in the stealthy, fifth-generation fighter capability by modernizing the F-22 and buying the F-35 aircraft. DOD, along with the Department of Energy (DOE), is also undertaking an extensive, multifaceted, and costly effort to sustain and modernize the nation’s nuclear weapon stockpile, infrastructure, delivery systems, and nuclear command and control systems that DOD and DOE estimate will cost around $320 billion over the next decade. DOD has also stated that rebuilding readiness is the department’s top priority, and the military services have plans underway to rebuild the readiness for portions of their respective forces. DOD has made progress in these areas, but substantial work remains for DOD in rebalancing its forces and rebuilding readiness. The following sections identify our assessment of remaining work, including additional actions that DOD should take to make further progress. Rebalancing forces: DOD has not fully assessed whether opportunities exist to meet the demands of its geographic commanders and minimize the negative readiness impacts of operational deployments on certain forces. For example, despite an increase in resources and a sustained high deployment level for special operations forces, DOD has not taken steps to examine whether additional opportunities exist to reduce the high demand on these forces by sharing some of their responsibilities with conventional forces. We reported in July 2015 that between fiscal years 2001 and 2014, the average number of special operations personnel deployed increased by nearly 150 percent, and that DOD expected this high pace of deployments to continue (see fig. 2). However, we reported that DOD had not recently evaluated whether some activities conducted by special operations forces could be conducted by conventional forces. 
In 2003, DOD determined that there were opportunities to share the burden between special operations and conventional forces, including for certain counterdrug missions and foreign conventional force training. In our July 2015 report, we found that special operations forces have continued to perform some activities that could be conducted by conventional forces, such as noncombatant evacuation missions. We also identified areas where DOD and the military services have not determined whether plans to rebalance the capabilities of U.S. military forces will meet the needs of global commanders. For example, we reported in September 2016 that the Air Force has not comprehensively reassessed the assumptions underlying the annual training requirements for its combat aircrews since 2012. We raised questions as to whether the assumptions used by the Air Force about the total annual live-fly sortie requirements by aircraft, the criteria for designating aircrews as experienced or inexperienced, and the mix between live and simulator training account for current and emerging training needs. In August 2015, we also reported that while the Army and the Marine Corps have been able to fill requests for advise and assist missions in Afghanistan (i.e., missions intended to engage with and strengthen partner nation security forces), their approaches affected the overall readiness of the units involved. For example, we found that staffing advisor teams required brigades to deploy a significant portion of their leadership and expertise for the advisor mission, resulting in a degradation of brigade readiness. Recognizing that the advising team structure can negatively affect readiness, the Army announced in February 2017 that it was creating Security Force Assistance Brigades in fiscal year 2018 to minimize the overall readiness impact to the service. 
However, it is unclear if this initiative will provide combatant commanders with the capabilities needed to accomplish their missions while minimizing the readiness impact on the Army. Weapon systems modernization: DOD expects to invest $951 billion through fiscal year 2021 to help research, develop, test, evaluate, and procure modern technology and capabilities for the military, including more than $14 billion annually over the next decade to procure the F-35 aircraft. DOD also plans to recapitalize the three legs of the nuclear triad, among other modernization investments. Nuclear delivery systems, as well as the bombs and warheads they carry, are aging and in some cases are being deployed long beyond their intended service lives. For example, the Minuteman III ballistic missile, first deployed in 1970, is expected to remain in service through 2030 by way of successive modernization efforts. However, DOD has not ensured that decision makers have complete and accurate budget and cost information to make well-informed decisions on investments in weapon system upgrades and new technologies, or developed plans to address potential risks to certain modernization initiatives. We reported in September 2014 that it is unclear whether DOD's operating and support cost estimates for the F-35 program, DOD's most expensive weapon system, reflect the most likely costs that the program will incur. With operating and support estimates totaling around $1 trillion over a 56-year life cycle, we found that although the estimates were comprehensive, weaknesses existed in a few of the assumptions, such as spare part replacement rates and depot maintenance, and that the estimates did not include the analyses necessary to make them reliable. Furthermore, in April 2016 we reported that DOD had not developed credible and accurate cost estimates for the F-35's central logistics system, the Autonomic Logistics Information System (ALIS). 
ALIS, a complex system supporting operations, mission planning, supply chain management, maintenance, and other processes, has estimated costs of approximately $16.7 billion over its 56-year life cycle. However, we found that the estimate is not fully credible and complete since DOD has not performed uncertainty and sensitivity analyses as part of its cost-estimating process. Similarly, in December 2015 we reported that, when reporting these costs to Congress, DOD had not thoroughly documented the methodologies and comparative information for its nuclear modernization cost estimates, such as the 22-percent increase in the intercontinental ballistic missile estimate, a difference of approximately $2.5 billion. We also reported in August 2016 that DOD and the Air Force did not have complete and quality information on the full implications of the divestment of the A-10 aircraft, including gaps that could be created by divestment and mitigation options. DOD and the Air Force have prioritized investing in the next generation of multirole fighter aircraft while placing a lower priority on older, less capable aircraft such as the A-10, and the Air Force plans to retire its legacy aircraft, such as the F-16, in the coming years (see fig. 3). The Air Force is taking a number of steps to try to mitigate any potential negative impacts from its proposed A-10 divestments, but it has not established clear requirements for the missions the A-10 performs, and in the absence of these requirements has not fully identified the capacity or capability gaps that could result from its divestment. For example, experts convened by the Air Force in 2015 concluded that A-10 divestiture creates a capability gap since the Air Force is losing a high-capacity and cost-efficient ability to kill armor, moving, and close-proximity targets in poor weather conditions. We therefore recommended that the Air Force develop quality information to inform its decision before again proposing divestment. 
DOD’s fiscal year 2018 budget request appears to align with this recommendation. The request fully funds the A-10 fleet and highlights this action as a key component of DOD’s efforts to address “force structure holes.” The request further notes that the Air Force is assessing a long-term strategy for the A-10 fleet. Rebuilding readiness: DOD has reported that the military services face significant readiness challenges in protecting U.S. national security interests. According to DOD, budget cuts have led to undermanned units, diminished ammunition stockpiles, a broad lack of training, and equipment and facilities that are out of date or not properly maintained. As a result, the department faces persistently low readiness levels—with some forces at historically low levels—and has identified readiness rebuilding as a top priority. However, we reported in September 2016 that the department’s readiness rebuilding efforts are at risk without a comprehensive plan that includes critical planning elements, such as comprehensive strategies and resource levels needed to achieve identified goals, and an approach for measuring progress. Specifically, we found that the military services’ plans to address declines in readiness and capacity across the force do not contain key strategic planning elements, which would help position the military services to meet their readiness goals and support their rebuilding efforts. In addition, although DOD and the military services track readiness trends, the military services have not consistently established metrics or developed a method to evaluate progress in attaining readiness recovery goals. We also reported in May 2016 that the Navy’s efforts to rebuild readiness and achieve employability and sustainability goals for the Navy’s various classes of ships were at risk. To meet operational demands over the past decade, the Navy has increased ship deployment lengths and has reduced or deferred ship maintenance. 
These decisions have reduced the predictability of ship deployments for sailors and for the ship repair industrial base. They have also resulted in declining ship conditions across the fleet and a worsening trend in overall ship readiness, and have increased the amount of time that ships require to complete maintenance in the shipyards. For example, we reported that the number of casualty reports—incidents of degraded or out-of-service equipment—nearly doubled for both U.S. homeported ships and overseas homeported ships from January 2009 through July 2014 (see fig. 4). Increased maintenance periods, in turn, compress the time during which ships are available for training and operations, referred to as “employability.” To address these issues, the Navy began implementing a revised operational schedule in November 2014, which is intended to maximize employability while preserving maintenance and training, and restore operational and personnel tempos to acceptable levels. However, our analysis of Navy data for fiscal years 2011 through 2014 shows that prior to the implementation of the revised schedule, the majority of maintenance availabilities completed by both the public and private shipyards took more time than scheduled, thereby reducing the time during which ships were available for training and operations. As of May 2015, only a small portion of the fleet had entered the revised maintenance schedule, and as a result it is too early to assess its overall effectiveness. However, the first three aircraft carriers to enter the revised schedule have not completed maintenance tasks on time, a benchmark that is crucial to meeting the Navy’s employability goals. Further, any changes to assumptions the Navy made in formulating the revised schedule, including those related to available funding levels, force structure, or deployments, will further place achieving its goals at risk. 
By determining the most appropriate forces and training to meet the demands of its combatant commanders, ensuring policy makers have information to make well-informed weapon system modernization choices, and developing a comprehensive plan to rebuild readiness with methods for evaluating outcomes, DOD and decision makers would be better positioned to evaluate whether U.S. military forces have the capacity and capabilities to prevail across a full range of potential contingencies. Since 2011, we have directed 39 recommendations to DOD in this area, of which 35 remain open, including 5 priority recommendations. Table 2 highlights key actions DOD should take to help address the challenges in rebalancing forces, modernizing weapon systems, and rebuilding readiness in an evolving global security environment. Cyber threats to U.S. national and economic security are increasing in frequency, scale, sophistication, and severity of impact. A 2016 Federal Information Security Modernization Act report noted that more than 30,000 data security incidents compromised federal information systems during fiscal year 2016—16 of which were categorized as major incidents that needed to be reported to Congress. In July 2015, a major cyber breach was reported at the Office of Personnel Management, which affected at least 21.5 million individuals and resulted in the release of personally identifiable information on federal contractors and employees, including those at DOD. Recognizing this strategic challenge, in February 2016 the Director of National Intelligence identified cyber threats as first among strategic threats to the United States, surpassing terrorism. DOD has become increasingly reliant on the Internet and other networks, which are central to the department's operations and enable essential services including logistics, budgeting, personnel, and policymaking. 
We have reported that the security of the federal government's cyber systems and data is vital to public confidence and to the nation's safety, prosperity, and well-being. However, the vulnerability of DOD's networks, along with those across the federal government, has grown, and hostile actors have used cyberspace as an asymmetric capability to strike the U.S. homeland and interests. DOD has acknowledged the need to coordinate cyber efforts and clarify roles and responsibilities for addressing domestic cyber incidents with the Department of Homeland Security—which has primary responsibility for the protection of critical cyber infrastructure within the United States—and has prioritized investments to expand its current cyber capabilities. Our work has found that DOD must address weaknesses in (1) its planning for the continuity of operations in a degraded cyber environment, (2) the protection of classified information and systems from insider threats, and (3) the visibility and oversight of its capabilities that could be used during a cyber incident. DOD has made progress in developing cyber capabilities that are needed to simultaneously defend its networks, systems, and information; protect the nation from cyber attacks of significant consequence; and work with other departments and branches of the federal government to address cyber-related issues. In April 2015, DOD issued a cyber strategy to guide the development of the department's cyber forces and strengthen the department's cyber defense and deterrence posture. A central aim of DOD's cyber strategy is to set specific goals and objectives to guide the development of the Cyber Mission Force and of DOD's wider cyber workforce to protect and defend U.S. national interests. DOD has also taken steps to improve its ability to provide support for state and local civil authorities to improve cybersecurity and contingency planning in response to a hostile attack on cyber infrastructure. 
For example, from 2013 through 2015, DOD conducted or participated in nine exercises that were designed to test cybersecurity policies for supporting civil authorities or to test the response to simulated attacks on cyber infrastructure owned by civil authorities. In addition, in response to our July 2015 report, DOD issued a memorandum directing the services and other defense agencies to develop plans identifying the goals, milestones, and resources needed to identify, register, and implement cybersecurity controls on DOD facility industrial control systems, which are computer-controlled systems that monitor and operate utilities infrastructure. DOD has made progress in these areas, but substantial work remains for DOD to mitigate threats to cybersecurity and expand its cyber capabilities. We discuss below our assessment of remaining work, including additional actions that DOD should take to make further progress. Continuity of operations in a degraded cyber environment: DOD, along with the rest of the federal government, needs to take additional steps to protect its critical cyber capabilities and ensure continuity of operations. In February 2017, we reported that federal agencies, including DOD, have not fully developed and implemented complete strategies, policies, plans, and procedures for responding to cyber incidents and effectively overseeing response activities. Our work assessing DOD’s cyber efforts found that DOD has not fully conducted the planning needed to maintain continuity of operations in a degraded cyber environment, which could affect DOD’s ability to perform essential functions—such as combat operations and homeland defense. DOD has stated that expenditures on cyber capabilities have begun to provide a measurable return on investment and that the department is interdicting more threats than ever before, but the department also acknowledges that unauthorized intrusions of its networks still occur. 
The evolving array of cyber threats, along with the continuing threat of nuclear attacks and natural disasters, has underscored the need for DOD to further strengthen its planning for continuity of operations and the collection of relevant data to inform planning. Doing so would help ensure that the department can continue to perform its mission-essential functions even if its information systems and networks become unavailable, infiltrated, or destroyed, which can occur due to natural disasters (e.g., hurricanes), system or infrastructure failures, or intentional or unintentional human-caused incidents. Since April 2014, we have been calling upon DOD to revise its guidance on continuity plans to describe the priority of continuity planning for cyber events or to provide additional guidance to DOD components on how to include accurate and complete data on information systems and networks necessary to perform mission-essential functions in continuity plans. DOD also needs to provide its components with tools—such as guidance, training, and exercises—that both emphasize the need to conduct continuity exercises in a degraded cyber environment and assist DOD components in developing and practicing effective responses during continuity exercises. We further reported that DOD had not evaluated its approach to assigning tasks among its components that assures continuity of mission-essential functions or evaluated the readiness of its components to respond to an incident that degrades the cyber environment. In July 2015, we reported that DOD does not have comprehensive and accurate utility disruption data. Specifically, DOD's collection and reporting of utility disruption data is not comprehensive and contains inaccuracies because not all types and instances of utility disruptions have been reported and because there are inaccuracies in reporting of disruptions' duration and cost. 
Further, according to officials, DOD installations are not reporting all disruptions that meet the DOD criterion of commercial utility service disruptions lasting 8 hours or longer. This is likely due, in part, to military service guidance that differs from instructions for DOD's data collection template. As of March 2016, DOD has implemented steps to improve its data collection and validation process, but additional work remains. Protecting classified information and systems from insider threats: Since 2010, the United States has suffered grave damage to national security and an increased risk to the lives of U.S. citizens due to the unauthorized disclosures of classified information by individuals with authorized access to defense information systems. In June 2015, we reported that DOD had taken action to implement minimum standards established by the National Insider Threat Task Force. For example, the seven DOD components we assessed had begun to provide insider threat awareness training to all personnel with security clearances. However, DOD had not addressed all tasks associated with the minimum standards; had not analyzed gaps or incorporated risk assessments into the program; and had not consistently incorporated all of the key elements associated with an insider threat framework that we developed by synthesizing information from a White House report, an executive order, DOD guidance and reports, national security systems guidance, and leading practices recommended by the National Insider Threat Task Force (see fig. 5). Visibility of cyber capabilities: We reported in September 2016 that DOD may be limited in responding to a cyber attack in a timely manner because it does not have visibility into all of its domestic capabilities that could be used in the event of a cyber incident. 
Specifically, DOD has not maintained a database that would allow the department to fully and quickly identify existing cyber capabilities of all National Guard cyber units, as required by law. We reported that National Guard Bureau officials had identified two systems that the bureau traditionally uses to identify National Guard capabilities—the Defense Readiness Reporting System and the Joint Information Exchange Environment—but acknowledged that neither of these systems could be used to fully or quickly identify National Guard cyber capabilities. We found examples of three types of cyber capabilities in National Guard units—communications directorates, computer network defense teams, and cyber units—that DOD may be unaware of if requested to support civil authorities during a cyber incident. This is because some National Guard capabilities were established to support state and local governments and do not have a federal mission and, therefore, would not be reported in the Defense Readiness Reporting System. Further, the amount of time required to query other systems may not be feasible during a cyber incident, which could impede DOD from using the full range of its capabilities. By improving the planning for cyber operations and the visibility and oversight of department-wide cyber capabilities, DOD would be better positioned to ensure that it maintains critical mission continuity; safeguards classified information and systems; and quickly responds to a cyber incident. Since 2011, we have directed 33 recommendations to DOD in unclassified and sensitive but unclassified reports, of which 14 remain open, including 5 priority recommendations. Table 3 highlights key actions DOD should take to help address challenges it faces in mitigating threats to cyberspace and in expanding cyber capabilities. DOD's $580 billion fiscal year 2016 budget accounts for nearly half of the federal government's discretionary spending, and DOD's costs are growing. 
For example, DOD plans to invest $574 billion in future funding to develop and acquire major acquisition programs, and the department’s annual military health care costs are expected to increase from about $60 billion in fiscal year 2017 to about $70 billion by fiscal year 2028. DOD also maintains a substantial inventory of infrastructure, owning over 70 percent of the federal government’s physical assets, with a reported replacement value of about $880 billion. Senior leaders have acknowledged the need for the department to effectively manage the resources entrusted to it. However, DOD is one of the few federal agencies that cannot accurately account for and report its spending or assets. Like the rest of the federal government, DOD’s budget has been affected by policies that are intended to correct the imbalance between spending and revenue. For example, the Budget Control Act of 2011 imposed an $800 billion reduction in planned spending for DOD from fiscal years 2012 through 2021. Given these constrained budgetary resources and DOD’s recognition that there are opportunities to be more efficient in the department’s operations, the department has undertaken a series of reform initiatives to control costs for programs that make up a significant portion of DOD’s budget and improve DOD’s financial management operations. At the same time, DOD is pursuing new technologies and is investing significant resources to develop and procure a portfolio of 78 major defense acquisition programs. However, DOD has experienced cost and schedule overruns that expose its procurement budgets to unnecessary risk. In addition, DOD’s military health system must ensure access to quality health care for service members and their families, but it has experienced a more than two-fold increase in costs in fiscal years 2001 through 2017, and DOD has likely underestimated its improper payments for health care services. 
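As an illustrative check of the health care cost trajectory cited above, the implied growth rates can be computed from the projections quoted in this report (a back-of-the-envelope Python sketch; the dollar figures are DOD's projections, and the compound-growth calculation is our own arithmetic):

```python
# Projected annual military health system costs cited above
cost_fy2017 = 60e9            # about $60 billion in fiscal year 2017
cost_fy2028 = 70e9            # about $70 billion by fiscal year 2028
years = 2028 - 2017           # 11-year projection window

total_growth = cost_fy2028 / cost_fy2017 - 1
annual_growth = (cost_fy2028 / cost_fy2017) ** (1 / years) - 1

print(f"total projected growth: {total_growth:.1%}")          # 16.7%
print(f"implied average annual growth: {annual_growth:.2%}")  # 1.41% per year
```

The takeaway is that the projected increase, while large in absolute terms, reflects modest but steadily compounding annual growth rather than a one-time jump.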
Further, total military health system costs are expected to increase from about $60 billion in fiscal year 2017 to a projected $70 billion annually by fiscal year 2028 (see fig. 6). DOD also manages installations worldwide, consisting of about 562,000 facilities, to support military readiness, but it has maintained excess infrastructure relative to the department's force structure needs. DOD has expressed a commitment to holding itself accountable for the funding it receives and is taking actions to allow for an annual financial statement audit, but it remains one of the few federal entities that cannot demonstrate an ability to accurately account for and reliably report on its spending or assets. Our work has found that DOD must address these and other weaknesses to control costs as it faces a period of constrained budgetary resources and fiscal uncertainty. Since 2010, DOD has implemented a series of Better Buying Power initiatives that outline steps the department is taking across its weapon system acquisition portfolio to reduce cost and schedule overruns and achieve better capability and performance results for the warfighter. These initiatives include setting and enforcing affordability constraints, implementing "should cost" management to control contract costs, and eliminating redundancies within portfolios, among others. In implementing these initiatives, DOD has achieved better acquisition outcomes in some programs, including collectively reducing the total cost estimate for 14 programs that began systems development during this period by over $580 million since their first full estimates. DOD has also taken steps to modernize the military health system and control health care costs. After decades of incremental alterations to its health care programs, DOD created the Defense Health Agency (DHA) in 2013 to provide administrative governance for a more cost-effective and integrated military health system. 
DHA has implemented various initiatives, such as consolidating shared medical services; eliminating redundant processes; coordinating resources; and matching personnel, infrastructure, and funding to missions and populations in demand. DOD has also taken steps to control costs for its support infrastructure. For example, DOD consolidated some of its base support services and reported a net reduction of 7.7 million square feet of support infrastructure in fiscal year 2013, which represented about 75 percent of the federal government’s total reduction under a government-wide initiative. DOD also established financial improvement and audit readiness guidance, implemented training programs to help build a skilled financial management workforce, and developed corrective action plans to track the remediation of audit issues. However, DOD continues to identify the need for sufficient numbers of qualified and experienced personnel as a challenge to achieving its goals of financial improvement and audit readiness. DOD has made progress in these areas, but substantial work remains for DOD to further control costs and manage its finances. The following sections identify our assessment of remaining work, including additional actions that DOD should take to make further progress. Weapon system acquisition: This portfolio of 78 major defense acquisition programs will require roughly a quarter of DOD’s development and procurement funding over the next 5 years. Currently, DOD’s total investment in these major defense acquisition programs is estimated at $1.5 trillion, of which $574 billion is for future funding. However, over the past year, we reported that a majority of DOD’s 78 major programs (46 out of the 78 programs) had experienced a cost increase, as shown in figure 7. 
We have reported that DOD could achieve significant cost savings by consistently employing acquisition best practices in its weapon systems programs, such as early systems engineering, analyzing alternatives, managing changes in system requirements, and applying prototyping early in development testing. While DOD has made progress in decreasing the amount of cost growth realized in its portfolio of major acquisition programs, it has not uniformly implemented acquisition best practices and reforms across the portfolio, which has resulted in some programs that realized significant cost growth and delays in delivering needed capabilities. We have also found that new acquisition programs started each year at DOD fulfill only some of the best practices intended to achieve a level of knowledge that would demonstrate that programs are capable of meeting their performance requirements and cost and schedule commitments. Specifically, in March 2017 we found that most of the programs we assessed were not fully following a knowledge-based acquisition approach. Further, only one of the four programs that began or planned to begin system development during the fiscal year 2016 assessment period demonstrated a total match between resources and requirements (see fig. 8). The remaining 41 programs we reviewed implemented knowledge-based leading practices to varying degrees. We further reported that some programs have progressed through the acquisition cycle without the appropriate levels of knowledge at key junctures, which is of particular concern for programs that entered the system development phase before satisfying knowledge-based best practices. For example, DOD faces technical, design, and production challenges for some of its large programs, such as the CVN 78 aircraft carrier, which has experienced an almost 23-percent increase in program costs since construction was authorized in fiscal year 2008—from $10.5 billion to $12.9 billion. 
In an effort to meet required installation dates aboard the CVN 78, the Navy elected to produce some of these systems prior to demonstrating their maturity, which introduces the risk of late and costly design changes and rework. In addition, progress in constructing the CVN 78 was overshadowed by inefficient out-of-sequence work, driven largely by material shortfalls, engineering challenges, and delays in developing and installing critical technology systems. Military health care: Military health care costs constituted about 6 percent of DOD’s total budget in fiscal years 1994 and 2000, but grew by about 217 percent between fiscal years 2000 and 2017. As noted above, DOD established DHA in 2013 to create a more cost-effective and integrated military health system. DHA provides administrative support for the services’ respective medical programs and combines common “shared” services in certain areas to achieve cost savings. However, DOD has not established key processes for monitoring improper payments or fully implemented DHA reforms, which has limited its efforts to modernize the military health system and reduce health care costs. In February 2015, we reported that DOD had not developed a comprehensive methodology to monitor improper payments to control costs in the military health care plan (i.e., TRICARE). In its fiscal year 2015 agency financial report, DOD reported spending about $19.7 billion on the purchased care option of TRICARE, yet reported improper payments of only about $158 million, an error rate of 0.8 percent, compared with Medicare’s error rate of 12 percent. This considerable disparity raises questions about the accuracy of the methodology for calculating TRICARE improper payments. 
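The disparity in reported error rates follows directly from the dollar figures cited above; a minimal check, treating the reported $19.7 billion in spending and $158 million in improper payments as the only inputs:

```python
# Reported TRICARE purchased-care figures (fiscal year 2015 agency financial report)
tricare_outlays = 19.7e9    # dollars spent on the purchased care option
tricare_improper = 158e6    # reported improper payments, dollars
medicare_rate = 12.0        # Medicare's reported error rate, percent

tricare_rate = tricare_improper / tricare_outlays * 100
print(f"TRICARE improper payment rate: {tricare_rate:.1f}%")          # ~0.8%
print(f"Medicare's rate is roughly {medicare_rate / tricare_rate:.0f}x higher")
```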
In our February 2015 report, we found that TRICARE’s methodology for estimating improper payments for fiscal year 2013 was less comprehensive than the measurement methodology used to estimate Medicare improper payments because TRICARE’s methodology does not comprehensively capture errors that occur at the provider level or errors that can only be identified through an examination of underlying medical record documentation. As a result, for fiscal year 2013, there were significant differences between the improper payment rates for TRICARE and Medicare (see fig. 9). With respect to DHA, we reported in September 2015 (nearly 2 years after DHA’s creation) that DOD had not fully implemented key processes—including developing personnel requirements, identifying cost savings, and establishing performance measures—to monitor the department’s implementation of DHA reforms. More specifically, we reported in 2013 that DOD had not developed DHA staffing requirements to monitor the effect of possible personnel growth and the composition of its workforce. In the absence of such requirements, the military services questioned the accuracy of the estimated $46.5 million in annual personnel savings on which DOD had, in part, based its decision to establish DHA. We further reported that DOD’s cost estimates for DHA were unclear and missing key details. For example, although DOD had developed a business case analysis approach to help achieve cost savings and had applied this approach to eight of its ten shared service areas, it had not developed these analyses for the remaining two areas of shared services—(1) public health and (2) medical education and training. 
We also reported that DOD did not have comprehensive performance measures and quantifiable targets to assess progress in achieving DHA’s cost-savings goals, and that opportunities existed to reduce health care costs by millions of dollars by completing, implementing, and monitoring comprehensive plans for each of its approved health care initiatives. For example, in 2012, we reported that DOD had calculated that it would save $300 million by meeting one of its health care initiative cost growth targets related to clinical and business practices, such as purchased care reimbursements. Defense support infrastructure: DOD has not effectively and efficiently managed its portfolio of facilities or controlled the costs of maintaining excess support infrastructure relative to its force structure needs. DOD’s most recent Base Realignment and Closure round occurred in 2005 and, according to DOD officials, was the largest to date. In May 2017, DOD officials noted some reasons for cost overruns associated with the 2005 Base Realignment and Closure round, including, among others, higher costs for military construction materials and efforts to align DOD's infrastructure with military strategy. Our work on DOD’s implementation of the 2005 Base Realignment and Closure round identified weaknesses that hampered DOD’s ability to execute its responsibilities related to the cost and savings estimation process and efforts to measure performance. In addition, we reported that DOD had underestimated specific infrastructure requirements in the model that it used to estimate expected costs and savings from implementing closures and realignments under the Base Realignment and Closure process. Specifically, DOD did not fully identify requirements for military construction, relocating military personnel and equipment, and information technology when entering these costs into its model, which resulted in inaccurate cost estimates. 
We reported that the primary reason costs increased for the 2005 Base Realignment and Closure round was higher-than-anticipated military construction costs—an increase of 86 percent from $13.2 billion originally estimated to $24.5 billion after implementation ended in 2011. In addition, DOD can improve the accuracy and completeness of its facilities utilization and leasing information to more effectively manage the department’s portfolio of facilities and control the costs of maintaining excess support infrastructure relative to its force structure needs. In February 2017 we reported that DOD had utilization data on about 97 percent of its facilities as of September 2015—the most recent data available—increasing from 53 percent as of September 2013. However, we reported that, of the facilities that have a utilization rating of 100, 24 percent had either no inspection date or had most recently been inspected prior to September 30, 1999, which calls into question the accuracy of these data. We also reported in March 2016 that DOD did not always assess the use of available space resulting from planned force reductions at its installations or systematically identify the availability of underutilized space prior to entering into lease agreements. Financial management: Long-standing internal control deficiencies have adversely affected the economy, efficiency, and effectiveness of DOD’s operations. The effects of DOD’s financial management problems extend beyond financial reporting and negatively affect DOD’s ability to manage the department and make sound decisions regarding its mission and operations. 
Among other issues, DOD’s financial management problems have contributed to (1) inconsistent and sometimes unreliable reports to Congress on weapon system operating and support costs, limiting the visibility that Congress needs to effectively oversee weapon system programs and (2) an impaired ability to make cost-effective choices, such as deciding whether to outsource specific activities or how to improve efficiency through technology. In January 2017, we reported that DOD’s financial management problems have continued to significantly impede our ability to render an opinion on the federal government’s consolidated financial statements and have prevented DOD from producing auditable department-wide financial statements. For example, DOD’s reported inventory, buildings, and other property and equipment represent 75 percent of the federal government’s reported physical assets as of September 30, 2016. However, DOD cannot demonstrate that it accurately and completely accounted for all of these assets, including their location and condition. DOD also reported fiscal year 2015 procurement obligations that represent over 60 percent of the federal government’s equity. However, DOD lacks effective systems, processes, and controls related to its procurement activity, including contract pay. We also identified several long-standing and interrelated deficiencies that have hindered DOD’s financial management activities. For example, DOD leadership has not ensured that DOD’s components adhere to audit readiness plans and guidance. As a result, components lack the necessary leadership, processes, systems, and controls to improve financial management operations and audit readiness. DOD also has not ensured that the military services enhance their policies and procedures for developing audit corrective action plans and improve processes for identifying, tracking, and remediating financial management-related audit findings and recommendations. 
We further reported that DOD needs to continue to (1) develop and deploy enterprise resource planning systems as a critical component of its financial improvement and audit readiness strategy and (2) design manual work-arounds for older systems to satisfy audit requirements and improve data used for day-to-day decision making. We also reported in February 2017 that all three of the independent public accountants (IPA) contracted to audit the fiscal year 2015 Schedules of Budgetary Activity (Budgetary Schedule) of the Army, the Air Force, and the Navy issued disclaimers, meaning that the IPAs were unable to complete their work or issue an opinion because they lacked sufficient evidence to support the amounts presented. These IPAs also identified material weaknesses in internal control and collectively issued over 700 findings and recommendations. These weaknesses included the military services’ inability to, among other things, reasonably assure that the Budgetary Schedules reflected all of the relevant financial transactions that occurred and that documentation was available to support such transactions. As a result of these financial management issues, DOD expects the department-wide financial statement audit planned for fiscal year 2018 to result in significant audit findings and a disclaimer of opinion. In addition, DOD reported that it anticipates receiving disclaimers of opinion on its full financial statements for several years, but emphasized that being subject to audit will help the department make progress. By consistently applying weapon system acquisition best practices, managing improper payments associated with military health care, comprehensively implementing military health care reforms, more effectively managing its portfolio of support infrastructure, and addressing long-standing financial management deficiencies, DOD would be better positioned to identify opportunities to direct its resources to its highest priorities. 
Since 2011, we have directed 79 recommendations to DOD in this area, of which 72 remain open, including 52 priority recommendations. Table 4 highlights key actions DOD should take to help address the challenges it faces in controlling costs and managing finances. DOD is one of the nation’s largest employers, managing a total workforce of about 2.1 million active-duty and reserve military personnel and approximately 769,000 civilian personnel. DOD estimates that it will spend nearly $180 billion in fiscal year 2017 on pay and benefits for military personnel and about $70 billion for its civilian employees. Taken together, funding for military and civilian pay and benefits represented nearly 50 percent of DOD’s budget in fiscal year 2016 (see fig. 10). DOD is also supported by about 561,000 contractor personnel, who help maintain weapon systems; support base operations; and provide information technology, management, and administrative support, among other responsibilities. DOD estimates that it spent about $115 billion on its contractor workforce in fiscal year 2015, although we have raised questions regarding the reliability of the department’s information on this workforce. As with other large organizations, DOD must compete for talent in the 21st Century and recruit, develop, promote, and retain a skilled and diverse workforce of service members and civilians. However, DOD, like other federal agencies, faces mission-critical skill gaps that pose a risk to national security and impede the department from cost-effectively serving the public and achieving results. For example, the need for some skill sets, such as cyber, intelligence, maintenance, engineering, disability evaluation, and auditing, has increased, while the need for other skill sets may decrease over time. 
Moreover, the changing nature of federal work and a potential wave of employee retirements could produce gaps in leadership and institutional knowledge, which may aggravate the problems created by existing skill gaps. Current budget and long-term fiscal pressures on the department only increase the importance of strategically managing human capital. DOD has recognized that efficient human capital management is imperative because personnel costs will likely drive many of the department’s future strategic decisions, and that its compensation spending must be effective in helping it achieve its recruiting and retention goals. Our work has found that DOD must address several weaknesses to determine its appropriate workforce mix and costs, address critical skill gaps, and develop an effective military compensation strategy as it attempts to strategically manage its military and civilian workforces and contracted support. DOD has begun to implement a significant phase of its civilian workforce performance management system, called “New Beginnings,” which aims to create a department-wide civilian workforce performance management process, and has taken steps to develop better information and data about the size, capabilities, and skills possessed and needed within its total workforce. For example, in June 2014 DOD incorporated some results-oriented performance measures into its civilian workforce plan, and in June 2016 issued guidance that established a common structure for managing and evaluating workforce competency gaps for developing its future strategic workforce plans. In an effort to address critical skill gaps in its cybersecurity workforce, DOD updated its cybersecurity workforce plan in 2014 to include a description of the strategies it plans to employ to address gaps in human capital approaches and critical skills and competencies. 
DOD has also taken steps to evaluate the effectiveness of specific pay, retirement, health care, and quality of life benefits included in military compensation, and proposed a range of options to reduce military compensation costs, such as limiting the amount of the annual pay raise and implementing increases in enrollment fees, deductibles, and co-pays for TRICARE participants. DOD has also begun a study to determine the appropriate mix of pay and benefits to use in making comparisons with private-sector compensation, and is developing a more comprehensive methodology for making these comparisons. DOD has made progress in these areas, but substantial work remains for the department in managing its human capital. The following sections identify our assessment of remaining work, including additional actions that DOD could take to make further progress. Workforce mix and costs: Since 2004 we have reported on challenges DOD faces in developing a strategic workforce plan that would enable the department to make efficient and cost-effective human capital decisions. For example, we reported that DOD had not assessed the appropriate mix of military, civilian, and contractors to prioritize its investments and improve its overall workforce. DOD noted in its strategic workforce plans that assessing this mix is a significant challenge, and that it planned to complete a workforce mix assessment in a future plan. However, the requirement to develop and submit its biennial strategic workforce plan was repealed in the National Defense Authorization Act for Fiscal Year 2017 and not replaced with another legislative requirement. DOD officials have stated that the department is engaged in internal workforce planning efforts to better align its workforce mix and costs. We also reported in September 2013 that DOD had opportunities to improve its methodology for estimating workforce costs. 
For example, DOD has not followed our leading practices for cost estimation and likely underestimated certain costs, such as those for training, which prevents the department from making cost-effective comparisons and decisions regarding the use of its military, civilian, and contractor workforces. We further reported in December 2015 that DOD has not fully developed and implemented a plan to achieve savings for its civilian and contractor workforces, consistent with congressional direction. We also found that civilian full-time equivalents by themselves may not be reliable measures of the cost of the civilian personnel workforce. For example, our analysis shows that from fiscal years 2012 through 2016, civilian full-time equivalents declined by 3.3 percent, but civilian personnel costs declined by only 0.9 percent, adjusted for inflation. As a result, reductions to civilian full-time equivalents may not achieve commensurate savings, and larger full-time equivalent reductions may be required in order for DOD to meet mandated savings requirements for the civilian and contractor workforces. Critical skill gaps: DOD has not taken sufficient actions to strengthen the management of certain mission-critical workforces. For example, in December 2015 we reported that DOD had developed a five-phased process, including surveys of its employees, to assess the skills of its acquisition workforce and to identify and close skill gaps. DOD completed competency assessments for 12 of its 13 career fields and is developing new training classes to address some skill gaps. However, DOD has not determined the extent to which workforce skill gaps identified in initial career field competency assessments have been addressed and what workforce skill gaps currently exist. 
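The point above, that full-time-equivalent reductions may not achieve commensurate savings, can be illustrated with the cited figures; in the sketch below, the 3.3 percent and 0.9 percent declines come from the text, while the 5 percent savings target is a hypothetical value chosen purely for illustration:

```python
fte_decline = 0.033     # decline in civilian full-time equivalents, FY2012-FY2016
cost_decline = 0.009    # decline in inflation-adjusted civilian personnel costs

# If costs fell proportionally with FTEs, a 3.3% cut would yield 3.3% savings.
# Instead, each 1% FTE reduction yielded only about 0.27% in savings:
savings_per_fte_cut = cost_decline / fte_decline
print(f"Savings realized per 1% FTE reduction: {savings_per_fte_cut:.2f}%")

# FTE reduction implied by a hypothetical 5% savings mandate at this rate:
target_savings = 0.05
required_fte_cut = target_savings / savings_per_fte_cut
print(f"FTE cut implied by a 5% savings target: {required_fte_cut:.1%}")
```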
Further, DOD has not established time frames for when career fields should conduct another round of competency assessments to assess progress toward addressing previously identified gaps and to identify emerging needs. The department’s November 2016 acquisition workforce strategic plan identified that career field competency assessments should be conducted a minimum of every 5 years, but it is too soon to tell whether DOD will conduct these assessments as recommended in the plan. DOD also has not addressed certain personnel challenges resulting from the increased demand for its unmanned aerial systems. In 2014, we reported that the Air Force did not accurately identify the crew ratios needed to meet requirements for its unmanned aerial systems pilots or establish the effective mix of personnel to satisfy its pilot shortages, including evaluating the use of military enlisted and federal civilian personnel to help address pilot needs. In January 2017, we further reported that the Air Force and the Army had not resolved key challenges in managing these pilots or tailored their human capital strategies to address pilot gaps, to include evaluating the extent to which federal civilians could be used as pilots. Military compensation: Since 2011, we have reported that DOD has not completed the steps necessary to develop a more comprehensive compensation strategy that could improve the ability of the department to recruit and retain a highly qualified force to carry out its mission while minimizing unnecessary costs. DOD has taken some steps to evaluate the effectiveness of specific pay and benefits included in military compensation, as we suggested in March 2011, but has not comprehensively assessed the effectiveness of its mix of pays and benefits and used the results to develop a compensation strategy. 
For example, the department is implementing changes to the military retirement system that will provide eligible service members who have at least 2 but fewer than 20 years of service when departing the military with a portable retirement benefit. A DOD official stated in January 2017 that the department has also completed a study to review how military compensation compares to private sector compensation, among other efforts. However, as of March 2017, DOD had not completed an assessment of the effectiveness of all types of military pay and benefits, or identified opportunities to achieve long-term cost avoidance by addressing in a compensation strategy the types of compensation that are effective and by not incurring costs for compensation that may not be effective in helping it achieve its recruiting and retention goals. For example, in November 2015, we reported that special and incentive pays were not always being used to fill military occupational specialties that were consistently below authorized levels for the Army and the Army National Guard, and that incentives were sometimes being used for military occupational specialties that were consistently above approved levels. We further reported in February 2017 that DOD has not effectively managed special and incentive pays for its active-duty service members—which totaled more than $3.4 billion in fiscal year 2015. In May 2017, DOD officials noted that the Army's Career Satisfaction Program is one example of the services using nonmonetary incentives to improve retention. 
However, we found that while DOD and the military services have occasionally offered service members nonmonetary incentives, they do not routinely assess whether nonmonetary incentives could be used as less costly approaches to addressing retention challenges, and that DOD’s guidance for special and incentive pay does not explicitly incorporate personnel performance into eligibility criteria for retention bonuses as a way to foster top talent and improve program results. We also found that the military services were not consistently applying key principles of effective human capital management to their special and incentive pay programs for three high-skill occupations (nuclear propulsion, aviation, and cybersecurity) that reflect a range of characteristics of such programs and are associated with missions deemed critical by the department. In May 2017, DOD officials stated that DOD applies some human capital principles in its management of military compensation programs, noting that DOD’s review of the programs showed that they met or partially met 98 percent of the criteria for effective human capital management. However, we believe that more fully implementing such principles, to include more precisely targeting its bonuses to occupations in critical need and using these pays to foster its top talent, would help to ensure that DOD’s resources are optimized for the greatest return on investment. By comprehensively assessing its workforce mix and costs, strengthening the management of critical skill gaps, and establishing a cost-effective military compensation strategy, DOD would be better positioned to determine and maintain the most effective and efficient mix of military and civilian personnel and contractor support. Table 5 highlights key actions DOD should take to help address the challenges it faces to strategically manage its human capital. Since 2011, we have directed 67 recommendations to DOD in this area, of which 64 remain open. 
DOD spends billions of dollars each year acquiring business systems and contractor-provided services that provide fundamental support to the warfighter in the areas of health care; logistics; personnel; and financial management, among other areas. In fiscal year 2014 alone, DOD obligated $85 billion to its three largest types of contractor-provided services: knowledge-based, facility-related, and research and development services. This amount is more than double the amount that DOD obligated to purchase aircraft, land vehicles, and ships. DOD senior leaders have prioritized defense institutional reform, and have emphasized the need to improve business practices and reduce overhead as a means to achieve greater efficiencies and free up resources for higher priorities. However, problems in DOD’s management of the department’s business functions continue to negatively affect the ability of DOD to satisfy its mission. In 2005, we designated DOD’s business transformation efforts— those intended to increase the efficiency and effectiveness of what we identified as DOD’s core business functions—as high risk because DOD did not have integrated planning or sustained oversight of its business processes. We have also designated DOD’s efforts to modernize and consolidate the department’s business systems, contract management, financial management, and weapon systems acquisition as high risk because of planning and leadership challenges. Congressional direction and personnel growth in headquarters organizations have led DOD to pursue several personnel reduction initiatives to achieve efficiencies since 2014. However, DOD has faced obstacles accounting for the resources devoted to its multiple layers of headquarters activities because of complex and overlapping relationships among them, incomplete data, and unclear personnel requirements. 
Our work has found that DOD must address several weaknesses to successfully implement its business transformation efforts, manage investments to modernize business systems, manage the acquisition of services, and properly size the department’s headquarters organizations to accomplish assigned missions. DOD established new governance forums, issued new plans to guide its business transformation efforts, and established or clarified roles and responsibilities for senior positions related to its business functions. In 2012, DOD established the Defense Business Council as a senior-level governance forum to oversee its core business functions. The Defense Business Council has recently begun conducting high-level performance reviews to assess progress in achieving department-wide goals and objectives in DOD’s Agency Strategic Plan, which is intended to be a department-wide performance plan for assessing progress across DOD’s business areas. The Defense Business Council has also started to identify opportunities to gain efficiencies across DOD’s headquarters offices and defense agencies. For example, in March 2017, senior DOD officials stated that DOD had implemented a new initiative to review how it accounts for costs across its business functions. DOD and the military departments have also established roles and responsibilities for senior business transformation positions, such as Chief Management Officers (CMOs) and Deputy CMOs (DCMOs). DOD has further taken other steps to avoid potential overlap and duplication and gain efficiencies in its business systems investments. For example, DOD established an authoritative data source for defense business system certification funding and improved the data it uses to manage business systems acquisition. 
Senior DOD leadership also remains committed to addressing its contract management challenges, and since 2015 has made significant progress in addressing operational contract support issues, such as incorporating operational contract support considerations into operational plans. DOD has also established a framework to define its major headquarters activities, a key step needed to track resources for these organizations and identify opportunities to consolidate or eliminate certain positions to achieve the department’s goals to reduce its headquarters resources. DOD has made progress in all of these areas, but substantial work remains to strengthen its business operations and achieve efficiencies. The following sections identify our assessment of remaining work, including additional actions that DOD should take to make further progress. Business transformation: DOD has not conducted effective performance reviews needed to ensure accountability for achieving results for its business transformation initiatives, or established a department-wide performance plan to monitor progress. Although the Office of the DCMO has recently begun to hold performance reviews to assess progress in achieving department-wide strategic goals and objectives, the reviews have not held business function leaders accountable in part because military department performance information was not included in the scope of the reviews. In July 2015, the Office of the DCMO issued the DOD Agency Strategic Plan, which according to DOD is a plan that establishes goals and priorities to manage its major business operations. However, the plan does not identify specific initiatives to improve DOD’s business transformation efforts, identify the systems and processes needed to address business transformation matters, or identify how progress will be assessed. 
In addition, while the Agency Strategic Plan is intended to apply to the entire department, we reported that the military departments had a limited role in the development of the plan. Further, the military departments have not aligned their respective plans with the Agency Strategic Plan, or used the Agency Strategic Plan to monitor their business functions. Business system modernization: DOD has developed an enterprise architecture—a blueprint for DOD’s business system modernization efforts that is intended to guide and constrain the implementation of business systems; however, the current version is missing important content associated with achieving the department’s goal of using the architecture to guide, constrain, and enable interoperable business systems. In addition, the department has not fully defined and established management controls and plans to more effectively and efficiently manage its business system investments, which totaled approximately $10 billion in fiscal year 2015. DOD officials stated in May 2017 that the department has used its business enterprise architecture for at least the past three investment review cycles to help identify duplicate investments. In addition, officials have provided examples of benefits attributed, at least in part, to the department’s enterprise architecture. For example, according to officials with the Office of the DCMO, two proposed defense business systems were not approved due, in part, to architecture reviews that revealed that the requested capabilities were already available in other systems. In addition, DOD officials stated in May 2017 that the architecture informed a decision to investigate potential duplication and overlap and opportunities to develop shared services among fourth estate and financial management systems. However, the department has not yet demonstrated that it is actively and consistently using such assessments of potential duplication and overlap to eliminate duplicative systems. 
In January 2017, the department issued a plan to improve the usefulness of its business architecture. However, the department’s effort to complete its federated business architecture remains a work in progress. In addition, DOD needs to take steps to ensure that, among other things, documents submitted as part of the business system investment management process include critical information for conducting all assessments, such as information about cost in relationship to return on investment. We also reported in February 2017 that DOD had not yet established an action plan (or plans) highlighting how it intends to, among other things, improve its business system investment management process or improve its business system acquisition outcomes. Services acquisition: DOD has not fully developed guidance and plans needed to strategically manage its acquisition of services to determine what the department is buying today and what it intends to buy in the future, or provide the Congress with visibility into its planned spending for contracted services. Specifically, we reported in February 2017 that, while DOD issued new guidance in January 2016 for acquiring services, DOD lacks an action plan to enable it to assess progress in achieving its goals of more effectively managing services acquisition, and efforts to identify goals and associated metrics are in the early stages of development. We also reported in February 2016 that, while data on future service acquisitions are generally maintained by DOD program offices, DOD and military department guidance do not require that the data be specifically identified in DOD’s budget forecasts, and DOD’s January 2016 instruction does not clearly identify what level of detail should be collected, leaving DOD at risk of developing inconsistent data between the military departments. 
Headquarters management: Our body of work on DOD’s headquarters reduction initiatives found that department-wide efforts to improve the efficiency of headquarters organizations and identify related cost savings may not be fully implemented or may not result in meaningful savings. In February 2012, we reported that DOD could recognize cost avoidance and save billions of dollars by reviewing and identifying further opportunities for consolidating or reducing the size of headquarters organizations. In a 2015 review of its six business processes, which included savings opportunities beyond headquarters reductions, the Defense Business Board identified between $62 billion and $84 billion in potential cumulative savings opportunities for fiscal years 2016 through 2020 that could be achieved by allowing civilian personnel attrition and retirements to occur without replacement over the next 5 years and by improving core processes, such as reducing excessive organizational layers, among other factors. However, we reported that DOD has not had a clear or accurate accounting of headquarters’ resources, including contractors, to use as a starting point to track headquarters reduction initiatives. We further reported that DOD has not periodically reviewed the size and structure of these organizations, such as the geographic combatant commands, and that personnel management systems have not consistently identified and tracked assigned personnel. We also found that DOD headquarters organizations have neither systematically determined their workforce requirements nor established procedures to periodically reassess these requirements, as outlined in DOD and other guidance, limiting DOD’s ability to identify efficiencies and limit headquarters growth in these organizations. 
In 2016, DOD established a revised definition for “major headquarters activities,” but the one department-wide data set that identifies military and civilian positions by specific DOD headquarters functions contains unreliable data because DOD has not aligned these data with its revised definition. By effectively monitoring department-wide business transformation efforts, establishing management controls for its business system investments, developing guidance to monitor service acquisitions, and improving the reliability of its headquarters data, DOD will be better positioned to identify opportunities to gain additional efficiencies in its business operations. Table 6 highlights key actions DOD should take to help achieve efficiencies across its defense business operations. Since 2011, we have directed 49 recommendations to DOD in this area, of which 38 recommendations remain open, including 8 priority recommendations. Our body of work at DOD has identified four cross-cutting factors that have affected DOD’s ability to address its key mission challenges: (1) the lack of sustained leadership involvement, (2) a misalignment between programs and budgets and resources, (3) ineffective strategic planning and performance monitoring, and (4) an ineffective management control system. Lack of sustained leadership involvement: Sustained leadership involvement is critical to DOD’s success in addressing long-standing management challenges, implementing lasting department-wide reforms, and achieving greater accountability. Since 1990, we have reported on leadership challenges across the department. In 2007, we suggested to Congress that it consider enacting legislation to establish a full-time CMO position at DOD with significant authority and experience and a sufficient term to provide focused and sustained leadership over the department’s business functions. 
In 2008, Congress passed the National Defense Authorization Act for Fiscal Year 2008, which designated the Deputy Secretary of Defense as the CMO and created the DCMO position. This differed from our recommendation that DOD create a separate, full-time CMO position as an Executive Level II position that reports directly to the Secretary of Defense. Since 2008, DOD has made some progress in sustaining leadership over its business functions, including developing specific roles and responsibilities for the CMO and DCMO and establishing a senior-level governance forum co-chaired by the DCMO and the DOD Chief Information Officer to oversee the department’s business functions. However, DOD has had challenges retaining individuals in some of its top leadership positions, and significant work remains to address long-standing challenges in the management of DOD’s business functions (see fig. 11). Nine years after the creation of the CMO and DCMO positions, all of DOD’s business functions remain on our High-Risk List. For example, with respect to financial management, in 2005 we reported that DOD had not established a framework to oversee and integrate financial management improvement efforts, such as developing remediation plans and accountability mechanisms to ensure progress and lasting financial management reform. In February 2017 we further reported that DOD had not reasonably assured that its components had effective leadership and processes in place to substantially improve DOD’s financial management operations and audit readiness. Consequently, although DOD leadership has shown a commitment to financial management reform and established associated plans and guidance, we have seen little tangible evidence of progress in achieving significant financial management reforms. 
Regarding business system modernization, we reported in 2005 that DOD leadership had not ensured that these investments were effectively implementing acquisition best practices so that each investment might deliver the expected benefits and capabilities on time and within budget. We further reported in February 2017 that the DCMO and other DOD stakeholders did not yet have the full range of management controls in place needed to effectively oversee these investments. Congress has remained concerned about DOD’s leadership challenges and, in the National Defense Authorization Act for Fiscal Year 2017, established a new CMO position and replaced the Under Secretary of Defense for Acquisition, Technology, and Logistics with the Under Secretary of Defense for Research and Engineering and the Under Secretary of Defense for Acquisition and Sustainment. These new positions provide an opportunity to enhance the department’s leadership focus on its key mission challenges, but DOD will need to clearly define key responsibilities and authorities to help ensure that these positions can effectively drive transformation efforts. For example, section 901 of the National Defense Authorization Act for Fiscal Year 2017 requires the CMO to have extensive experience managing large or complex organizations, and to establish policies on and supervise all of the department’s business operations. In addition, the provision requires the CMO to have the authority to direct the Secretaries of the military departments and the heads of all other DOD organizations with regard to business operations. This provision further requires that the Secretary of Defense conduct a review of DOD leadership positions and subordinate organizations, and define relationships, including the placement of the CMO within the department, to inform how the position will be implemented. 
Our prior work has found that a CMO position similar to the one set forth in the provision may help DOD address its challenges in implementing business transformation and other efforts, and provide focused and sustained leadership over them. However, the provision does not specify how the CMO will carry out its authority, what the CMO’s precedence in the department will be, what the CMO’s executive schedule level will be, and how long the term of service for the position will be. The new Under Secretary for Research and Engineering will have responsibility for technological development and testing, while the Under Secretary for Acquisition and Sustainment will have responsibility for acquiring and sustaining DOD’s capabilities and overseeing the modernization of nuclear forces and the development of capabilities to counter weapons of mass destruction. We have reported on the potential benefits of separating technology development from system development for acquisition programs, but the National Defense Authorization Act for Fiscal Year 2017 did not contain any provisions for DOD to realign its acquisition process along these lines. Further, in considering the cumulative effect of these changes, it is unclear whether the newly established acquisition Under Secretary positions will have adequate authority to address long-standing issues in holding the military departments and service acquisition executives accountable for major defense acquisition programs before they start and for their execution once they begin. We will continue to assess the department’s leadership commitment to addressing DOD’s key mission challenges, and we will also assess the impact of these new positions. 
Misalignment between programs and resources and budgets: In January 2017, we reported that the federal government faces an unsustainable long-term fiscal path and that the Congress and the new administration will need to consider difficult policy choices in the short term regarding federal spending. However, since 2005, we have reported that DOD’s approach to planning and budgeting often results in a mismatch between the department’s programs and available resources. For example, while we have found that DOD has reported some progress in implementing acquisition reforms to control program costs for its weapon systems, we have reported that DOD has inconsistently implemented knowledge-based acquisition leading practices to estimate costs for its $1.5 trillion portfolio of large weapon systems. Because DOD does not routinely identify accurate and realistic resource needs, unexpected growth in some of its major acquisition programs—including the F-35, AIM-9X Block II Air-to-Air Missile, and MQ-8 Fire Scout—will affect numerous programs that simultaneously vie for significant funding commitments. In addition, we have reported that the Missile Defense Agency, which has spent approximately $90 billion since 2002, has been unable to fully estimate all life-cycle costs and stabilize acquisition funding baselines in order to assess affordability over time, which has affected DOD’s ability to make investment and program decisions that align with the budget. Further straining DOD’s budget is a proposed near-simultaneous recapitalization and modernization of all three legs of the nuclear triad, which DOD estimates will cost around $270 billion over the next two decades. 
Since 2001, DOD has increasingly relied on funding for overseas contingency operations (OCO) to pay for operating costs—those for day-to-day operations that are typically funded through the base budget—raising uncertainty as to whether over the long term the department can afford the forces and weapon systems and other programs it currently maintains. We reported in January 2017 that the amount of OCO appropriations DOD considers as nonwar related increased from about 4 percent in fiscal year 2010 to about 12 percent in fiscal year 2015, an increase that reflects DOD’s expanding the use of OCO appropriations from contingency-related operations in Iraq and Afghanistan to other activities, such as efforts to deter Russia and reassure U.S. allies. Senior DOD officials have acknowledged that costs funded by OCO appropriations are likely to endure after contingency operations have ceased. However, DOD has not developed a plan to transition enduring OCO-funded costs to the base budget, and senior-level DOD officials maintain that DOD will be unable to make this transition until there is sufficient relief from the sequester-level discretionary budget caps established in the Budget Control Act of 2011. In January 2017, we recommended that DOD collaborate with the Office of Management and Budget to modify guidance on what costs should be included in OCO funding requests, and that DOD develop a complete and reliable estimate of its enduring OCO-funded costs and report those costs with its future base budget requests. Ineffective strategic planning and performance monitoring: For more than a decade we have reported on strategic planning and performance monitoring challenges that have affected the efficiency and effectiveness of DOD’s operations both at the strategic readiness level and across all of DOD’s major business areas–including contract management, financial management, and supply chain management. 
DOD has made some important progress in these areas, such as implementing a corrective action plan and demonstrating sustained progress on the management of its spare parts from 2010 through 2017. As a result, we removed the inventory management component of the supply chain management high-risk area—an issue that has been on our High-Risk List since 1990. However, more work remains. For example, in 2016 we reported that DOD may be unable to determine the effectiveness of the military departments’ respective readiness recovery efforts or assess the departments’ ability to meet the demands of the National Military Strategy because DOD has not used effective strategic planning practices–that is, identified goals and metrics for measuring progress against the goals and evaluated performance and progress toward meeting the goals. We also reported that DOD had made progress in establishing an effective strategic plan to integrate business transformation efforts across DOD’s major business areas, but that DOD continued to lack a strategic planning process that defined a role for the military departments in those efforts. We have also reported that in its performance monitoring efforts DOD has missed opportunities to hold officials accountable for progress made toward DOD-identified goals and milestones, to take timely and well-informed actions to address identified challenges, and to encourage continuous improvements in performance across its major business functions. We reported in July 2015, for example, that DOD’s performance monitoring practices have been inconsistent with government-wide requirements because DOD had not conducted performance reviews that were led by the CMO or other top agency leaders at least once a quarter to review progress on all agency priority goals, which cover many of DOD’s major business functions, or to discuss at-risk goals and improvement strategies, among other issues. 
We also reported that DOD’s ability to make further progress in the business systems modernization and financial management business areas has been hindered by limitations in its performance monitoring. For example, DOD has not developed an action plan to monitor progress in making business system improvements and has not obtained complete, detailed information on all corrective action plans from the military services to fully monitor and assess DOD’s progress in resolving its financial management deficiencies. Ineffective management control system: A critical component of an effective management control system is the use of quality information to inform day-to-day decision making. However, we have also reported since 2011 that DOD does not have quality information on costs related to mission critical programs, the department’s headquarters functions, and the department’s major business areas. Without quality information regarding the costs associated with DOD’s mission-critical weapon systems, for example, DOD will be unable to effectively assess the affordability of the programs that support them. Among other things, DOD will be unable to accurately estimate the cost to recapitalize the nuclear triad, to complete the acquisition and deployment of the F-35, and to evaluate gaps that could result from the divestment of the A-10. We also reported in 2016 that DOD does not have cost data associated with functions within headquarters organizations, including within its business areas, which is needed to facilitate the identification of opportunities for consolidation or elimination of positions across an organization. Absent internal control systems needed to help ensure quality cost information, DOD’s ability to provide meaningful information to Congress to inform future budget and funding decisions is hindered. 
We have reported that DOD also faces long-standing challenges in implementing an effective management control system to improve accountability and effectively and efficiently achieve its mission. For example, since 2005 our body of work on DOD’s financial management has found that DOD has been unable to receive an audit opinion on its financial statements because of its serious financial management problems, including material internal control weaknesses. DOD has begun to address our 2014 recommendation about internal control weaknesses, identifying internal controls as a critical capability in DOD’s audit readiness guidance. However, as of April 2017, the Defense Finance and Accounting Service had not fully implemented the steps needed to address requirements in DOD’s audit readiness guidance related to planning, testing, and implementing corrective actions. As a result, the Defense Finance and Accounting Service does not have assurance that its processes, systems, and controls can produce and maintain accurate, complete, and timely financial management information for the approximately $200 billion in contract payments it annually processes on behalf of DOD components. Implementing internal control steps, to include performing required testing of contract pay processes and documenting how previously identified internal control deficiencies have been addressed, can help ensure that DOD implements, maintains, and sustains the necessary financial improvements to effectively carry out its contract pay mission. DOD plays a critically important role in protecting the security of the United States while simultaneously working to maintain regional security and stability abroad. The department must fulfill this vital role while facing a complex and changing national security environment with unique and rapidly evolving threats. 
Although the United States’ military strength is unparalleled across the globe, the department faces a myriad of influences that pose obstacles to its effectiveness and progress, including budgetary strains and uncertainty, and growing and evolving demands that challenge its ability to restore needed levels of readiness after more than a decade of war. At the same time, the department must be more efficient in managing the significant resources entrusted to it, including the billions of dollars invested in acquiring major weapon systems, as well as its vast and complex business operations supporting its warfighting mission. The department has made noteworthy progress addressing key challenges that affect its mission but significant work remains, and the department will need to continue to make difficult decisions regarding reaching an affordable balance between investments in current needs and new capabilities. We have issued hundreds of reports and made thousands of recommendations to DOD to help position it to address its challenges. While DOD has taken action to implement many of them, it lags behind the rest of the federal government in implementing our recommendations, with 1,037 recommendations remaining open, including 78 priority recommendations that we believe require top leadership attention and that, if implemented, could result in significant financial savings and increased efficiencies. Implementing these recommendations would go a long way toward addressing the factors that have consistently affected DOD’s ability to efficiently and effectively meet the department’s mission, as well as position the department to make significant and sustained progress across its key challenges. The need for progress will be critical in an era of increased uncertainty both domestically and abroad, and must continue to be the department’s top priority. We provided a draft of this report to DOD for comment. 
In its comments, reproduced in appendix III, DOD stated that although this report is a review of progress made on previous GAO audits and no new recommendations were issued, the department stands by its responses and concurrence to taking the requisite actions needed to address all previous recommendations. DOD also provided technical comments, which we incorporated into the report, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Deputy Secretary of Defense, and the Deputy Chief Management Officer. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3404 or berrickc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our work identified five key challenges that impact the Department of Defense’s (DOD) ability to accomplish its mission–specifically, the need for DOD to (1) rebalance forces and rebuild readiness in an evolving global security environment; (2) mitigate threats to cyberspace and expand cyber capabilities; (3) control the escalating costs of programs, such as certain weapon systems acquisitions and military health care, and manage its finances; (4) strategically manage its human capital; and (5) achieve greater efficiencies in defense business operations. We have listed related work completed since 2011 for each of these challenge areas below. Also listed below are our cross-cutting products, such as the high-risk; duplication, overlap, and fragmentation; and key issues products. Overseas Contingency Operations: OMB and DOD Should Revise the Criteria for Determining Eligible Costs and Identify the Costs Likely to Endure Long Term. GAO-17-68. Washington, D.C.: January 18, 2017. 
Air Force Training: Further Analysis and Planning Needed to Improve Effectiveness. GAO-16-864. Washington, D.C.: September 19, 2016. Military Readiness: DOD’s Readiness Rebuilding Efforts May Be at Risk without a Comprehensive Plan. GAO-16-841. Washington, D.C.: September 7, 2016. Force Structure: Better Information Needed to Support Air Force A-10 and Other Future Divestment Decisions. GAO-16-816. Washington, D.C.: August 24, 2016. Military Readiness: Progress and Challenges in Implementing the Navy’s Optimized Fleet Response Plan. GAO-16-466R. Washington, D.C.: May 2, 2016. F-35 Sustainment: DOD Needs a Plan to Address Risks Related to Its Central Logistics System. GAO-16-439. Washington, D.C.: April 14, 2016. Nuclear Weapons Sustainment: Improvements Made to Budget Estimates Report, but Opportunities Remain to Further Enhance Transparency. GAO-16-23. Washington, D.C.: December 10, 2015. Regionally Aligned Forces: DOD Could Enhance Army Brigades’ Efforts in Africa by Improving Activity Coordination and Mission-Specific Preparation. GAO-15-568. Washington, D.C.: August 26, 2015. Nuclear Weapons Sustainment: Improvements Made to Budget Estimates, but Opportunities Exist to Further Enhance Transparency. GAO-15-536. Washington, D.C.: July 31, 2015. Special Operations Forces: Opportunities Exist to Improve Transparency of Funding and Assess Potential to Lessen Some Deployments. GAO-15-571. Washington, D.C.: July 16, 2015. Navy Force Structure: Sustainable Plan and Comprehensive Assessment Needed to Mitigate Long-Term Risks to Ships Assigned to Overseas Homeports. GAO-15-329. Washington, D.C.: May 29, 2015. F-35 Sustainment: Need for Affordable Strategy, Greater Attention to Risks, and Improved Cost Estimates. GAO-14-778. Washington, D.C.: September 23, 2014. Security Force Assistance: More Detailed Planning and Improved Access to Information Needed to Guide Efforts of Advisor Teams in Afghanistan. GAO-13-381. Washington, D.C.: April 30, 2013. 
Missile Defense: Opportunity to Refocus on Strengthening Acquisition Management. GAO-13-432. Washington, D.C.: April 26, 2013. Defense Civil Support: DOD Needs to Identify National Guard’s Cyber Capabilities and Address Challenges in Its Exercises. GAO-16-574. Washington, D.C.: September 6, 2016. Civil Support: DOD Needs to Clarify Its Roles and Responsibilities for Defense Support of Civil Authorities during Cyber Incidents. GAO-16-332. Washington, D.C.: April 4, 2016. Defense Infrastructure: Improvements in DOD Reporting and Cybersecurity Implementation Needed to Enhance Utility Resilience Planning. GAO-15-749. Washington, D.C.: July 23, 2015. Insider Threats: DOD Should Strengthen Management and Guidance to Protect Classified Information and Systems. GAO-15-544. Washington, D.C.: June 2, 2015. Defense Cybersecurity: DOD Needs to Better Plan for Continuity of Operations in a Degraded Cyber Environment and Provide Increased Oversight. GAO-14-404SU. Washington, D.C.: April 1, 2014. (For Official Use Only) DOD Financial Management: Significant Efforts Still Needed for Remediating Audit Readiness Deficiencies. GAO-17-85. Washington, D.C.: February 9, 2017. Financial Audit: U.S. Government’s Fiscal Years 2015 and 2016 Consolidated Financial Statements. GAO-17-283R. Washington, D.C.: January 12, 2017. Littoral Combat Ship and Frigate: Congress Faced with Critical Acquisition Decisions. GAO-17-262T. Washington, D.C.: December 1, 2016. DOD Financial Management: Improvements Needed in the Navy’s Audit Readiness Efforts for Fund Balance with Treasury. GAO-16-47. Washington, D.C.: August 19, 2016. DOD Financial Management: Greater Visibility Needed to Better Assess Audit Readiness for Property, Plant, and Equipment. GAO-16-383. Washington, D.C.: May 26, 2016. Defense Infrastructure: More Accurate Data Would Allow DOD to Improve the Tracking, Management, and Security of Its Leased Facilities. GAO-16-101. Washington, D.C.: March 15, 2016. 
Military Base Realignments and Closures: More Guidance and Information Needed to Take Advantage of Opportunities to Consolidate Training. GAO-16-45. Washington, D.C.: February 18, 2016. DOD Financial Management: Continued Actions Needed to Address Congressional Committee Panel Recommendations. GAO-15-463. Washington, D.C.: September 28, 2015. Defense Health Care Reform: Actions Needed to Help Ensure Defense Health Agency Maintains Implementation Progress. GAO-15-759. Washington, D.C.: September 10, 2015. F-35 Joint Strike Fighter: Assessment Needed to Address Affordability Challenges. GAO-15-364. Washington, D.C.: April 14, 2015. Improper Payments: TRICARE Measurement and Reduction Efforts Could Benefit from Adopting Medical Record Reviews. GAO-15-269. Washington, D.C.: February 18, 2015. Defense Infrastructure: DOD Needs to Improve Its Efforts to Identify Unutilized and Underutilized Facilities. GAO-14-538. Washington, D.C.: September 8, 2014. Defense Health Care Reform: Additional Implementation Details Would Increase Transparency of DOD’s Plans to Enhance Accountability. GAO-14-49. Washington, D.C.: November 6, 2013. Ford Class Carriers: Lead Ship Testing and Reliability Shortfalls Will Limit Initial Fleet Capabilities. GAO-13-396. Washington, D.C.: September 5, 2013. Military Bases: DOD Has Processes to Comply with Statutory Requirements for Closing or Realigning Installations. GAO-13-645. Washington, D.C.: June 27, 2013. DOD Financial Management: Significant Improvements Needed in Efforts to Address Improper Payment Requirements. GAO-13-227. Washington, D.C.: May 13, 2013. Defense Infrastructure: Improved Guidance Needed for Estimating Alternatively Financed Project Liabilities. GAO-13-337. Washington, D.C.: April 18, 2013. Military Bases: Opportunities Exist to Improve Future Base Realignment and Closure Rounds. GAO-13-149. Washington, D.C.: March 7, 2013. 
Excess Facilities: DOD Needs More Complete Information and a Strategy to Guide Its Future Disposal Efforts. GAO-11-814. Washington, D.C.: September 19, 2011. Defense Acquisition Workforce: DOD Has Opportunities to Further Enhance Use and Management of Development Fund. GAO-17-332. Washington, D.C.: March 28, 2017. Military Compensation: Additional Actions Are Needed to Better Manage Special and Incentive Pay Programs. GAO-17-39. Washington, D.C.: February 3, 2017. Unmanned Aerial Systems: Air Force and Army Should Improve Strategic Human Capital Planning for Pilot Workforces. GAO-17-53. Washington, D.C.: January 31, 2017. DOD Inventory of Contracted Services: Timely Decisions and Further Actions Needed to Address Long-Standing Issues. GAO-17-17. Washington, D.C.: October 31, 2016. DOD Civilian and Contractor Workforces: Additional Cost Savings Data and Efficiencies Plan Are Needed. GAO-17-128. Washington, D.C.: October 12, 2016. Unmanned Aerial Systems: Further Actions Needed to Fully Address Air Force and Army Pilot Workforce Challenges. GAO-16-527T. Washington, D.C.: March 16, 2016. Civilian and Contractor Workforces: Complete Information Needed to Assess DOD’s Progress for Reductions and Associated Savings. GAO-16-172. Washington, D.C.: December 23, 2015. Defense Acquisition Workforce: Actions Needed to Guide Planning Efforts and Improve Workforce Capability. GAO-16-80. Washington, D.C.: December 14, 2015. DOD Inventory of Contracted Services: Actions Needed to Help Ensure Inventory Data Are Complete and Accurate. GAO-16-46. Washington, D.C.: November 18, 2015. Military Recruiting: Army National Guard Needs to Continue Monitoring, Collect Better Data, and Assess Incentives Programs. GAO-16-36. Washington, D.C.: November 17, 2015. Unmanned Aerial Systems: Actions Needed to Improve DOD Pilot Training. GAO-15-461. Washington, D.C.: May 14, 2015. Defense Contractors: Additional Actions Needed to Facilitate the Use of DOD’s Inventory of Contracted Services. GAO-15-88. 
Washington, D.C.: November 19, 2014. Human Capital: DOD Should Fully Develop Its Civilian Strategic Workforce Plan to Aid Decision Makers. GAO-14-565. Washington, D.C.: July 9, 2014. Air Force: Actions Needed to Strengthen Management of Unmanned Aerial System Pilots. GAO-14-316. Washington, D.C.: April 10, 2014. Human Capital: Opportunities Exist to Further Improve DOD’s Methodology for Estimating the Costs of Its Workforces. GAO-13-792. Washington, D.C.: September 25, 2013. Human Capital: Additional Steps Needed to Help Determine the Right Size and Composition of DOD’s Total Workforce. GAO-13-470. Washington, D.C.: May 29, 2013. Human Capital: Critical Skills and Competency Assessments Should Help Guide DOD Civilian Workforce Decisions. GAO-13-188. Washington, D.C.: January 17, 2013. Human Capital: DOD Needs Complete Assessments to Improve Future Civilian Strategic Workforce Plans. GAO-12-1014. Washington, D.C.: September 27, 2012. Defense Acquisition Workforce: Improved Processes, Guidance, and Planning Needed to Enhance Use of Workforce Funds. GAO-12-747R. Washington, D.C.: June 20, 2012. Defense Business Transformation: DOD Should Improve Its Planning with and Performance Monitoring of the Military Departments. GAO-17-9. Washington, D.C.: December 7, 2016. Defense Headquarters: Improved Data Needed to Better Identify Streamlining and Cost Savings Opportunities by Function. GAO-16-286. Washington, D.C.: June 30, 2016. DOD Major Automated Information Systems: Improvements Can Be Made in Reporting Critical Changes and Clarifying Leadership Responsibility. GAO-16-336. Washington, D.C.: March 30, 2016. DOD Service Acquisition: Improved Use of Available Data Needed to Better Manage and Forecast Service Contract Requirements. GAO-16-119. Washington, D.C.: February 18, 2016. Defense Satellite Communications: DOD Needs Additional Information to Improve Procurements. GAO-15-459. Washington, D.C.: July 17, 2015. 
DOD Business Systems Modernization: Additional Action Needed to Achieve Intended Outcomes. GAO-15-627. Washington, D.C.: July 16, 2015. Defense Headquarters: DOD Needs to Reassess Personnel Requirements for the Office of the Secretary of Defense, Joint Staff, and Military Service Secretariats. GAO-15-10. Washington, D.C.: January 21, 2015. Defense Headquarters: DOD Needs to Reevaluate Its Approach for Managing Resources Devoted to the Functional Combatant Commands. GAO-14-439. Washington, D.C.: June 26, 2014. DOD Business Systems Modernization: Further Actions Needed to Address Challenges and Improve Accountability. GAO-13-557. Washington, D.C.: May 17, 2013. Defense Headquarters: DOD Needs to Periodically Review and Improve Visibility of Combatant Commands’ Resources. GAO-13-293. Washington, D.C.: May 15, 2013. DOD Business Systems Modernization: Governance Mechanisms for Implementing Management Controls Need to Be Improved. GAO-12-685. Washington, D.C.: June 1, 2012. Defense Headquarters: Further Efforts to Examine Resource Needs and Improve Data Could Provide Additional Opportunities for Cost Savings. GAO-12-345. Washington, D.C.: March 21, 2012. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-17-333SP. Washington, D.C.: March 30, 2017. High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017. The Nation’s Fiscal Health: Action Is Needed to Address the Federal Government’s Fiscal Future. GAO-17-237SP. Washington, D.C.: January 17, 2017. Performance and Accountability Report, Fiscal Year 2016. GAO-17-1SP. Washington, D.C.: November 15, 2016. 2016 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-16-375SP. Washington, D.C.: April 13, 2016. Managing for Results: Agencies Report Positive Effects of Data-Driven Reviews on Performance but Some Should Strengthen Practices. GAO-15-579. 
Washington, D.C.: July 7, 2015. 2015 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-15-404SP. Washington, D.C.: April 14, 2015. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. DOD Financial Management: The Defense Finance and Accounting Service Needs to Fully Implement Financial Improvements for Contract Pay. GAO-14-10. Washington, D.C.: June 23, 2014. 2012 Annual Report Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. 21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 1, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. Defense Acquisitions: Assessments of Major Weapon Programs. GAO-04-248. Washington, D.C.: March 31, 2004. Military Readiness: New Reporting System Is Intended to Address Long- Standing Problems, but Better Planning Is Needed. GAO-03-456. Washington, D.C.: March 28, 2003. In addition to the contact named above, Matt Ullengren (Assistant Director); Bonnie Anderson; Lori Atkinson; Jason Bair; Vincent Balloon; Thomas Baril; Tracy Barnes; Daniel Berg; Margaret Best; Arkelga Braxton; Penney Harwell Caramia; Angela Clowers; Kevin Copping; Alissa Czyz; Timothy DiNapoli; Debra Draper; Gary Engel; Brenda S. Farrell; Gayle Fischer; Gina Flacco; Paul Francis; Brent Helt; Gina Hoffman; Michael Holland; Charles Michael Johnson, Jr.; Mae Jones; Joseph Keener; Asif Khan; Joseph W. Kirschbaum; Brian Lepore; Michele Mackin; Ned Malone; Judith McCloskey; Jacqueline McColl; Valerie Melvin; Zina Dache Merritt; J. 
Christopher Mihm; Jamilah Moon; Elizabeth Morris; Marcus Oliver; John Pendleton; Natalia Pena; Richard Powelson; Erika Prochaska; William Reinsberg; James Reynolds; Cary B. Russell; Tina Won Sherman; Andrew Von Ah; Shana Wallace; Chris Watson; and Kristy Williams made key contributions to this report.
The United States faces a complex national security environment, including strategic challenges presented by traditional state actors and destabilizing nonstate actors, such as the Islamic State of Iraq and Syria. Recognizing these challenges, DOD has emphasized the importance of providing forces that are capable of performing a full range of missions. GAO has issued hundreds of reports that bring greater attention to areas where DOD can strengthen its operations to more efficiently and effectively meet its mission. This report identifies (1) key challenges affecting DOD's ability to accomplish its mission, progress made on these challenges, and work remaining; and (2) factors that have affected DOD's ability to address these key challenges. This report builds on GAO's past work, with an emphasis on reports issued since 2011. GAO also analyzed DOD information on recent actions taken in response to GAO's prior work.

The Department of Defense (DOD) faces five key challenges that significantly affect the department's ability to accomplish its mission: the need to (1) rebalance forces and rebuild readiness; (2) mitigate threats to cyberspace and expand cyber capabilities; (3) control the escalating costs of programs, such as certain weapon systems acquisitions and military health care, and better manage its finances; (4) strategically manage its human capital; and (5) achieve greater efficiencies in defense business operations. DOD has demonstrated progress addressing these challenges, but significant work remains. Specifically:

Rebalance forces and rebuild readiness: The military services today are generally smaller and less combat ready than they have been in many years, and each military service has been forced to cut critical needs in areas such as training, maintenance, and modernization due to budgetary constraints, according to DOD.
Officials said that, as a result of the current state of readiness, military forces are not strong enough to protect vital U.S. national security interests from worldwide threats. DOD has pursued plans to strengthen military capabilities but must take key actions to rebalance, rebuild, and modernize the capabilities of U.S. military forces. For example, DOD needs to take further steps to meet the demands of geographic commanders and examine whether there are opportunities to reduce the high demand on special operations forces. DOD also needs to provide decision makers with complete and accurate budget and cost information to make well-informed decisions on weapon systems modernization investments and to mitigate potential risks to certain modernization initiatives, including the F-35 aircraft, a program that DOD plans to spend over $1 trillion to operate and sustain over its life cycle. The military services have plans underway to rebuild readiness for portions of their military forces, but these initiatives are at risk without more comprehensive planning and an approach to measure progress in attaining goals (see table). Since 2011, GAO has directed 39 recommendations to DOD in this area, of which 35 remain open, including 5 priority recommendations.

Mitigate threats to cyberspace and expand cyber capabilities: In February 2016, the Director of National Intelligence identified cyber threats as first among strategic threats to the United States, surpassing terrorism. According to the 2016 Federal Information Security Modernization Act report, more than 30,000 data security incidents compromised federal information systems during fiscal year 2016, 16 of which were categorized as major incidents. DOD has become increasingly reliant on the Internet and other networks, which are central to its military operations and enable essential services.
At the same time, the vulnerability of its cyber networks has grown significantly, due in part to the increase in the severity of cyber attacks. DOD has made progress in developing a cyber strategy to defend its networks and protect the nation from cyber attacks, but it needs to take additional actions to improve its planning for the continuity of operations in a degraded cyber environment, such as providing defense organizations with guidance and training to practice responses during exercises. DOD also needs to take further action to strengthen its insider threat awareness program to address the increased risk of unauthorized disclosure of classified information from defense information systems, and to improve the visibility and oversight of the cyber capabilities of all National Guard units, such as computer network defense teams that could be used during a cyber incident. Since 2011, GAO has directed 33 recommendations to DOD in unclassified and sensitive but unclassified reports, of which 14 remain open, including 5 priority recommendations.

Control escalating costs and manage finances: DOD's $580 billion fiscal year 2016 budget accounts for nearly half of the federal government's discretionary spending, and DOD's costs are growing. DOD plans to invest $574 billion to develop and acquire 78 major acquisition programs through fielding, such as the F-35 and the Littoral Combat Ship, while annual military health care costs are expected to increase from about $60 billion in fiscal year 2017 to about $70 billion by fiscal year 2028. Further, DOD remains one of the few federal entities that cannot demonstrate an ability to accurately account for and reliably report its spending or assets. DOD has undertaken a series of reform initiatives to control costs and improve its financial management, but it needs to more consistently implement leading acquisition practices to manage the costs of its weapon systems.
DOD also needs to better address improper payments to control rising costs in the military health system, which has experienced a 217 percent increase in costs since 2001 (see fig.). Further, DOD should take steps to identify underutilized space in its facilities to reduce its reliance on costly leased facilities. Finally, DOD needs to remediate financial management deficiencies, which prevent it from producing auditable financial statements and leave it with inadequate financial and other information for managing its operations. Since 2011, GAO has directed 79 recommendations to DOD in this area, of which 72 remain open, including 52 priority recommendations.

Strategically manage human capital: DOD estimates that it will spend nearly $180 billion in fiscal year 2017 on pay and benefits for its military personnel and about $70 billion for its civilian employees. Taken together, funding for military and civilian pay and benefits represents nearly 50 percent of DOD's budget in fiscal year 2016 (see fig.). DOD also estimates that it spent about $115 billion on certain contractor-provided services in fiscal year 2015, although GAO has raised questions regarding the reliability of DOD's information on its contractor workforce. Current budget and long-term fiscal pressures on the department increase the importance of strategically managing DOD's human capital. DOD has taken steps to develop better information about the skill sets possessed and needed within the department's military, civilian, and contractor workforces, but it needs to take further actions to complete a workforce mix assessment, improve the methodology for estimating workforce costs, and address skill gaps in critical workforces.
DOD should also establish a comprehensive compensation strategy for its military personnel to help achieve its recruiting and retention goals, including a cost-effective approach for managing the $3.4 billion the department spent in fiscal year 2015 on special and incentive pays for active-duty service members. Since 2011, GAO has directed 67 recommendations to DOD in this area, of which 64 remain open.

Achieve greater efficiencies in defense business operations: DOD spends billions of dollars each year acquiring business systems and contractor-provided services to support the warfighter. In 2014 alone, DOD obligated $85 billion on three types of contractor-provided services, including an amount obligated for knowledge-based and research and development services that was more than double what the department spent to purchase aircraft, land vehicles, and ships. DOD has emphasized the need to improve its business practices, reduce overhead, and free up resources for higher priorities, but it needs to take additional actions to drive business transformation efforts, implement management controls for its business systems investments, and develop guidance to manage the acquisition of contracted services. DOD also needs to improve the reliability of its data so that it can properly size its headquarters organizations, which have experienced significant growth, to accomplish missions and identify potential cost savings. Since 2011, GAO has directed 49 recommendations to DOD in this area, of which 38 remain open, including 8 priority recommendations.

GAO's prior work identified four cross-cutting factors that have affected DOD's ability to address the department's key challenges.
Specifically:

Lack of sustained leadership involvement: More than 9 years after Congress designated the Deputy Secretary of Defense as the Chief Management Officer and created the Deputy Chief Management Officer position to provide leadership over the department's business functions, all of DOD's business areas remain on GAO's High-Risk List of areas that are vulnerable to waste, fraud, or mismanagement (see fig.). In December 2016, Congress established a Chief Management Officer position separate from the Deputy Secretary of Defense and replaced the Under Secretary of Defense for Acquisition, Technology, and Logistics with two new Under Secretary positions to further address DOD's leadership challenges. These new positions provide an opportunity to enhance DOD's leadership focus on its key challenges, but DOD will need to clearly define the key responsibilities and authorities for these positions to help ensure that they can effectively drive transformation efforts.

Misalignment between programs and resources and budgets: The federal government faces an unsustainable long-term fiscal path, and Congress and the new administration will need to consider difficult policy choices in the short term regarding federal spending. However, since 2005, GAO has reported that DOD's approach to planning and budgeting often results in a mismatch between the department's programs and available resources. As a result, DOD faces significant affordability challenges for some of its major acquisition programs, which have unsustainable cost estimates and will vie for significant funding commitments.

Ineffective strategic planning and performance monitoring: GAO has reported since 2005 on strategic planning and performance monitoring challenges that have affected the efficiency and effectiveness of DOD's operations, both at the strategic readiness level and across all of DOD's major business areas, including contract management, financial management, and supply chain management.
DOD has missed opportunities to hold officials accountable for progress toward meeting goals and milestones, to take timely and well-informed actions to address identified challenges, and to encourage continuous improvements in performance across its major business functions.

Ineffective management control system: DOD has not addressed long-standing challenges in implementing an effective management control system to improve accountability and effectively and efficiently achieve its mission. DOD does not have quality information on costs related to mission-critical programs, such as weapon systems, and the department is unable to effectively assess the affordability of the programs that support them. Since 2005, GAO has reported on internal control deficiencies in DOD's financial management that have contributed to inconsistent and sometimes unreliable reports to Congress on weapon system operating and support costs, among other areas. This inconsistent and unreliable reporting limits the visibility that Congress needs to effectively oversee defense programs and impairs its ability to make cost-effective choices.

GAO has made approximately 3,100 recommendations to DOD since 2006. Of these, about 1,037 remain open, including 78 priority recommendations that, if implemented, could significantly improve DOD's operations. In commenting on this report, DOD stated that although the report made no new recommendations, the department stands by its previous responses and its concurrence with taking the actions needed to address all prior recommendations.
In 1985, Congress required the Department of Defense to destroy the U.S. stockpile of chemical agents and munitions and to establish an organization within the Army to manage the agent destruction program. Later, Congress also directed DOD to research and develop technological alternatives to incineration for disposing of chemical agents and munitions. These activities evolved into the Chem-Demil Program. The Chem-Demil Program includes the Chemical Stockpile Emergency Preparedness Program, created in 1988 to enhance the emergency management and response capabilities of communities near the storage sites in case of an accident. The Nonstockpile Chemical Materiel Product was added in 1993 to destroy any chemical weapons or materiel not included in the stockpile disposal program. The Chemical Stockpile Disposal Project has used or plans to use incineration to destroy chemical agents at five sites: Johnston Atoll in the Pacific Ocean; Anniston, Alabama; Pine Bluff, Arkansas; Umatilla, Oregon; and Tooele, Utah. Tooele is the only site with a facility currently operating. The three other stateside facilities are scheduled to begin operations in fiscal years 2002-2003. The Johnston Atoll facility has finished destroying its stockpile and is being closed. The Alternative Technologies and Approaches Project will use non-incineration methods (such as agent neutralization by chemical treatment) to destroy agents in bulk containers at Newport, Indiana, and Aberdeen, Maryland. The Assembled Chemical Weapons Assessment Program is also researching alternative methods to destroy agents in weapons at Pueblo, Colorado, and Blue Grass, Kentucky. The Office of the Secretary of Defense and the Department of the Army share management roles and responsibilities in the Chem-Demil Program. The Program Manager of the Assembled Chemical Weapons Assessment Program reports to the Under Secretary of Defense for Acquisition, Technology, and Logistics.
Thus, it is independent of the Program Manager for Chemical Demilitarization, who reports to the Assistant Secretary of the Army (Installations and Environment). In 1997, the United States ratified the Chemical Weapons Convention, a treaty committing member nations to dispose of selected chemical agents and materiel by April 29, 2007. In September 2001, the Army updated the life cycle cost estimate for the Chem-Demil Program from $15 billion to $24 billion. The new cost estimate extended the agent destruction schedule at four of the eight stateside sites beyond the initial target date of April 2007. Despite setbacks experienced at Johnston Atoll, Tooele, Utah, and Umatilla, Oregon, among others, the incineration program has successfully destroyed over 25 percent of the original stockpile (see table 1). The Lessons Learned Program was created in part because many different contractors were involved in the incineration program and a system was needed to collect and preserve institutional knowledge and acquired experience. The program is intended to identify, capture, evaluate, store, and share (implement) lessons learned during the different phases of the chemical stockpile demilitarization process. It collects two different kinds of lessons: "design" lessons covering engineering and technical processes and "programmatic" lessons involving management, quality assurance, emergency response, and public outreach. As criteria for assessing the knowledge management processes used by the Lessons Learned Program, we selected four federal organizations that practice knowledge management and operate lessons learned programs. In making our selections, we reviewed literature and spoke with knowledge management experts to find organizations recognized for their ability to share lessons or effectively manage knowledge. We identified the following organizations: the Center for Army Lessons Learned, the Department of Energy, the U.S.
Army Corps of Engineers, and the Federal Transit Administration (for more details, see appendix II). There are two levels of authority involved in developing lessons learned from proposed engineering changes. A Configuration Control Board composed of headquarters staff in the Office of the Program Manager for Chemical Demilitarization has authority to approve, reject, or defer engineering change proposals that involve costs above a set limit or that affect multiple sites. The Field Configuration Control Boards have authority over changes at their sites involving lower costs. In September 2001, the Lessons Review Team (consisting of headquarters staff) was established to screen all lessons and engineering changes and provide the information needed to determine which lessons require a response from sites. For more information on the lessons learned process, see appendix III. The Lessons Learned Program has made valuable contributions in support of the Chemical Stockpile Disposal Project's efforts to safely destroy the chemical stockpile. It has generally operated consistently with knowledge management principles and lessons sharing best practices and has successfully captured and shared thousands of lessons. However, the program does not apply or incorporate all knowledge management principles and lessons sharing best practices. For example, the program does not provide needed guidance for senior managers; it does not have a formal validation procedure to determine whether a problem has been fixed; and the database of lessons learned needs improvement. The Lessons Learned Program has contributed to the Chem-Demil Program's goal of destroying the chemical weapons stockpile while promoting safety, maintaining schedule, and saving or avoiding costs. We found that the Chem-Demil Program's management, through its leadership, encourages headquarters, field staff, and contractor personnel in the incineration program to use the Lessons Learned Program.
It has provided funding and has established processes to capture, evaluate, store, and share lessons. It is committed to continuous improvement and has provided the technology needed to support the lessons learned process. Finally, it fosters a culture in which knowledge sharing is an important element of day-to-day operations. While it is difficult to quantify the benefits of each lesson, available data indicate that lessons learned have generally helped avoid on-the-job injuries (by using government-furnished, approved tools that are better suited to specific tasks), reduce costs (by improving the containers used to transport weapons), or maintain schedules (by improving the design of a socket to disassemble weapons). We also found that lessons from accidental releases of chemical agents at Johnston Atoll and Tooele, Utah, were implemented at other incineration sites under construction, thus incorporating improvements into the design of those new facilities. The Lessons Learned Program does not have guidance explaining how senior managers (at headquarters) should use it in support of their decision-making process. Specifically, there is no guidance that defines the procedures to be followed when an alternative to a lesson is chosen or when a lesson is not implemented. Lessons learned guidance for another federal agency recommends that lessons be used to optimize management decision making and to interact with other management tools such as reviews, investigations, root-cause analyses, and priorities. We reviewed documentation of lessons learned from incidents at the Johnston Atoll and Tooele, Utah, facilities and found that three other facilities (Anniston, Umatilla, and Pine Bluff) had not implemented a lesson that had evolved from problems with pipes in the pollution abatement systems. The Tooele site had fixed its problem with a superior and more expensive material (Hastelloy) than the material used at the other sites.
Headquarters decided not to implement the lesson at the three sites primarily because doing so would have involved higher initial costs. This decision ultimately raised serious safety concerns, increased costs, and delayed the schedule. In February 2002, pipes at Anniston had failures similar to those experienced at the first two sites. This raised safety concerns and resulted in a 4-week delay to replace the pipes with Hastelloy. It is too early to determine whether the material used at the Umatilla and Pine Bluff sites will have the same problems. Although they need flexibility to manage the program, senior managers also need guidance to help them make decisions that consider the potential impact of not implementing lessons learned. This process would include safety and risk analyses that can provide criteria should they decide not to adopt a lesson learned. There is no formal procedure to ensure that lessons or corrective actions that have been implemented have fully addressed a deficiency. Chem-Demil Program guidance for engineering change proposals does require that changes be tracked and reported after implementation, but there is no similar requirement in the guidance for the Lessons Learned Program (which includes programmatic lessons). Both contractor and incineration project officials also confirmed that there are no procedures for monitoring the effectiveness of corrective actions. As a result, a problem could recur and affect safety and costs. As shown in figure 1, the Lessons Learned Program process does not contain the final validation stage (dashed line), which most knowledge management systems and Army guidance consider a necessary step. As we previously reported, Army guidance states that lessons learned programs should have a means for testing or validating whether a corrective action has resolved a deficiency.
The standard issued for another federal lessons learned program indicates that analyses should be made to evaluate improvements or to identify positive or negative trends. The standard also states that corrective actions associated with lessons learned should be evaluated for effect and prioritized. Without such a validation procedure in the architecture of the Lessons Learned Program, there is little assurance that problems have been resolved, and the possibility of repeating past mistakes remains. The lessons learned database includes about 3,400 issues, 3,055 engineering change proposals, and 2,198 lessons, but it is not easy to obtain fast and ready access to relevant information. Furthermore, the lessons in the database are not prioritized, making it difficult to identify which lessons are most important and which need to be verified and validated. It is important that an organization employ appropriate technology to support the participants of a lessons learned program; however, making a technology available does not automatically guarantee its use or acceptance. According to lessons sharing best practices, the goal of technology is to (1) match a solution to users' needs, (2) establish a simple content structure so that items may be found easily and retrieved quickly, and (3) deliver only relevant information from all possible sources. According to database users we interviewed and surveyed, it is difficult to find lessons because the search tool requires very specific key words or phrases, involves multiple menus, and does not link lessons to specific events. As a result, some users are reluctant to use the database and thus may not benefit from it when making decisions that affect the program. Many users who responded to our survey stated that they experienced difficulties in searching the database, and some we interviewed described specific problems with searches.
One described the database as "frustrating." We tested the search tool and also had difficulty finding lessons linked to specific incidents. Users we interviewed made a number of suggestions to improve the Lessons Learned Program's database, including improving the search capability, organizing lessons by subject matter, ranking or prioritizing lessons, creating links to other documents, providing a Web-based link to the database, periodically purging redundant data, and making access screens more user-friendly. Furthermore, because the database does not prioritize lessons, managers may be unaware of some important areas or issues that need to be monitored or lessons that need to be reviewed and validated. By contrast, lessons learned processes used by the selected federal agencies include periodic reviews of the usefulness of lessons and the archiving of information that is no longer pertinent or necessary. The processes also include prioritizing lessons by risk, immediacy, and urgency. In 1998, the Army Audit Agency recommended that obsolete items be purged or archived from the database and that current and future lessons be prioritized. In September 2001, the Chem-Demil Program created a Lessons Review Team to begin identifying "critical" lessons (those requiring a response), but the team is not prioritizing lessons. The program also did not adhere to knowledge management principles and lessons sharing best practices in several other areas. For example, the Chem-Demil Program's management plan does not explain how the Lessons Learned Program is to achieve its goals or define performance measures to assess effectiveness. Knowledge management principles stress the importance of leaders articulating how knowledge sharing will be used to support organizational goals. Furthermore, the Chem-Demil Program does not provide incentives to encourage involvement in the Lessons Learned Program.
Lessons sharing best practices and knowledge management principles prescribe developing and using performance measures to determine the effectiveness of a program. The Lessons Learned Program currently surveys employees after workshops to measure their satisfaction; however, these surveys are not sufficient to assess the overall effectiveness of the program. The program is attempting to identify ways to measure the costs and benefits derived from lessons learned. Knowledge management principles also encourage using performance evaluations, compensation, awards, and recognition as incentives for participation in lessons learned programs. The lack of incentives in the Lessons Learned Program may lead to missed opportunities to identify and share lessons learned. The Lessons Learned Program has shared thousands of lessons among the five incineration sites through the different phases of construction, testing, and destruction of chemical agents. However, as the Chem-Demil Program evolved through the 1990s and the components using alternative technologies were added, the scope of the Lessons Learned Program did not expand to share lessons with the new components (see app. IV for a history of the Chem-Demil Program's evolution). The Lessons Learned Program remained primarily focused on the five incineration sites. At the same time, each stockpile destruction component developed its own separate lessons learned, without any program-wide policies or procedures in place to ensure coordination or sharing of information across components. We reported in May 2000 that effective management of the Chem-Demil Program was being hindered by a complex organizational structure and ineffective coordination. This has created barriers to sharing.
Today, the four sites that are likely to use alternative technologies are not full participants in the lessons learned effort: The Assembled Chemical Weapons Assessment Program does not fully participate in the lessons learned process or activities. In at least one instance, the Assembled Chemical Weapons Assessment Program requested from the Program Manager for Chemical Demilitarization a package of data, including lessons on the pollution abatement system filters, mustard thaw, and cost estimates. The data were eventually provided, but too late to be used during a DOD cost data review. Because it could not obtain the incineration project's information in a timely manner, the program had to submit incomplete cost data for the review. The Alternative Technologies and Approaches Project does have access to the Lessons Learned Program's database, but it plans to develop its own separate database that it will share with the Lessons Learned Program only at "key milestones." The project's information, however, could be very valuable to other components of the Chem-Demil Program, especially the Assembled Chemical Weapons Assessment Program, which also researches alternative technologies. This plan could lead to lost opportunities and duplication of effort. Many of the lessons learned by the incineration project could be used by the other components of the Chem-Demil Program to promote safe, cost-effective, and on-time operations. Many of the technical processes (storing, transporting, unloading, and disassembling weapons) and programmatic processes (regulatory compliance, management, and public relations practices) used by the Chemical Stockpile Disposal Project are very similar to those used by the other programs. This is also the case for the processes used to develop destruction operating (throughput) rates and cost and schedule projections.
In fact, the majority of processes at incineration facilities are the same as those used by the Assembled Chemical Weapons Assessment Program and the Alternative Technologies and Approaches Project. Under these circumstances, promoting a culture of knowledge sharing would enable all components to capture and use organizational knowledge. Furthermore, the Pueblo, Colorado, site (and possibly the Blue Grass, Kentucky, site), now managed by the Assembled Chemical Weapons Assessment Program, which reports to a DOD office, may be transferred to the Army's Chem-Demil Program. If this transfer of responsibilities does take place, it would be important for the two programs to already be sharing information fully and seamlessly. Even if the transfer does not take place, knowledge management principles and lessons sharing best practices both dictate that components of the same program should share information, especially if they all have a common goal. The Lessons Learned Program has made important contributions to the safe destruction of the nation's stockpile of chemical weapons. We found that the program generally adheres to knowledge management principles and lessons sharing best practices. However, the program's full potential has not been realized. The program needs guidance to help senior managers weigh the potential impact of not implementing lessons learned. This guidance would be a set of procedures, including safety and risk analyses, to be followed before deciding to make an exception to a lesson learned. Without such guidance, decision makers, in at least one case, chose lower cost over safety and schedule, ultimately at the expense of all three. Also, the Lessons Learned Program lacks procedures to validate the effectiveness of implemented lessons.
The lack of a validation step partially defeats the purpose of the lessons learned process, which relies on the confirmed effectiveness of solutions emerging from knowledge and experience. If the effectiveness of a lesson cannot be validated over time, problems may emerge again, with a negative impact on safety, costs, and schedule. Further, the information in the lessons learned database is not easily accessible or prioritized. These drawbacks have frustrated users and may discourage them from using the database, which could lead to wrong or misinformed decisions that affect safety. In addition, there is no overarching coordination or sharing of information across all the components of the Chem-Demil Program, which grew and evolved over time without policies or procedures to ensure that knowledge would be captured and communicated fully. As a result, fragmented or duplicative efforts continue today, and the Assembled Chemical Weapons Assessment Program in particular lacks access to important data maintained by the Chemical Stockpile Disposal Project and the Alternative Technologies and Approaches Project. In the case of the Chem-Demil Program, the absence of policies and procedures promoting and facilitating the broadest dissemination of lessons learned places the safety, cost-effectiveness, and schedule of chemical weapons destruction at risk. To improve the effectiveness and usefulness of the Chemical Demilitarization Program's Lessons Learned Program, we recommend that the Secretary of Defense direct the Secretary of the Army to (1) develop guidance to assist managers in their decision making when making exceptions to lessons learned, (2) develop procedures to validate, monitor, and prioritize lessons learned to ensure that corrective actions fully address the deficiencies identified as the most significant, and (3) improve the organizational structure of the database so that users may easily find information, and develop criteria to prioritize lessons in the database.
We also recommend that the Secretary of Defense direct the Secretary of the Army to (1) develop policies and procedures for capturing and sharing lessons on an ongoing basis with the Alternative Technologies and Approaches Project and (2) in consultation with the Under Secretary of Defense (Acquisition, Technology, and Logistics), develop policies and procedures for capturing and sharing lessons on an ongoing basis with the Assembled Chemical Weapons Assessment Program. The Army concurred with our five recommendations and provided explanatory comments for each one. However, these comments do not address the full intent of our recommendations. With regard to our recommendation that it provide guidance to assist managers when deciding to make an exception to a lesson, the Army stated that the Lessons Review Team has guidance for characterizing the severity level of lessons learned. However, as our report clearly points out, this guidance is for site officials and is insufficient to assist senior managers at headquarters with important decisions involving costly lessons that could potentially impact several sites. We believe that good management practices require that senior managers make decisions based on risk, safety, and cost analyses and that guidance should be developed to support this decision-making process, as we recommended. In concurring with our recommendation to develop procedures to ensure that corrective actions fully address deficiencies, the Army stated that it is initiating an effort whereby the systems contractors will be responsible for validating, monitoring, and prioritizing lessons. The Army's Lessons Learned Program currently does not validate the results of corrective actions. Contracting out this important function will require monitoring by the Chem-Demil Program to ensure that validation is properly conducted, as we recommended. The Army stated that it has improved the Lessons Learned database to make it easier to locate information.
Converting the database to an Internet-based program should also improve its accessibility and utility. Although these actions address some users' concerns, the Army needs to address all related user issues identified in our report in order to improve the benefits of the database. The Army concurred with our recommendation to develop policies and procedures to capture and share lessons with the two alternative technology programs. It stated that progress had been made toward sharing lessons between the Alternative Technologies and Approaches Project and the Lessons Learned Program at key milestones. The Army also said it has shared the lessons database with the Assembled Chemical Weapons Assessment Program. However, the Army should require, as we recommended, that policies and procedures for capturing and sharing lessons on an ongoing basis be established, instead of sharing at key milestones and on a one-way basis. This approach would ensure that both alternative technology programs fully participate in the Lessons Learned Program and that the database is constantly enriched to enhance safety-, cost-, and schedule-based decisions for all components of the Chem-Demil Program. The Army's comments are printed in appendix V. The Army also provided technical comments, which we incorporated where appropriate. We are sending copies of this report to interested congressional committees; the Secretaries of Defense and of the Army; the Assistant Secretary of the Army (Installations and Environment); the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Director, Federal Emergency Management Agency; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-6020 if you or your staff have any questions regarding this report.
Key contributors to this report were Donald Snyder, Bonita Oden, Pamela Valentine, Steve Boyles, and Stefano Petrucci. There have been three releases of agent from operating incineration facilities and one incident during construction that have generated several lessons learned. The incineration process, the releases, and the construction incident are described below. A baseline incineration process uses a reverse-assembly procedure that drains the chemical agent from the weapons and containers and takes apart the weapons in the reverse order of assembly. Once disassembled, the chemical agent and weapon parts are incinerated in separate furnaces, and the gaseous and solid waste is treated in a separate process. Liquid brine resulting from the treatment of exhaust gases in the pollution abatement system is dried to reduce its volume and then transported to a commercial hazardous waste management facility. The path to weapons disposal, in general, includes six major steps. 1. Chemical weapons are stored in earth-covered, concrete-and-steel buildings called igloos. These igloos are guarded and monitored for any signs of leaking weapons by the U.S. Soldier and Biological Chemical Command. 2. Chemical weapons are taken from the igloos and transported to a disposal plant in sealed on-site containers by the U.S. Soldier and Biological Chemical Command. The sealed containers are resistant to fire and impact. 3. When the on-site containers arrive at the disposal plant, workers check them for leaking weapons before opening them. Chem-Demil crews then load the weapons onto conveyors that carry the weapons through the disposal process. Once the weapons are loaded onto the conveyor, the U.S. Soldier and Biological Chemical Command no longer has responsibility for them. 4. From this point on, workers manage the disposal process from an enclosed control room using advanced robotics, computer technology, and video monitoring equipment.
Automatic, robotic equipment drains the chemical agent from the weapon and takes the weapons apart in explosion-proof rooms. 5. Once dismantled and drained, the individual weapon parts travel to different furnaces in the plant, each designed for a specific purpose. The liquid incinerator destroys the chemical agent, the deactivation furnace destroys explosive materials, and the metal parts furnace heats shell casings and other heavy metal parts to destroy any remaining agent contamination. 6. The pollution abatement system cleans the air before it is released into the environment. The Tooele Chemical Agent Disposal Facility (Tooele plant) is located on the Deseret Chemical Depot in Tooele, Utah. The facility is designed to dispose of 44.5 percent of the nation's original stockpile of chemical weapons. The Tooele plant is the first chemical weapons disposal facility built within the continental United States. Construction of the Tooele plant began in October 1989, and disposal operations began in August 1996. Operations at the Tooele plant should be completed in 2008. The Tooele plant incorporates systems originally tested and used at the Chemical Agent Munitions Disposal System, also located at the depot. These systems were first used on an industrial scale at the Army's Johnston Atoll Chemical Agent Disposal System (Johnston Atoll plant) in the Pacific Ocean. The Johnston Atoll plant was the first integrated facility built to dispose of chemical weapons. The sequence of events described in table 3 is based on documents from the Utah Department of Environmental Quality—Division of Solid and Hazardous Waste; the U.S. Army Safety Center; the Department of Health and Human Services—Centers for Disease Control and Prevention; and a program contractor. On May 8, 2000, the day shift was processing rockets in the deactivation furnace system.
The deactivation furnace system's lower tipping gate (used to control the feed of munitions to the furnace) did not close properly, and munitions/agent processing was terminated. Workers in protective gear began to clean and repair the gate and a strainer. A bag from the strainer, contaminated with GB (nerve) agent, was left on top of the gate. This is believed to be the source of the agent that was released. Vapors were drawn from the bag through the furnace system. During the initial attempt to re-light the afterburners following the cleaning procedure, the agent monitoring equipment alarmed. During a second attempt to re-light these burners, another agent monitor alarmed. In summary, a small amount of agent escaped through the common stack during attempts to re-light the furnace. (See table 2.) Several corrective actions were taken based on 105 investigation findings involving operations, training, and equipment. Lessons learned from this incident include (1) modifying feed chute clean-out procedures, (2) providing operator refresher training, (3) installing a deactivation furnace remote-operated valve to isolate the deactivation furnace during afterburner re-lights, and (4) redesigning the deactivation furnace feed chute. In addition to reviewing lessons from the Tooele incidents, we were briefed on two incidents that occurred at Johnston Atoll, and we reviewed relevant investigation reports for these incidents. Both incidents resulted in corrective actions and generated several lessons learned. On March 22, 1994, the liquid agent gun purge process began. The next day, workers dressed in protective gear removed the liquid agent gun, and three lines to the gun (the atomizing air, fuel oil, and agent lines) had to be disconnected and capped (sealed). While the agent line was being disconnected, the liquid incineration room agent monitoring system alarmed. Also, the agent monitors in the common stack began to alarm.
Operators turned off the induction fan to divert room air out through the plant exhaust to the carbon filters. Lessons learned from this incident include (1) replacing the fuel oil purge system flow meter with an instrument that could be read in the control room (an investigation found that the flow meter on the agent purge line was not functioning), (2) directing room air away from the pollution abatement system to prevent contaminated air from escaping through the duct work without going through the furnace, and (3) counseling workers on the importance of following approved standard operating procedures. On December 8, 1990, a laboratory analysis confirmed emission of chemical agent from the common stack following a purging (flushing) of the agent line. It was determined that the probable cause of the release was that a quantity of agent GB (nerve) leaked from the agent gun or feed line into the primary chamber of the liquid incineration furnace and was swept downstream by the induced draft fan (used to draw air through the plant) while the furnace was in a cool-down cycle. It appears that the agent that leaked into the incinerator and was ultimately discharged to the atmosphere came either from valves in the agent feed line to the primary chamber that were not totally sealed or from agent that remained in the line after it was purged and was aspirated into the incinerator and subsequently into the atmosphere. During the incident, due to a malfunctioning agent-sampling probe, the agent-monitoring equipment in the common stack did not detect agent.
Lessons learned from this incident include (1) improving the process to purge (flush) chemical agent from the feed line by adding a fuel oil purge and increasing the purge cycle to ensure a complete purge, (2) modifying the alarm system in the common stack to provide redundancy and testing the alarms more frequently, and (3) closing all four valves after the agent line is purged and venting process activities involving the liquid incineration feed system to the charcoal filters when the furnace is cooling down. On September 15, 1999, more than 30 construction workers were affected by an irritating vapor in the air while working in the munitions demilitarization building. The incident caused many workers to experience respiratory irritation, sending them to the local hospital, where they were examined and released. Later that day, all construction work stopped, and approximately 800 contracted workers were sent home. Investigations and analyses led to the determination that chemical agent was not involved; instead, this was determined to be a construction incident. As construction progressed, the building became a "closed-in" area and may not have been adequately ventilated. The building's ventilation system was not designed to control contaminants during construction; it was intended only to control a release of chemical agent after construction was complete and operations had begun. Releasing the 800 contracted construction workers without informing them that no chemical agents were involved, coupled with the slow release of information to the press, eventually heightened public concern.
Lessons learned from this incident include (1) enhancing local ventilation in the munitions demilitarization building, (2) establishing and posting evacuation routes and response procedures throughout the site, (3) installing a temporary public address system at the construction site, and (4) ensuring that there are adequate communications between the site and any off-site facilities, particularly in the event of an incident. On July 15, 2002, at the time we were drafting this report, an individual working at the incineration facility in Tooele, Utah, experienced a confirmed accidental chemical agent exposure. This individual was performing maintenance on an agent purge line valve in the liquid incinerator room and was exposed to residual agent present in the agent purge line. The worker exhibited symptoms of chemical agent exposure. Although the Army, the DOD Inspector General, and the facility's contractor are conducting investigations of the events associated with the accidental exposure, it is too early to report on lessons resulting from this incident. The Program Manager for Chemical Demilitarization is awaiting the investigation reports and will incorporate the corrective actions into lessons learned. According to the Army, agent operations will not commence until all corrective actions have been taken and the plant is deemed safe to operate. To assess the Lessons Learned Program, we reviewed literature on the principles of knowledge management and our previous reports on lessons sharing best practices. To assess the leadership of the Lessons Learned Program, we interviewed Chem-Demil Program managers, personnel, and the contractor staff who manage the Lessons Learned Program.
We also reviewed management documents describing the program, and we conducted 30 structured interviews with the Chem-Demil Program's managers (headquarters and field level) and systems contractor staff at three sites (Aberdeen, Maryland; Anniston, Alabama; and Tooele, Utah) to determine how clearly management articulated its expectations about using lessons learned. We did not select a statistical sample of database users; therefore, our survey results cannot be generalized to all Lessons Learned Program database users. To describe the lessons learned process, we reviewed documentation relevant to the process. We also interviewed personnel from the office of the Program Manager for Chemical Demilitarization, the Anniston, Alabama, site, and the contractor responsible for managing the Lessons Learned Program. To learn how technology supports the Lessons Learned Program, we reviewed the lessons learned process and identified the methods used to gather, consolidate, and share information with stakeholders. We also asked the staff we surveyed how effectively the program's technology tools support the lessons learned process. To determine whether the Chem-Demil Program fosters a culture of knowledge sharing and use, we talked to program managers for each of the Chem-Demil Program's components, headquarters staff, and personnel from the lessons learned contractor staff to determine how lessons are shared and whether employees are encouraged to participate in the program. We also asked the staff we surveyed how frequently they submitted information to the program, whether they used the lessons, and whether there were incentives to encourage participation. To determine whether lessons learned contributed to the goals of the destruction program, we documented and reviewed several important lessons that program staff identified.
We also traced several lessons from incidents at Johnston Atoll and Tooele to verify that they had been shared and implemented at the Anniston facility. We used unverified Army data to assess whether the Lessons Learned Program achieved its aim of reducing or avoiding unnecessary costs. To determine whether the Lessons Learned Program's process conforms to other programs' lessons sharing processes, we identified four federal organizations that practice knowledge management and operate lessons learned programs. In making our selections, we reviewed literature and spoke with knowledge management experts to find organizations recognized for their ability to share lessons learned or effectively manage knowledge. We obtained information from the Center for Army Lessons Learned, the Department of Energy, the U.S. Army Corps of Engineers, and the Federal Transit Authority. We interviewed representatives from each organization about the processes they used for identifying, collecting, disseminating, implementing, and validating lessons learned information. We reviewed their lessons learned program guidance to compare and contrast their practices with the incineration project's Lessons Learned Program process. We also interviewed an expert familiar with the program about the management of the lessons learned process. To assess the search, linkage, and prioritization features of the database, we obtained documentation and interviewed the contractor staff about the information in the database. We tested the search feature of the database, including its menus and keyword and category listings, and analyzed several lessons learned obtained from our searches. We obtained opinions from the staff we surveyed on the effectiveness of the lessons learned database and their suggested areas of improvement. The respondents included managers and others with an average of 9 years' experience in the Chem-Demil Program.
The staff we surveyed routinely search the database for lessons learned information. We did not select a statistical sample of database users; therefore, our survey results cannot be generalized to all Lessons Learned Program database users. To assess the extent to which lessons learned have been shared, we interviewed the Program Manager for Chemical Demilitarization and the contractor responsible for operating the Lessons Learned Program. We also attended status briefings for each Chem-Demil component. We focused our work primarily on the stockpile destruction projects/programs. We conducted interviews with officials from the Alternative Technologies and Approaches Project, the Assembled Chemical Weapons Assessment Program, and the Chemical Stockpile Disposal Project to gather evidence on the commonality the alternative technology components have with the incineration program and the extent to which they share lessons learned information. To determine whether each component participated in the Lessons Learned Program by either sharing or receiving lessons learned information, we reviewed workshop minutes from calendar years 2000 and 2001. To describe the incidents at three sites, we attended briefings on the incidents provided by officials from the incineration program and reviewed incident investigation reports and entries in the Lessons Learned database. We identified key lessons from these sources and toured the Anniston Chemical Disposal Facility to determine whether lessons learned had been shared and implemented. During our visit, we observed that several lessons from the Tooele incident, among others, had been implemented. The Lessons Learned Program was established to collect and share lessons learned within the incineration program. The Programmatic Lessons Learned Program uses various methods to identify, review, document, and disseminate lessons learned information among government and contractor personnel.
The program uses facilitated workshops to introduce lessons and also takes lessons from engineering change proposals. The Lessons Review Team reviews issues and determines specific lessons to be implemented. These issues, engineering changes, and lessons are stored in a database. The program uses five distinct steps to develop lessons learned, as shown in figure 2. Issues are raised through topics submitted to workshops (meetings of headquarters and site personnel), critical document reviews (of changes to program documents), engineering change proposals (technical changes at one or more sites), quick reacts (immediate action), and express submittals (information from a site). Experts review issues to determine if a change should be initiated in a workshop, an assessment (a study to support a management recommendation for change), the engineering change proposal review process (a team at each site reviews changes at other sites), or directed actions (requests for information on actions a site has taken). Lessons are identified from workshops, assessment reports, and the Lessons Review Team (a headquarters activity that segregates lessons into response required or not required). Issues and lessons are stored in the database. Lessons are then shared with stakeholders, including contractor personnel, through access to the database, technical bulletins (a quarterly publication with information of general interest to multiple sites), programmatic planning documents (containing policies, guidelines, management approaches, and minimum requirements), and site document comparisons (comparisons of new documents with baseline documents). Four primary elements of these steps are discussed below. Facilitated workshops are the primary method for introducing lessons learned into the Lessons Learned Program. Facilitated workshops are meetings that offer an environment in which site and headquarters personnel can speak openly about their experiences.
The intent of the workshops is to allow program personnel familiar with particular subjects to hold detailed discussions of issues related to specific subjects. All issues discussed in the workshops are entered into the database and later reviewed to determine if they should become lessons learned. The facilitated workshop process begins with a memorandum from the Lessons Learned Program team requesting that site personnel identify topics they want to discuss in workshops. These topics generally fall into three basic categories: (1) valuable information to provide to other sites, (2) challenging issues to discuss with other sites in anticipation of possible recommendations, and (3) general topics for discussing different approaches to a problem. After each workshop, a feedback survey is sent to participants to determine user satisfaction with the workshops. Engineering change proposals are the primary method of approving and documenting design changes at the sites. Members of the Configuration Control Board and the Field Configuration Control Boards are responsible for reviewing and approving engineering change proposals within certain dollar limits. The Configuration Control Board, consisting of members from headquarters, is also responsible for managing changes to items or products identified for configuration control, such as facilities and equipment, in order to maintain or enhance reliability, safety, standardization, performance, or operability. Each Field Configuration Control Board consists of members from a site and is responsible for controlling engineering changes during construction, systemization, operations, and closure of facilities. Engineering change proposals are discussed during bi-weekly teleconferences, where the sites can ask the originating site questions about the proposed engineering change. The Field Configuration Control Board is responsible for approving engineering change proposals with an estimated cost of $200,000 or less.
The Configuration Control Board is responsible for approving proposals with an estimated cost of $200,001 to $750,000. Proposals over $750,000 are sent to the Project Manager for Chemical Stockpile Disposal for approval. After approval, the engineering change proposals are reviewed, input into the database, and sent to the Lessons Review Team as part of the review process. Engineering changes are the primary source of design-related lessons learned. Engineering change proposals are approved changes in the design or performance of an item, a system, or a facility. Such changes require changes or revisions to specifications, engineering drawings, and/or supporting documents. Consequently, the Program Manager for Chemical Demilitarization developed a review process as a method to capture these lessons in the Lessons Learned Program. The purpose of the Engineering Change Proposal Review Process is to provide Chemical Demilitarization sites with more control over lessons learned decisions and to incorporate lessons learned sharing under the Lessons Learned Program. Additionally, the review process is structured to allow each site the opportunity to review engineering changes being implemented at other sites and consider their applicability to its own site. The review team consists of members from the sites, the Program Manager's office, the Lessons Learned Program team, and the U.S. Army Corps of Engineers. The Lessons Review Team, established in September 2001, is responsible for reviewing issues discussed in facilitated workshops to determine their potential impact and whether a specific site action is required. Additionally, the review team reviews engineering change proposals to determine if they are design-related lessons learned. Issues are considered "lessons learned" when they have programmatic interest and a significant impact on safety, environmental protection, or plant operations.
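The dollar thresholds and the lesson criterion described above amount to two simple decision rules, sketched below in Python for illustration. The boards' actual reviews are deliberative processes, not software, and the function names are our own.

```python
def ecp_approval_authority(estimated_cost: float) -> str:
    """Route an engineering change proposal to its approving body,
    using the dollar thresholds described in the text."""
    if estimated_cost <= 200_000:
        return "Field Configuration Control Board"
    if estimated_cost <= 750_000:
        return "Configuration Control Board"
    return "Project Manager for Chemical Stockpile Disposal"

def is_lesson_learned(programmatic_interest: bool, impact_areas: set) -> bool:
    """An issue is considered a lesson learned when it has programmatic
    interest and significant impact on safety, environmental protection,
    or plant operations."""
    significant = {"safety", "environmental protection", "plant operations"}
    return programmatic_interest and bool(impact_areas & significant)
```

For example, a $500,000 proposal would go to the Configuration Control Board, while a $1 million proposal would go to the Project Manager for Chemical Stockpile Disposal.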
The Lessons Review Team designates lessons learned as mandatory, "response required," or "response not required." A lesson is mandatory if the method of implementation has been or is directed from Program Manager for Chemical Demilitarization headquarters. A lesson characterized as "response required" means that the given site must provide information to headquarters on the action taken to address the lesson. "Response not required" means that the site is not required to provide information to headquarters on the action the site has taken. For mandatory lessons, the Lessons Review Team decision makers provide specific guidance for implementation. Technical support staff on the team conduct lesson reviews and provide recommendations to the decision maker regarding lessons. A team member is responsible for the initial review of lessons and recommended designations, distribution of materials before the meetings, and facilitation of the meetings. The Lessons Learned Program database is a repository for (1) issues generated from facilitated workshops, (2) engineering change proposals, (3) critical document reviews, (4) quick react/advisory system and other lessons learned process data, and (5) programmatic and design lessons learned. As of April 2002, the database contained 3,400 issues, 7,630 directed actions, and 3,055 engineering change proposals. The database was developed as a stand-alone program allowing users to employ search utilities or category trees to retrieve lessons. The program opens to the main screen, which consists of search, categories, and lessons screens. The lessons screen is a search mechanism that utilizes a "drop-down menu" enabling users to locate lessons by selecting categories or subcategories to narrow the search to a specific area. To summarize information and help identify lessons, the database contains background information to support each lesson.
The background information provides a condensed history, as well as the status of each lesson at the Chemical Demilitarization site. The Department of Defense and the Army made several changes to the management structure of the Chem-Demil Program, principally in response to congressional legislation. Originally, the Program Manager for Chemical Demilitarization reported directly to the Assistant Secretary of the Army (Installations and Environment), who also oversees storage of the chemical weapons stockpile. The U.S. Army Soldier and Biological Chemical Command manages the stockpile; it also manages the loading, delivery, and unloading of chemical weapons at the destruction facility. After the estimated cost of the program reached a certain dollar amount, as required by statute, the Army formally designated it a major defense acquisition program. The program was then transferred to the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) so that it could be managed within the Army acquisition chain. The Program Manager for Chemical Demilitarization continued executing the program. In 1997, the Chemical Stockpile Emergency Preparedness Program was removed from the Program Manager for Chemical Demilitarization and transferred back to the Assistant Secretary of the Army (Installations and Environment), where it is currently managed by the U.S. Army Soldier and Biological Chemical Command. Also in 1997, the Army and the Federal Emergency Management Agency signed a new memorandum of agreement to better manage the on- and off-post emergency response activities, respectively. In the 1997 Defense Appropriations Act (sec. 8065), Congress required that the Assembled Chemical Weapons Assessment Program be independent of the Program Manager for Chemical Demilitarization and report directly to the Under Secretary of Defense (Acquisition and Technology). The purpose of this legislation was to separate the pilot program from the baseline incineration activities. 
Achievement of this goal also meant that two program offices would share responsibilities for disposal activities in Kentucky and Colorado. However, the pilot program’s legislation does not specifically state whether the Program Manager for Chemical Demilitarization will manage the assessment program once the development of technology evaluation criteria, the technology assessment, and the demonstration and pilot phases end. In May 2000, we reported on the fragmented management structure and the inadequate coordination and communication within the Chem-Demil Program. We recommended that the Army clarify the management roles and responsibilities of program participants and establish procedures to improve coordination among the program’s various components. In December 2001, the Army transferred the Chemical Demilitarization Program to the Assistant Secretary of the Army (Installations and Environment), bringing all components of the program, except the Assembled Chemical Weapons Assessment Program, under a single Army manager, as shown in figure 3. Another significant management change occurred in April 2002, when the Program Manager for Chemical Demilitarization retired after holding the position for 5 years.
The Army has been tasked to destroy 31,500 tons of highly toxic chemical agents by April 2007, the deadline set by an international treaty for the elimination of all chemical weapon stockpiles. To destroy the weapons, the Department of Defense (DOD) established the Army Chemical Demilitarization Program. As of March 2002, the Army had destroyed over one-quarter of the U.S. stockpile. Originally, the Chem-Demil Program consisted only of the Chemical Stockpile Disposal Project, which was initiated in 1988 to incinerate chemical weapons at nine storage sites. In response to public concern about incineration, in 1994 Congress established the Alternative Technologies and Approaches Project to investigate alternatives to the baseline incineration process. The Chemical Stockpile Disposal Project operates a Programmatic Lessons Learned Program whose aim is to enhance safety, reduce or avoid unnecessary costs, and maintain the incineration schedule. This program has successfully supported the incineration project's primary goal of safely destroying chemical weapons and has captured and shared many lessons from past experiences and incidents. However, the Lessons Learned Program does not fully apply generally accepted knowledge management principles and lessons-sharing best practices, limiting its effectiveness. The program's management plan does not provide policy guidance to help senior managers in decision-making or daily operations. In addition, the program does not have formal procedures to test or validate whether a corrective action has been effective in resolving the underlying deficiency. Finally, the lessons learned database is difficult to search and does not prioritize lessons. The Lessons Learned Program has been effective in sharing knowledge among the different stakeholders within the Chemical Stockpile Disposal Project. 
However, as new components were created to destroy the stockpile, the scope of the Lessons Learned Program remained primarily limited to the incineration project. As a result, some components that could greatly benefit from timely and full sharing of lessons learned with the incineration project are not doing so.
Over the last decade, DOD space system acquisitions have been characterized by the long-standing problem of program costs increasing significantly from original estimates. While some programs have overcome development problems, and actions have been taken to better position programs for success, the large cost growth of space systems continues to affect the department. As shown in figure 1, as of December 2014, current annual cost estimates for selected major space system acquisition programs have exceeded, and are projected to exceed, original annual estimates by a cumulative $16.7 billion—186 percent—over fiscal years 2014 through 2019. The cost increases that DOD is dealing with today are partly the result of management and oversight problems, many of which DOD experienced before 2010. Other reasons for cost increases include quantity increases and extensions for some programs, such as the Evolved Expendable Launch Vehicle (EELV) program. The gap between original and current cost estimates represents money the department did not plan to spend on the programs and thus could not invest in other efforts. Gaps between original and current estimates are larger in some years of the 6-year period than in others. For example, the gaps in 2016 and 2017 are in large part driven by significant annual cost increases for the EELV program. Specifically, original annual estimates for EELV were over $900 million and $600 million for 2016 and 2017, respectively, but grew to over $2 billion for each year in the current annual estimates. EELV cost increases are due primarily to an increase of 60 in the number of expected launch services and a 10-year extension of the program, in addition to increases in the cost of acquiring launch services, which have recently been stemmed. 
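For context, the cumulative gap and growth percentage reported above imply the underlying totals. This back-of-the-envelope arithmetic (a sketch derived from the stated figures, not data taken from the report's figure 1) recovers them:

```python
# A cumulative gap of $16.7 billion that equals 186 percent of the original
# cumulative estimate implies the totals computed below.
gap_billions = 16.7
growth_fraction = 1.86  # 186 percent, as a fraction of the original estimate

original_billions = gap_billions / growth_fraction
current_billions = original_billions + gap_billions

print(round(original_billions, 1))  # ~9.0 (implied original cumulative estimate)
print(round(current_billions, 1))   # ~25.7 (implied current cumulative estimate)
```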
Three programs—Global Broadcast Service, Space Based Infrared System (SBIRS), and Wideband Global SATCOM—originally estimated no annual costs for fiscal years 2014 through 2019, but current cumulative estimates for these programs over that time frame total about $4 billion, driven by cost growth and quantity increases. The overall declining investment in fiscal year 2019 is in part the result of programs that have planned lower out-year funding as they approach the end of production or operational capability. However, this decline is mitigated by plans to invest nearly $2.5 billion in launch services in 2019, and will be further mitigated by new programs, which are still in the early stages of planning and development. These new programs are not included in the figure because they have not yet established official cost baselines. Our prior body of work has identified a number of causes of acquisition problems in DOD programs. In the past, DOD tended to start more weapon programs than it could afford, creating competition for funding that focused on advocacy at the expense of realism and sound management. In addition, DOD tended to start space system acquisition programs before it had assurance that the pursued capabilities could be achieved within available resources and time constraints. There is no way to accurately estimate how long it takes to design, develop, and build a satellite system when key technologies planned for that system are still in the relatively early stages of discovery and invention. Finally, programs have historically attempted to satisfy all requirements in a single step, regardless of the design challenges or the maturity of the technologies necessary to achieve the full capability. DOD’s past inclination to make large, complex satellites that perform multiple missions has, in some cases, stretched technology challenges beyond current capabilities. 
To address the problems identified, we have recommended that DOD take a number of actions. Broadly, we have recommended that DOD separate technology discovery from acquisition, follow an incremental path toward meeting user needs, match resources and requirements at program start, and use quantifiable data and demonstrable knowledge to make decisions about moving to the next acquisition phase. We have also identified practices related to cost estimating, program manager tenure, quality assurance, technology transition, and an array of other aspects of acquisition program management that could benefit space system acquisition programs. DOD has generally concurred with our recommendations and has undertaken a number of actions to establish a better foundation for acquisition success. For example, we reported in the past that, among other actions, DOD created a new office within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics to focus attention on oversight for space programs and eliminated offices considered to perform duplicative oversight functions. We have also reported that the department took actions to strengthen cost estimating and to reinstitute stricter standards for quality. Most of DOD’s major space programs are in the mature phases of acquisition and are now producing and launching satellites. Cost and schedule growth—a significant problem for these programs in past years—is not currently as prevalent, though it remains a problem for some programs. Table 1 describes the status of the space system acquisitions we have been tracking in detail. Several of DOD’s space system acquisitions have largely overcome challenges—such as matching resources to requirements, facilitating competition, and parts quality issues—and are in the process of producing and launching satellites. 
Other programs, however, continue to experience challenges, both in technology development and in synchronizing the delivery of system components, meaning delivery of ground, space, and end user assets may not be aligned. When satellites are placed on orbit without corresponding ground systems in place and with only limited user equipment available, their capability is effectively wasted, as a portion of their limited lifespan is spent without being fully utilized. This has been a significant problem for DOD given the high cost to develop satellites, systemic delays in delivering ground and user components, and the importance of maintaining continuity of service. Problems we have identified with the development of satellites, ground systems, and user components are highlighted below. In September 2010, we found that the Global Positioning System (GPS) III satellite program took a number of steps to avoid past problems with GPS satellite acquisitions, such as adopting higher quality standards and better managing requirements. However, in our more recent work from March 2015, we found that the first GPS III satellite launch is facing a significant delay due to problems with the development of its navigation payload. The payload has now been delivered, but the first launch has been delayed 28 months. The program office reports that early testing of a satellite prototype helped identify problems sooner, but a complete GPS III satellite has yet to be tested. As a result, additional issues could emerge. Though the program had taken steps to include in its initial cost and schedule estimates the impacts of addressing problems in development, it is now rebaselining those estimates—expected to be completed in July 2015—as a result of this delay and associated increased costs. 
The first satellite design will not be fully tested until May 2015 at the earliest; meanwhile, seven more satellites are in various stages of production and DOD has authorized two more satellites to be acquired. Additional delays or problems discovered during tests of the first satellite could require rework to the remaining satellites in production—carrying the risk of further cost growth. In our ongoing work, we are finding that the GPS Next Generation Operational Control System (OCX), the next ground system for GPS, has experienced significant schedule delays and cost growth, and is still encountering technical challenges. The program awarded a development contract in February 2010, nearly 3 years before the formal decision to begin system development, when requirements are to be matched with resources. The contract was awarded early in order to save money during the competitive phase, but the contractor encountered problems completing software engineering and implementing cybersecurity requirements, among other things, which led to a higher-than-expected level of defects in the software, and ultimately to significant rework and code growth. Significant work and risk remain in the development of key upgrades, which are expected to be delivered about 4 years later than planned. This means some satellite capability will likely go unutilized for several years while the capability of the ground system catches up to the functionality of the satellites. Further, as of April 2015, contract costs have more than doubled over initial estimates, from $886 million in February 2010 to $1.98 billion, and DOD has delayed initial OCX capability from 2015 to 2019. The Office of the Director, Operational Test and Evaluation has stated that the delays to OCX pose risks to DOD’s ability to sustain the operational GPS constellation, since DOD may require use of the GPS III satellites before OCX is available to control them. 
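The "more than doubled" claim above follows directly from the two contract figures cited. This is a quick arithmetic check using only those stated values:

```python
# OCX contract cost growth: $886 million (February 2010) to $1.98 billion
# (April 2015), per the figures in the text.
initial_millions = 886
current_millions = 1_980

growth_ratio = current_millions / initial_millions
print(round(growth_ratio, 2))  # ~2.23, i.e., more than doubled
```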
We are examining OCX in greater detail, as mandated by this committee, and expect to report on the results of our review in July 2015. Through our ongoing work, we are finding that DOD, building on a troubled, lengthy 9-year effort to mature military-code (M-code) receiver technology, initiated the Military GPS User Equipment (MGUE) program in 2012 to develop M-code receiver cards for the military services’ respective ground, air, and sea weapon systems. In 2014, the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics directed the Air Force to accelerate MGUE development and fielding, as guided by a 2011 statutory requirement instructing DOD to procure only M-code capable equipment after fiscal year 2017. To that end, DOD expects to complete developmental testing and an operational assessment by July 2016 and provide technical support to inform the military services’ MGUE production decisions. However, MGUE integration and performance risks will not be fully known until the military services can complete individual operational tests on their respective test platforms. The first of those tests is scheduled to be completed in September 2017, and the last in September 2019. We are reviewing these and other issues within the MGUE program as part of our ongoing work, and plan to report to your committee on the program in July 2015. In March 2015, we reported that the Family of Advanced Beyond Line-of-Sight Terminals (FAB-T) program, which is to deliver user terminals for the AEHF satellite system and is a vital component of nuclear command and control operations, is nearing the end of development and anticipates entering production in late fiscal year 2015. In 2012, following 10 years of continued cost and schedule growth developing FAB-T, DOD competed and awarded a contract to develop a new design for the program. 
At the time of our last review, the low-rate production decision was expected in September 2014, but due to delays in completing hardware qualification and system level testing, among other things, the decision is now expected almost a year later, in August 2015. The SBIRS ground system, which provides command and control and data processing support, continues to experience development delays. The delayed delivery of the initial block of the ground system—intended to facilitate processing of integrated data from legacy Defense Support Program satellites, SBIRS GEO satellites, and SBIRS sensors in highly elliptical orbit—means complete and usable data from a critical sensor will not be available until June 2016, based on DOD’s December 2014 Selected Acquisition Report, over 5 years after the first SBIRS GEO satellite was launched. The SBIRS system will not be fully operational until 2018, when the final block of the SBIRS ground system—which adds processing capability to mobile ground terminals—is expected to be completed. The Mobile User Objective System (MUOS) program also faces challenges that prevent full use of its satellite capabilities. Issues related to the development of the MUOS waveform—meant to provide increased communications capabilities beyond those offered by the legacy system—have caused delays in the use of radios being developed by the Army as the first operational terminals to incorporate the waveform, as we reported in March 2015. Use of over 90 percent of MUOS’ planned capability is dependent on resolving problems with integrating the waveform, terminals, and ground systems. The MUOS program extended testing to fix software and reliability issues with the waveform integration and now plans to complete operational testing by November 2015—a 17-month delay from the initial schedule estimate. 
As a result, the Army’s plans to field its MUOS-compatible radios have now slipped from 2014 to 2016, roughly 4 years after the first MUOS satellite launched. Fiscal constraints and growing threats to space systems have led DOD to consider alternatives for acquiring and launching space-based capabilities. These include disaggregating—or breaking up—large satellites into multiple, smaller satellites or payloads, and introducing competition into the acquisition of launch services. For some mission areas, such as space-based environmental (or weather) monitoring, protected satellite communications, and overhead persistent infrared sensing, decisions on the way forward, including satellite architectures, have not yet been made. For others, such as national security space launch, plans have been decided, but implementation poses new challenges. As DOD moves forward with changes to the acquisition approaches for these mission areas, some with the potential to set off cascading effects, strong leadership across DOD’s space programs will be critical. In 2014, we examined DOD’s efforts to explore disaggregation as a potential means to provide space-based capabilities in an increasingly constrained budget environment and a threatened space environment. We found that the effects of disaggregation are largely unknown and that, at the time of our review, DOD had not comprehensively assessed the wide range of potential benefits and limitations in key areas, such as affordability, capability, and resilience. Consequently, we recommended DOD conduct a comprehensive examination of disaggregation, develop common measures for resilience, and expand demonstration efforts to assess its feasibility before making decisions on whether to disaggregate its space systems. DOD generally agreed with our recommendations. 
One way DOD is assessing disaggregation is through various analyses of alternatives (AOA), or reviews that compare the operational effectiveness, suitability, and life cycle cost of solutions to satisfy capability needs. DOD has completed one AOA for the weather monitoring mission area and is working to complete others for protected satellite communications and overhead persistent infrared sensing. These AOAs have the potential to dramatically shift DOD’s approach to providing capabilities, affecting not only satellite design, but also ground systems, networks, user equipment, and the industrial base. To allow enough time to institute potential changes in contracting and development approaches, and maintain continuity of service, DOD is faced with making decisions over the next several years about the way forward. Details about the AOAs, including when acquisition decisions need to be made for follow-on systems, are depicted in table 2. The longer DOD takes to complete the AOAs and come to a consensus on how to proceed, the more its range of choices will be constrained. Completing an AOA is the first in a series of important steps to providing future capabilities. In addition, an approach must be selected—whether it is a disaggregated architecture, an evolved version of an existing system, or some other variation—funding must be programmed, and technology development and acquisition strategies must be developed. If decisions are not timely, DOD may be forced to continue with existing approaches for its next systems, effectively continuing with legacy systems. While doing so may offer benefits, such as lower likelihood of unexpected cost and schedule problems, there are also risks associated with technology obsolescence and likely continued risks to missions because of the threats satellites on orbit face today. To date, DOD has not positioned itself to implement significant changes into follow-on systems. 
For example, in a January 2014 assessment of a DOD report on overhead persistent infrared technology, we noted that although the need for a SBIRS follow-on AOA was determined in 2008, it was not pursued until early 2014, at which time DOD began planning efforts for the AOA study. Additionally, we reported in April 2015 that DOD’s minimal investment in planning for technology insertion on SBIRS GEO satellites 5 and 6 limited the options available to upgrade technologies on these satellites. At present, DOD is planning to address technology obsolescence issues rather than upgrade the onboard technologies. We also concluded in April 2015 that the current lack of direction in the program’s path forward could make it difficult to develop a technology insertion plan before the next system is needed. DOD concurred with our recommendation to establish a technology insertion plan that identifies specific needs, technologies, and insertion points, to ensure planning efforts are clearly aligned with the follow-on system and that past problems are not repeated. In the case of the weather monitoring mission area, DOD has passed the point where it could consider new designs or approaches for certain capabilities; at least one capability has an immediate need, requiring DOD to choose among existing approaches. Disaggregation is not a certain solution. Even to the extent it may offer a viable option for addressing the affordability and resilience challenges DOD is facing, it is not a simple fix, and the decision should be made on a case-by-case basis. The changes to satellite designs being contemplated could have far-reaching effects on requirements, supporting infrastructure, management and oversight of acquisitions, industry, and other areas. DOD is taking good steps by assessing alternatives thoroughly, but, as our work has found, it has not yet resolved underlying challenges to space acquisition that could be exacerbated by disaggregation. 
For example, disaggregating satellites may require more complex ground systems and user terminals. However, we consistently find ground systems and user equipment programs are plagued by requirements instability, underestimation of technical complexity, and poor contractor oversight. Clearly, these problems, including the misalignment of satellites, ground systems, and user equipment, will pose challenges to successfully implementing a disaggregated approach. Once decisions are reached for future satellites and launch acquisitions, DOD may still face hurdles in implementing the plan. For example, in 2012, DOD made a commitment to introduce competition into its EELV program—a shift of dramatic proportions from the longstanding status quo of procuring launch services through a sole-source provider. Following this decision, the department, in coordination with the National Aeronautics and Space Administration, the National Reconnaissance Office, and several private space launch companies, has been working to certify new launch providers for national security space launches, but to date, none have met the criteria to become certified, although DOD expects SpaceX to be certified by June 2015. Additionally, the department has faced unexpected complications, such as challenges to its competitive process in the form of a private lawsuit that has been settled, according to DOD officials; a foreign conflict that brought attention to a Russian engine used on one of the sole-source contractor’s launch vehicles; and engine development demands requiring new technological innovation. Without addressing leadership shortcomings, DOD space programs could continue to face challenges in implementing new approaches. DOD’s culture has generally been resistant to changes in acquisition approaches, as we have reported, and fragmented responsibilities in DOD space programs have made it difficult to implement new processes and coordinate and deliver interdependent systems. 
Such challenges could, for example, hinder DOD’s efforts to examine options for acquisition efficiencies in military and commercial satellite communications services. Historically, DOD has procured commercial satellite communications services to augment military capacity and has become increasingly reliant on these services to support ongoing military operations. DOD is looking for ways to better streamline procurements of these services, but according to DOD officials, it has had difficulty adhering to past policies that required centralized procurement, especially during operations in Iraq and Afghanistan, when efficiency was not a priority. Similarly, DOD has been unable to align the delivery of space system segments, in part because budgeting authority for the segments is spread across the military services and DOD lacks a single authority to ensure programs are funded in a manner that aligns their deliveries. As programs continue to face challenges in aligning components, the warfighter cannot take advantage of full system capabilities, and the large investments in these programs are not fully exploited. DOD has more recently begun implementing efforts to address some leadership challenges. For example, increased use of shared satellite control networks and leading practices within DOD could reduce fragmentation and potential duplication associated with dedicated systems, resulting in millions of dollars in savings annually. In response to our recommendation, DOD has developed a department-wide plan, currently in final coordination, to support the implementation of alternative methods for performing satellite control operations. It has also taken actions to better coordinate and plan space situational awareness activities—efforts to detect, track, and characterize space objects and space-related events—an area where we identified leadership disconnects in a 2011 report. 
However, in both cases, it is too early to tell whether such efforts will be effective. In closing, we recognize DOD has made strides in recent years in enhancing its management and oversight of space acquisitions and that sustaining our superiority in space is inherently challenging, both from a technical perspective and a management perspective. Further, to its credit, DOD is looking for ways to provide more avenues for innovation, competition, efficiency, and resilience. This is not easy to do in light of the importance of space programs to military operations, external pressures, and the complicated nature of the national security space enterprise. At the same time, there are persistent problems affecting space programs that need to be addressed if DOD is to be successful in introducing change. Our past recommendations have focused on steps DOD can take to address these problems. While DOD should not refrain from considering new approaches, we continue to believe it should complement these efforts with adequate knowledge about costs, benefits, and alternatives; more focused leadership; and sustained dedication to improving acquisition management, as we previously recommended. Not doing so will likely mean a repeat of DOD’s space system acquisition history characterized by cost growth, inefficient operations, and delayed capabilities to the warfighter. We look forward to continuing to work with the Congress and DOD to improve military space system acquisition efforts and outcomes. Chairman Sessions, Ranking Member Donnelly, this completes my prepared statement. I would be happy to respond to any questions you and Members of the Subcommittee may have at this time. For further information about this statement, please contact Cristina Chaplain at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Individuals who made key contributions to this statement include Rich Horiuchi, Assistant Director; Claire Buck; Maricela Cherveny; Desiree Cunningham; Brenna Guarneros; John Krump; Krista Mantsch; Roxanna Sun; and Bob Swierczek. Key contributors for the previous work on which this testimony is based are listed in the products cited. Key contributors to related ongoing work include Raj Chitikila; Erin Cohen; Tana Davis; Art Gallegos; Jamie Haynes; Laura Hook; and Breanna Trexler. Space Acquisitions: Space Based Infrared System Could Benefit from Technology Insertion Planning. GAO-15-366. Washington, D.C.: April 2, 2015. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-15-342SP. Washington, D.C.: March 12, 2015. DOD Space Systems: Additional Knowledge Would Better Support Decisions about Disaggregating Large Satellites. GAO-15-7. Washington, D.C.: October 31, 2014. Space Acquisitions: Acquisition Management Continues to Improve but Challenges Persist for Current and Future Programs. GAO-14-382T. Washington, D.C.: March 12, 2014. U.S. Launch Enterprise: Acquisition Best Practices Can Benefit Future Efforts. GAO-14-776T. Washington, D.C.: July 16, 2014. The Air Force’s Evolved Expendable Launch Vehicle Competitive Procurement. GAO-14-377R. Washington, D.C.: March 4, 2014. 2014 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-14-343SP. Washington, D.C.: April 8, 2014. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-14-340SP. Washington, D.C.: March 31, 2014. Space Acquisitions: Assessment of Overhead Persistent Infrared Technology Report. GAO-14-287R. Washington, D.C.: January 13, 2014. Evolved Expendable Launch Vehicle: Introducing Competition into National Security Space Launch Acquisitions. GAO-14-259T. Washington, D.C.: March 5, 2014. Space: Defense and Civilian Agencies Request Significant Funding for Launch-Related Activities. GAO-13-802R. 
Washington, D.C.: September 9, 2013. Global Positioning System: A Comprehensive Assessment of Potential Options and Related Costs Is Needed. GAO-13-729. Washington, D.C.: September 9, 2013. Space Acquisitions: DOD Is Overcoming Long-Standing Problems, but Faces Challenges to Ensuring Its Investments Are Optimized. GAO-13-508T. Washington, D.C.: April 24, 2013. Launch Services New Entrant Certification Guide. GAO-13-317R. Washington, D.C.: February 7, 2013. Satellite Control: Long-Term Planning and Adoption of Commercial Practices Could Improve DOD’s Operations. GAO-13-315. Washington, D.C.: April 18, 2013. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-13-294SP. Washington, D.C.: March 28, 2013. Evolved Expendable Launch Vehicle: DOD Is Addressing Knowledge Gaps in Its New Acquisition Strategy. GAO-12-822. Washington, D.C.: July 26, 2012. Space Acquisitions: DOD Faces Challenges in Fully Realizing Benefits of Satellite Acquisition Improvements. GAO-12-563T. Washington, D.C.: March 21, 2012. Space Acquisitions: DOD Delivering New Generations of Satellites, but Space System Acquisition Challenges Remain. GAO-11-590T. Washington, D.C.: May 11, 2011. Space Acquisitions: Development and Oversight Challenges in Delivering Improved Space Situational Awareness Capabilities. GAO-11-545. Washington, D.C.: May 27, 2011. Space and Missile Defense Acquisitions: Periodic Assessment Needed to Correct Parts Quality Problems in Major Programs. GAO-11-404. Washington, D.C.: June 24, 2011. Global Positioning System: Challenges in Sustaining and Upgrading Capabilities Persist. GAO-10-636. Washington, D.C.: September 15, 2010. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DOD's space systems provide critical capabilities to military and other government operations. Over the last decade, DOD space system acquisitions have been characterized by the long-standing problem of program costs increasing significantly from original cost estimates. Given this, DOD must manage system acquisition carefully and avoid repeating past problems. This testimony focuses on (1) the current status and cost of major DOD space system acquisitions and (2) how DOD will address future space-based mission needs. It is based on GAO reports on space programs and weapon system acquisition best practices over the past 6 years; space-related work supporting GAO's 2015 weapon system assessments and GAO's 2014 report on duplication, overlap, and fragmentation; updates on cost increases and improvements; and preliminary observations from ongoing work. The updates are based on GAO analysis of DOD funding estimates for selected major space system acquisition programs for fiscal years 2014 through 2019. Ongoing work includes analyzing program status documents, reviewing acquisition strategies, and interviewing relevant DOD officials and contractors. In recent and ongoing work, GAO has found that several space system acquisitions have largely overcome acquisition challenges—such as matching resources to requirements, facilitating competition, and parts quality issues—and are producing and launching satellites. But other programs continue to face difficulties, both in technology development and in ensuring ground and user systems are delivered in time to maximize a satellite's capability. Specifically: GAO reported in 2015 that, while the most recent Global Positioning System (GPS) III satellite program took steps to avoid past problems with satellite acquisitions, it is facing more than a 2-year delay for its satellite launch due to development problems. 
A complete GPS III satellite has not yet been tested, and the program is now rebaselining its cost estimates as a result of the schedule delay and associated increased costs. The next-generation ground system needed to operate GPS satellites has experienced significant schedule delays and cost growth, and is still facing technical challenges. During development, the contractor encountered problems that led to significant rework, delaying the delivery of the ground system. As a result, as GAO's ongoing work for this committee is finding, some GPS satellite capability will likely go unused for several years while the capability of the ground system catches up to the functionality of the satellites. While new missile warning satellites are now on orbit after years of delays and significant cost growth, the ground system needed to operate the satellites is still in development, meaning the satellites cannot be fully utilized—complete and usable data from the satellites will not be available until over 5 years after the first satellite was launched, based on recent updates. The Department of Defense (DOD) also faces challenges in providing future space-based capabilities. In October 2014, GAO reported that fiscal constraints and growing threats to space systems have led DOD to consider alternatives such as disaggregating—or breaking up—large satellites into multiple, smaller satellites or payloads, and introducing competition into the acquisition of launch services. DOD is assessing options for future capabilities in several key mission areas through analyses of alternatives, comparing multiple potential solutions to satisfy capability needs. However, the time frames for making decisions about the way forward are narrowing, and if not made in time, DOD may be forced to continue with existing approaches for its next systems, as GAO reported in April 2015. 
Implementing any new approaches will be difficult if DOD does not overcome long-standing leadership problems for its space programs, including cultural resistance to acquisition process changes and fragmented responsibilities. More recently, DOD has taken steps to address some of these leadership challenges, though it is too early to tell whether such efforts will be effective. Past GAO reports have generally recommended that DOD adopt best practices for developing space systems. DOD has agreed and is currently implementing those practices. Consequently, GAO is not making any recommendations in this testimony.
DHS has begun to take action to work with other agencies to identify facilities that are required to report their chemical holdings to DHS but may not have done so. The first step of the CFATS process is focused on identifying facilities that might be required to participate in the program. The CFATS rule was published in April 2007, and appendix A to the rule, published in November 2007, listed 322 chemicals of interest and the screening threshold quantities for each. As a result of the CFATS rule, about 40,000 chemical facilities reported their chemical holdings and their quantities to DHS’s ISCD. In August 2013, we testified about the ammonium nitrate explosion at the chemical facility in West, Texas, in the context of our past CFATS work. Among other things, the hearing focused on whether the West, Texas, facility should have reported its holdings to ISCD given the amount of ammonium nitrate at the facility. During this hearing, the Director of the CFATS program remarked that throughout the existence of CFATS, DHS had undertaken and continued to support outreach and industry engagement to ensure that facilities comply with their reporting requirements. However, the Director stated that the CFATS regulated community is large and always changing and DHS relies on facilities to meet their reporting obligations under CFATS. At the same hearing, a representative of the American Chemistry Council testified that the West, Texas, facility could be considered an “outlier” chemical facility, that is, a facility that stores or distributes chemical-related products, but is not part of the established chemical industry. Preliminary findings of the CSB investigation of the West, Texas, incident showed that although certain federal agencies that regulate chemical facilities may have interacted with the facility, the ammonium nitrate at the West, Texas, facility was not covered by these programs. 
For example, according to the findings, the Environmental Protection Agency’s (EPA) Risk Management Program, which deals with the accidental release of hazardous substances, covers the accidental release of ammonia, but not ammonium nitrate. As a result, the facility’s consequence analysis considered only the possibility of an ammonia leak and not an explosion of ammonium nitrate. On August 1, 2013, the same day as the hearing, the President issued Executive Order 13650–Improving Chemical Facility Safety and Security, which was intended to improve chemical facility safety and security in coordination with owners and operators. The executive order established a Chemical Facility Safety and Security Working Group, composed of representatives from DHS; EPA; and the Departments of Justice, Agriculture, Labor, and Transportation, and directed the working group to identify ways to improve coordination with state and local partners; enhance federal agency coordination and information sharing; modernize policies, regulations and standards; and work with stakeholders to identify best practices. In February 2014, DHS officials told us that the working group has taken actions in the areas described in the executive order. For example, according to DHS officials, the working group has held listening sessions and webinars to increase stakeholder input, explored ways to share CFATS data with state and local partners to increase coordination, and launched a pilot program in New York and New Jersey aimed at increasing federal coordination and information sharing. DHS officials also said that the working group is exploring ways to better share information so that federal and state agencies can identify non-compliant chemical facilities and identify options to improve chemical facility risk management. This would include considering options to improve the safe and secure storage, handling, and sale of ammonium nitrate. 
DHS has also begun to take actions to enhance its ability to assess risk and prioritize facilities covered by the program. For the second step of the CFATS process, facilities that possess any of the 322 chemicals of interest at levels at or above the screening threshold quantity must first submit data to ISCD via an online tool called a Top-Screen. ISCD uses the data submitted in facilities’ Top-Screens to make an assessment as to whether facilities are covered under the program. If DHS determines that they are covered by CFATS, facilities are to then submit data via another online tool, called a security vulnerability assessment, so that ISCD can further assess their risk and prioritize the covered facilities. ISCD uses a risk assessment approach to develop risk scores to assign chemical facilities to one of four final tiers. Facilities placed in one of these tiers (tier 1, 2, 3, or 4) are considered to be high risk, with tier 1 facilities considered to be the highest risk. The risk score is intended to be derived from estimates of consequence (the adverse effects of a successful attack), threat (the likelihood of an attack), and vulnerability (the likelihood of a successful attack, given an attempt). ISCD’s risk assessment approach is composed of three models, each based on a particular security issue: (1) release, (2) theft or diversion, and (3) sabotage, depending on the type of risk associated with the 322 chemicals. Once ISCD estimates a risk score based on these models, it assigns the facility to a final tier. Our prior work showed that the CFATS program was using an incomplete risk assessment approach to assign chemical facilities to a final tier. Specifically, in April 2013, we reported that the approach ISCD used to assess risk and make decisions to place facilities in final tiers did not consider all of the elements of consequence, threat, and vulnerability associated with a terrorist attack involving certain chemicals. 
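The risk construct at issue here, a score derived from consequence, threat, and vulnerability and mapped to one of four tiers, can be sketched in a few lines of code. This is a purely illustrative model: the multiplicative combination, the scales, and the tier cutoffs below are hypothetical and are not ISCD’s actual scoring methodology.

```python
# Illustrative only: a toy risk-tiering calculation in the spirit of the
# NIPP construct (risk as a function of consequence, threat, and
# vulnerability). Scales and thresholds are invented for this sketch and
# do not reflect ISCD's actual model.

def risk_score(consequence: float, threat: float, vulnerability: float) -> float:
    """Combine the three elements multiplicatively: consequence on a
    hypothetical 0-100 scale; threat and vulnerability as 0-1 likelihoods."""
    return consequence * threat * vulnerability

def assign_tier(score: float) -> int:
    """Map a score to one of four tiers (tier 1 = highest risk);
    the cutoffs are hypothetical."""
    if score >= 75:
        return 1
    if score >= 50:
        return 2
    if score >= 25:
        return 3
    return 4

# Treating every facility as equally vulnerable (vulnerability = 1.0)
# makes the score depend only on consequence and threat:
print(assign_tier(risk_score(consequence=90, threat=0.9, vulnerability=1.0)))  # 1
print(assign_tier(risk_score(consequence=90, threat=0.9, vulnerability=0.3)))  # 4
```

As the sketch suggests, holding vulnerability constant across facilities collapses the model to a function of consequence and threat alone, regardless of a facility’s location or on-site security.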
For example, the risk assessment approach was based primarily on consequences arising from human casualties, but did not consider economic criticality consequences, as called for by the 2009 National Infrastructure Protection Plan (NIPP) and the CFATS regulation. In April 2013, we reported that ISCD officials told us that, at the inception of the CFATS program, they did not have the capability to collect or process all of the economic data needed to calculate the associated risks and they were not positioned to gather all of the data needed. They said that they collected basic economic data as part of the initial screening process; however, they would need to modify the current tool to collect sufficient data. We also found that the risk assessment approach did not consider threat for approximately 90 percent of tiered facilities. Moreover, for the facilities that were tiered using threat considerations, ISCD was using 5-year-old data. We also found that ISCD’s risk assessment approach was not consistent with the NIPP because it did not consider vulnerability when developing risk scores. When assessing facility risk, ISCD’s risk assessment approach treated every facility as equally vulnerable to a terrorist attack regardless of location and on-site security. As a result, in April 2013 we recommended that ISCD enhance its risk assessment approach to incorporate all elements of risk and conduct a peer review after doing so. ISCD agreed with our recommendations, and in February 2014, ISCD officials told us that they were taking steps to address them and the recommendations of a recently released Homeland Security Studies and Analysis Institute (HSSAI) report that examined the CFATS risk assessment model. As with the findings in our report, HSSAI found, among other things, that the CFATS risk assessment model inconsistently considers risks across different scenarios and that the model does not adequately treat facility vulnerability. 
Overall, HSSAI recommended that ISCD revise the current risk-tiering model and create a standing advisory committee—with membership drawn from government, expert communities, and stakeholder groups—to advise DHS on significant changes to the methodology. In February 2014, senior ISCD officials told us that they have developed an implementation plan that outlines how they plan to modify the risk assessment approach to better include all elements of risk while incorporating our findings and recommendations and those of HSSAI. Moreover, these officials stated that they have completed significant work with Sandia National Laboratory with the goal of including economic consequences into their risk tiering approach. They said that the final results of this effort to include economic consequences will be available in the summer of 2014. With regard to threat and vulnerability, ISCD officials said that they have been working with multiple DHS components and agencies, including the Transportation Security Administration and the Coast Guard, to see how they consider threat and vulnerability in their risk assessment models. ISCD officials said that they anticipate that the changes to the risk tiering approach should be completed within the next 12 to 18 months. We plan to verify this information as part of our recommendation follow-up process. DHS has begun to take action to lessen the time it takes to review site security plans which could help DHS reduce the backlog of plans awaiting review. For the third step of the CFATS process, ISCD is to review facility security plans and their procedures for securing these facilities. Under the CFATS rule, once a facility is assigned a final tier, it is to submit a site security plan or participate in an alternative security program in lieu of a site security plan. The security plan is to describe security measures to be taken and how such measures are to address applicable risk-based performance standards. 
After ISCD receives the site security plan, the plan is reviewed using teams of ISCD employees (i.e., physical, cyber, chemical, and policy specialists), contractors, and ISCD inspectors. If ISCD finds that the requirements are satisfied, ISCD issues a letter of authorization to the facility. After ISCD issues a letter of authorization to the facility, ISCD is to then inspect the facility to determine if the security measures implemented at the site comply with the facility’s authorized plan. If ISCD determines that the site security plan is in compliance with the CFATS regulation, ISCD approves the site security plan and issues a letter of approval to the facility, and the facility is to implement the approved plan. In April 2013, we reported that it could take another 7 to 9 years before ISCD would be able to complete reviews of the approximately 3,120 plans in its queue at that time. As a result, we estimated that the CFATS regulatory regime, including compliance inspections (discussed in the next section), would likely not be implemented for 8 to 10 years. We also noted in April 2013 that ISCD had revised its process for reviewing facilities’ site security plans. ISCD officials stated that they viewed the revised process as an improvement because, among other things, teams of experts reviewed parts of the plans simultaneously rather than sequentially, as had occurred in the past. In April 2013, ISCD officials said that they were exploring ways to expedite the process, such as streamlining inspection requirements. 
In February 2014, ISCD officials told us that they are taking a number of actions intended to lessen the time it takes to complete reviews of remaining plans, including the following: providing updated internal guidance to inspectors and ISCD staff; updating the internal case management system; providing updated external guidance to facilities to help them better prepare their site security plans; conducting inspections using one or two inspectors at a time over the course of 1 day, rather than multiple inspectors over the course of several days; conducting pre-inspection calls to the facility to help resolve technical issues beforehand; creating and leveraging the use of corporate inspection documents (i.e., documents for companies that have over seven regulated facilities in the CFATS program); supporting the use of alternative security programs to help clear the backlog of security plans because, according to DHS officials, alternative security plans are easier for some facilities to prepare and use; and taking steps to streamline and revise some of the online data collection tools, such as the site security plan, to make the process faster. It is too soon to tell whether DHS’s actions will significantly reduce the amount of time needed to resolve the backlog of site security plans because these actions have not yet been fully implemented. In April 2013, we also reported that DHS had not finalized the personnel surety aspect of the CFATS program. The CFATS rule includes a risk-based performance standard for personnel surety, which is intended to provide assurance that facility employees and other individuals with access to the facility are properly vetted and cleared for access to the facility. 
In implementing this provision, we reported that DHS intended to (1) require facilities to perform background checks on and ensure appropriate credentials for facility personnel and, as appropriate, visitors with unescorted access to restricted areas or critical assets, and (2) check for terrorist ties by comparing certain employee information with the federal government’s consolidated terrorist watch list. However, as of February 2014, DHS had not finalized its information collection request that defines how the personnel surety aspect of the performance standards will be implemented. Thus, DHS is currently approving facility security plans conditionally, whereby plans are not to be finally approved until the personnel surety aspect of the program is finalized. According to ISCD officials, once the personnel surety performance standard is finalized, they plan to reexamine each conditionally approved plan. They would then grant final approval as long as ISCD had assurance that the facility was in compliance with the personnel surety performance standard. As an interim step, in February 2014, DHS published a notice about its Information Collection Request (ICR) for personnel surety to gather information and comments prior to submitting the ICR to the Office of Management and Budget (OMB) for review and clearance. According to ISCD officials, it is unclear when the personnel surety aspect of the CFATS program will be finalized. During a March 2013 hearing on the CFATS program, industry officials discussed using DHS’s Transportation Worker Identification Credential (TWIC) as one approach for implementing the personnel surety program. The TWIC, which is also discussed in DHS’s ICR, is a biometric credential issued by DHS for maritime workers who require unescorted access to secure areas of facilities and vessels regulated under the Maritime Transportation Security Act of 2002 (MTSA). 
In discussing TWIC in the context of CFATS during the August 2013 hearing, officials representing some segments of the chemical industry stated that they believe that using TWIC would lessen the reporting burden and prevent facilities from having to submit additional personnel information to DHS while maintaining the integrity of the program. In May 2011 and May 2013, we reported that the TWIC program has some shortfalls—including challenges in development, testing, and implementation—that may limit its usefulness with regard to the CFATS program. We recommended that DHS take steps to resolve these issues, including completing a security assessment that addresses internal control weaknesses, among other things. The explanatory statement accompanying the Consolidated Appropriations Act, 2014, directed DHS to complete the recommended security assessment. However, as of February 2014, DHS had not yet done the assessment, and although DHS had taken some steps to conduct an internal control review, it had not corrected all the control deficiencies identified in our report. DHS reports that it has begun to perform compliance inspections at regulated facilities. The fourth step in the CFATS process is compliance inspections, by which ISCD determines if facilities are employing the measures described in their site security plans. During the August 1, 2013, hearing on the West, Texas, explosion, the Director of the CFATS program stated that ISCD planned to begin conducting compliance inspections in September 2013 for facilities with approved site security plans. The Director further noted that the inspections would generally be conducted approximately 1 year after plan approval. According to ISCD, as of February 24, 2014, ISCD had conducted 12 compliance inspections. ISCD officials stated that they have considered using third-party nongovernmental inspectors to conduct inspections but thus far do not have any plans to do so. 
In closing, we anticipate continuing to provide oversight of the issues outlined above and look forward to helping this and other committees of Congress continue to oversee the CFATS program and DHS’s progress in implementing it. Currently, the explanatory statement accompanying the Consolidated and Further Continuing Appropriations Act, 2013, directs GAO to continue its ongoing effort to examine the extent to which DHS has made progress and encountered challenges in developing CFATS. Additionally, once the CFATS program begins performing and completing a sufficient number of compliance inspections, we are mandated to review those inspections along with various aspects of them. Chairman Carper, Ranking Member Coburn, and members of the Committee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For information about this statement, please contact Stephen L. Caldwell at (202) 512-9610 or caldwellS@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this and our prior work included John F. Mortin, Assistant Director; Jose Cardenas, Analyst-in-Charge; Chuck Bausell; Michele Fejfar; Jeff Jensen; Tracey King; Marvin McGill; Jessica Orr; Hugh Paquette; and Ellen Wolfe. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Facilities that produce, store, or use hazardous chemicals could be of interest to terrorists intent on using toxic chemicals to inflict mass casualties in the United States. As required by statute, DHS issued regulations establishing standards for the security of these facilities. DHS established the CFATS program to assess risk at facilities covered by the regulations and inspect them to ensure compliance. This statement provides observations on DHS efforts related to the CFATS program. It is based on the results of previous GAO reports issued in July 2012 and April 2013 and a testimony issued in February 2014. In conducting the earlier work, GAO reviewed DHS reports and plans on the program and interviewed DHS officials. In managing its Chemical Facility Anti-Terrorism Standards (CFATS) program, the Department of Homeland Security (DHS) has a number of efforts underway to identify facilities that are covered by the program, assess risk and prioritize facilities, review and approve facility security plans, and inspect facilities to ensure compliance with security regulations. Identifying facilities. DHS has begun to work with other agencies to identify facilities that should have reported their chemical holdings to CFATS, but may not have done so. DHS initially identified about 40,000 facilities by publishing a CFATS rule requiring that facilities with certain types and quantities of chemicals report certain information to DHS. However, a chemical explosion in West, Texas, last year demonstrated the risk posed by chemicals covered by CFATS. Subsequent to this incident, the President issued Executive Order 13650, which was intended to improve chemical facility safety and security in coordination with owners and operators. Under the executive order, a federal working group is sharing information to identify additional facilities that are to be regulated under CFATS, among other things. Assessing risk and prioritizing facilities. 
DHS has begun to enhance its ability to assess risks and prioritize facilities. DHS assessed the risks of facilities that reported their chemical holdings in order to determine which ones would be required to participate in the program and subsequently develop site security plans. GAO's April 2013 report found weaknesses in multiple aspects of the risk assessment and prioritization approach and made recommendations to review and improve this process. In February 2014, DHS officials told us they had begun to take action to revise the process for assessing risk and prioritizing facilities. Reviewing security plans. DHS has also begun to take action to speed up its reviews of facility security plans. Per the CFATS regulation, DHS is to review security plans and visit the facilities to make sure their security measures meet the risk-based performance standards. GAO's April 2013 report found a 7- to 9-year backlog for these reviews and visits, and DHS has begun to take action to expedite these activities. As a separate matter, one of the performance standards—personnel surety, under which facilities are to perform background checks and ensure appropriate credentials for personnel and visitors as appropriate—is still being developed. DHS conditionally approved the facility plans it had reviewed as of February 2014, pending final development of the personnel surety performance standard. According to DHS officials, it is unclear when the standard will be finalized. Inspecting to verify compliance. In February 2014, DHS reported it had begun to perform inspections at facilities to ensure compliance with their site security plans. According to DHS, these inspections are to occur about 1 year after facility site security plan approval. Given the backlog in plan approvals, this process has started only recently, and GAO has not yet reviewed this aspect of the program. 
In a July 2012 report, GAO recommended that DHS measure its performance implementing actions to improve its management of CFATS. In an April 2013 report, GAO recommended that DHS enhance its risk assessment approach to incorporate all elements of risk, conduct a peer review, and gather feedback on its outreach to facilities. DHS concurred and has taken actions or has actions underway to address them.
In recent years, Congress passed two pieces of legislation intended, in part, to foster greater coordination among education, welfare, and employment and training programs. The Workforce Investment Act was passed in 1998 and fundamentally changed the nature of federally funded employment and training services. WIA replaced the former program with a new one that focused more on providing services to the general public. WIA also provides for greater consolidation in service delivery, requiring states and localities to use a centralized service delivery structure—the one-stop center system—to provide most federally funded employment and training assistance. The Temporary Assistance for Needy Families block grant, created 2 years earlier by the 1996 Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA), allowed states greater flexibility than ever before in designing employment and training services for clients receiving cash assistance. While TANF is not one of 17 federal programs mandated to provide services through the one-stop system, states and localities have the option to include TANF as a partner. Our prior work on pre-WIA one-stops found that states varied in the degree to which employment and training services for TANF clients were being coordinated through the one-stop system. WIA replaced the four Job Training Partnership Act (JTPA) programs for economically disadvantaged adults and youth, and dislocated workers, with three new ones—adult, dislocated worker, and youth—that de-emphasize the categorical nature of JTPA and allow for a broader range of services to be given to the general public. Services provided under WIA are markedly different from those provided under JTPA, no longer focusing exclusively on training. 
Instead, the adult and dislocated worker programs provide for three tiers, or levels, of service: core (basic services such as job search assistance); intensive (staff-intensive services such as assessment and case management); and training for eligible individuals. To gauge the success of WIA-funded programs, states and localities are held accountable through the use of 17 different performance measures that focus on outcomes such as getting and keeping a job. States negotiate with the Department of Labor to determine the level of performance they are expected to achieve for each of the measures; localities, in turn, negotiate with the states to determine their expected levels of performance. In addition to establishing the three new programs, WIA requires that states and localities use the one-stop center system to provide services for these and many other employment and training programs. This one-stop system was developed by states prior to WIA, largely through One-Stop Planning and Implementation Grants from Labor. About 17 categories of programs, funded through four federal agencies—the departments of Labor, Education, Health and Human Services, and Housing and Urban Development—must provide services through the one-stop center system under WIA. WIA does not require that all program services be provided on site (or colocated)—they may be provided through electronic linkages with partner agencies or by referral—but WIA does require that the partners’ relationships and services be spelled out in a memorandum of understanding. While several programs are required by WIA to provide services through the one-stop centers, others have been left to the discretion of state and local officials, including the TANF block grant program. Flexibility is also a key feature of the TANF program. 
Under TANF, states have more flexibility than under its predecessor programs to determine the nature of financial assistance, the types of client services, the structure of the program, and how services are to be delivered. At the same time, TANF established new accountability measures for states—focused in part on meeting work requirements—and a 5-year lifetime limit on federal TANF assistance. These measures heighten the importance of helping TANF recipients find work quickly and retain employment. As states have used the new flexibility under TANF and have focused more on employment, the importance of coordinating services for TANF clients has received increased attention. To help clients get and keep jobs, states need to address problems that may interfere with employment, such as child care and transportation issues and mental and physical health problems. Frequently, solving these problems requires those who work directly with clients to draw on other federal and state programs, often administered by other agencies, to provide a wide array of services. While local welfare agencies have typically administered TANF, Food Stamps, and Medicaid, other programs that provide key services to TANF clients are administered by housing authorities, education agencies, and state employment services offices. TANF’s focus on employment means that welfare agencies may need to work more closely than before with state and local workforce development systems, in part, to help reduce the burden on employers who might need to respond to requests from multiple government agencies. In the past, under the Work Incentive Program, welfare agencies and workforce development systems collaborated at some level, but our previous work on pre-WIA programs found wide variation in the degree to which the welfare and non-welfare programs worked together to provide employment and training services. 
State and local efforts to coordinate their TANF and WIA programs increased in 2001, at least 1 year after all states implemented WIA. Nearly all states reported some coordination at the state or local level, achieved by means ranging from informal linkages (such as information sharing or periodic program referrals) to formal linkages (such as memorandums of understanding), shared intake, or integrated case management. Coordination of TANF-related services with one-stop centers increased from 2000 to 2001, and the form of coordination—colocation of services, electronic linkages or client referral—was based, in part, on the type of services provided—TANF work services, TANF cash assistance, or support services—as well as state and local preferences and conditions. Nearly all states reported some linkages at the state level between their TANF and WIA agencies, and we saw modest increases in states’ efforts to coordinate the programs between 2000 and 2001. Twenty-eight states reported that in 2001 they made extensive use of formal linkages, such as memorandums of understanding and state-level formal agreements, between the agencies administering TANF and WIA, compared with 27 states in 2000. Similarly, states increased their use of coordinated planning in 2001, with 19 states reporting that they used it to a great extent, compared with 18 states in 2000 (see fig. 1). When we looked at states individually, we saw that many used additional coordination methods in 2001. Seventeen states indicated that the number of the state-level coordination methods they used to a great extent increased in 2001. In fact, in 2001, 9 states used all five of the coordination methods that we analyzed, up from 7 states in 2000. Increased coordination between TANF and WIA programs was also seen in the use of TANF funds to support one-stop center infrastructure or operations. The number of states using TANF funds to support one-stop centers increased to 36 in 2001 from 33 in 2000. 
In addition, the number of states ranking TANF as one of the three largest funding sources for their one-stop centers rose to 15 from 12. Some of the largest gains in program coordination between 2000 and 2001 were seen at the local level. Forty-four states reported that most of their one-stop centers had informal linkages, such as periodic program referrals or information sharing, with their TANF programs in 2001, compared with 35 states in 2000 (see fig. 2). Similarly, 16 states reported that most of their one-stop centers had shared intake or enrollment systems in 2001—up from 13 in 2000; and 15 states reported in 2001 that they used an integrated case management system in most of their one-stop centers—an increase of 1 state from our 2000 results. Also, our analysis suggests that more coordination methods are in use at the local level. The number of states that reported that most of their one-stop centers used all seven methods of local-level coordination increased in 2001 to 10 states from 7 in 2000. Increases in coordination between the TANF programs and one-stop centers were also seen in the use of the one-stop system to provide services to TANF clients. While the same number of states—24—reported in both 2000 and 2001 that services for the TANF work program were colocated at the majority of their one-stops, the use of electronic linkages or referrals increased. In 2001, 15 states reported that services for the TANF work program were either electronically linked to the majority of their one-stop centers or provided by referral between the two programs, compared with 11 states in 2000. About half of the states coordinated their TANF cash assistance, Food Stamps, or Medicaid programs with the one-stop centers electronically or by referral in 2000 and 2001.
State officials in both Connecticut and New Jersey reported that even though one-stop staff did not determine eligibility for Medicaid and Food Stamps at the one-stop centers, the staff were expected to refer clients to appropriate support services outside the one-stop centers. While not as prevalent as electronic linkages or referrals, colocation of cash assistance appeared to increase in 2001: 16 states reported that they provided cash assistance services at least part time at the majority of their one-stop centers, compared with 9 states in 2000. Colocation of Food Stamps and Medicaid remained the same: 7 states reported in both years that they provided those services at least part time at the majority of the one-stop centers. In general, the form of coordination between TANF and one-stops was different depending on the particular program services that were provided. For example, when the TANF work programs were coordinated through the one-stop centers, services were more likely to be colocated. TANF cash assistance and the Food Stamps and Medicaid programs were more likely to be connected electronically or by referrals (see fig. 3). Sometimes states instituted policies to further strengthen the relationships between the programs and to ensure that clients were connected to one-stop services. In Michigan, for example, TANF clients are required to attend an orientation session at the one-stop before they can receive cash assistance. Similarly, in Connecticut, where there were low participation rates for TANF clients at one-stop centers, the legislature enacted a law requiring TANF clients to use one-stop centers as a condition of receiving cash assistance. State and local officials told us that decisions about how services were delivered were based on state and local preferences and conditions. Some state and local officials expressed a preference for colocating TANF programs at one-stop centers. 
For example, officials in a local area in Louisiana believed that colocation of TANF programs at the one-stop center would benefit TANF clients by exposing them to the one-stop center’s employer focus. These officials also said that colocation would result in a more seamless service delivery approach, giving clients easier access to the services. Other officials preferred not to colocate all TANF-related programs. While they supported the colocation of TANF work programs, they believed that cash assistance, Food Stamps, or Medicaid should be provided elsewhere. For example, Michigan officials told us that keeping eligibility functions for TANF cash, Food Stamps, and Medicaid separate was beneficial, because welfare staff had more expertise in the provision of social services while Department of Labor staff were better equipped to provide work-related services. Still other officials were concerned about the colocation of any TANF-related programs, sharing a view that TANF clients required special attention and were best served by staff trained to address the barriers they may face in obtaining employment. Some officials saw the one-stop centers as better structured to serve clients whose participation was voluntary, whereas TANF clients are generally required to engage in work activities. Officials in Washington state, for example, reported that TANF clients need a higher level of supervision and more structured assistance to maintain participation in the program and achieve desired outcomes than they believed one-stop centers could provide. Despite apparent increases in coordination between the TANF programs and one-stops from 2000 to 2001, states and localities have continued to face challenges in coordinating their TANF work programs with one-stop centers. WIA funds may not be readily used to serve TANF clients in the one-stops because WIA’s performance measurement system discourages serving those clients who may not be successful.
In addition, when TANF clients need training to achieve self-sufficiency, the amount of training available under WIA is generally less than what was historically provided under JTPA. Even when TANF funds were used in the one-stops, states and localities encountered challenges in coordinating services. Most of these challenges are similar to those we reported in 2000 when WIA was first implemented. When TANF clients are served in the one-stop, they may be eligible to receive services funded by a range of programs in the one-stop in addition to TANF—the primary one being WIA—but WIA funds may not always be made available to serve TANF clients. WIA’s performance measurement system, with its focus on achieving successful outcomes, such as getting and keeping a job, may actually discourage the one-stop from providing WIA-funded services to those who are less likely to be successful or those who are considered hard-to-serve. In addition, states and localities have reduced the amount of training they provide using WIA funds. WIA’s performance measurement system—one based on client outcomes, such as getting and keeping a job and increasing wages—may actually discourage localities from serving those clients who are less likely to be successful or those considered hard-to-serve. Under the new system, states and localities are expected to achieve levels of performance that have been negotiated in advance; not achieving these levels may mean financial sanctions for states and localities. In a recent study we reported that all the states we visited believed that the negotiated levels for some of their measures were set too high for them to meet. States reported that limitations in available baseline, or historical, data made it difficult to set fair, realistic performance levels. Some measures had no prior data available on which to set performance levels.
Where baseline data were available, such as for the wage-related measures, the data were collected under JTPA, a program whose population focus was different from that of WIA. In addition, some states believe that the performance levels did not account for variations in economic conditions or for the many economically disadvantaged or hard-to-serve individuals seeking services in some local areas. As a result, state officials told us that, in order to meet their performance levels, local areas may not be registering or serving all clients eligible for services under WIA. One state official described how local areas were carefully screening potential WIA participants and holding meetings to decide whether to register them. TANF clients with little or no work history may be particularly vulnerable to being screened out of WIA-funded services because of concerns over meeting job retention goals. Even without concerns over their ability to meet performance measures, the amount of training that states and localities are providing has been reduced under WIA. Because of PRWORA’s work participation rate requirements and some states’ work first approach—one that emphasizes obtaining employment quickly—few TANF clients may be considered eligible for training. Even when TANF clients are eligible for training, few training funds may be available under WIA. Training options for job seekers appear to be decreasing rather than increasing, as training providers reduce the number of course offerings they make available to WIA job seekers. As we reported previously, training providers say that the data collection burden resulting from participation in WIA can be significant and may discourage them from participating. For example, the requirement that training providers collect outcome data on all students in a class may mean calling hundreds of students to obtain placement and wage information, even if there is only one WIA-funded student in the class.
Even if they used other methods that may be less resource-intensive, training providers say that privacy restrictions might limit their ability to collect or report student outcome data. As a result, training providers, including local community colleges, are frequently opting out of providing services funded by WIA. Training providers find the reporting requirements particularly burdensome given the relatively small number of individuals who have been sent for training. Given a new emphasis under WIA on focusing intensive services or training on those clients who are not successful in getting a job, local workforce areas are often designing programs that use a work first approach. As a result, the amount of training they provide is often reduced from what was provided under JTPA. We recently reported that this work first approach often means that localities require job seekers to dedicate a set amount of time or a specific number of tasks to finding employment before receiving additional services, including training. For example, a counselor from a local area in Massachusetts told us that clients with marketable skills are expected to seek employment rather than additional training. As a result of these changes under WIA, the amount of training provided using WIA funds may be declining from the levels observed under JTPA. Even when officials chose to use the one-stop to provide most TANF-funded services, they told us they encountered challenges to coordination. Limited facilities, unavailability of one-stop centers in some areas, and few TANF clients in other areas, as well as incompatible computer systems sometimes mean that services cannot readily be colocated or coordinated in other ways, even when officials would otherwise choose to do so. In addition, incompatible program reporting requirements sometimes limit efforts to coordinate services. These challenges are the same ones we reported nearly 2 years ago when WIA was first implemented. Facilities.
Limited facilities have hampered state and local efforts to bring services together through the one-stop system. Colocation of TANF services within the one-stop was not a viable option in many of the locations that we visited for a recent study. Officials in several states reported that available space at one-stop centers was limited and that the centers could not house additional programs or service providers. In addition, state officials explained that long-term leases or the use of state-owned buildings often prevented TANF work programs from relocating to one-stop centers. Local conditions, such as unavailability of one-stop centers in some areas and few TANF clients in other areas, may also mean that all TANF work programs are not easily colocated at one-stop centers. For example, officials in Alabama reported that although welfare agencies were located in every county, one-stop centers were less prevalent in their state. They believed it was impractical to have TANF-related services colocated at one-stop centers, because one-stop centers would be inaccessible to many TANF clients. In addition, officials in Illinois said that they were hesitant to coordinate the provision of work-related services for TANF clients at one-stop centers in areas where the TANF population had recently declined. Because of declining TANF caseloads in Illinois, state officials stressed the importance of allowing local areas the flexibility to determine how to coordinate TANF-related services with one-stop centers. Information Systems. The states that we visited reported that the inability to link the information systems of TANF work programs and one-stop centers complicated efforts to coordinate programs. A recent conference that we cosponsored also highlighted this issue, specifically identifying the age of information systems as inhibiting coordination efforts.
The pressing need to modernize the systems stemmed from the shift in objectives under TANF—focusing more on preparing TANF clients for work than under previous welfare programs. This shift created new demands on information systems—systems that were often antiquated and limited in their ability to use new technologies, such as Web-based technologies. In addition, the systems used by agencies providing services to TANF clients did not provide for sharing client data, thus hindering the case management of clients. Some of these concerns were also raised during site visits and telephone interviews for our recent study. Some local officials said that they could not merge or share data and were not equipped to collect information on clients in different programs. TANF clients are often tracked separately from clients of other programs, and even Labor’s system, the One-Stop Operating System (OSOS), does not allow one-stop centers to include any programs outside of Labor’s programs, including TANF. In addition, other officials expressed concerns that sharing data across programs would violate confidentiality restrictions. Program Reporting Requirements. State officials noted that although the focuses of TANF work and WIA programs were related, differences in program definitions—such as what constitutes work or what income level constitutes self-sufficiency—and the different reporting requirements attached to the various funding streams made coordination difficult. Each program has restrictions on how its money can be used and what type of indicators it can use to measure success. Because the federal measures evaluate very different things, tracking performance for the TANF and WIA programs together was difficult. Despite the flexibility in TANF, state officials felt constrained by the need to meet federally required work participation rates, and they told us that they used these federal requirements to gauge how well their TANF work programs were performing.
For example, one state official was concerned that the state TANF agency was focused more on meeting work participation rates than on designing programs that might help TANF clients become self-sufficient. WIA, on the other hand, has a different set of performance measures geared toward client outcomes, including the degree to which clients’ earnings change over time and whether or not clients stay employed. The difficulty in coordinating services while tracking separate performance measures for multiple programs in the one-stop extends beyond TANF. We recently reported that separate performance measures impede cooperation among the one-stop program partners, a concern expressed by over one-third of the states surveyed. Some states even believed that separate measures caused competition among programs. In addition, there are currently no measures to gauge the overall success of the one-stop system in coordinating services and in meeting the needs of employers and job seekers. The Department of Labor has convened a working group to develop additional indicators of one-stop performance that states and localities could use, but these measures are not yet available. We have recommended to the Department of Labor that it ensure the development of these measures in enough time for states to implement them at the beginning of program year 2002. Despite these challenges, states and localities are designing and developing coordinated service delivery approaches at the one-stop centers, finding strategies to serve TANF clients and other job seekers by focusing their efforts on resolving some of the longstanding issues inherent in a fragmented system. In so doing, they have looked to the new requirements of WIA and focused on a broader range of services to meet the employment-related needs of the general public. In addition, they have begun to emphasize simultaneous services to both employers and job seekers.
While no outcome data are yet available on the success of their work, some of their early efforts show promise for implementing an integrated workforce investment system. In designing services at one-stop centers, states and localities have sought to combine WIA’s emphasis on services to the general public with efforts to solve the problems that have existed in a fragmented employment and training system. For example, in earlier work, we identified some key problems that exist in a fragmented system, including (1) frustration for employers because of wasted time responding to multiple job inquiries for the same openings from several different government entities; (2) confusion on the part of job seekers and service providers because there was not a clear entry point or clear path from one program to another, nor was there ready access to program information; and (3) frustration for job seekers because programs were not tailored to meet their needs and because navigating the various programs to get needed assistance meant completing multiple intake and assessment procedures. To effectively coordinate their programs, states and localities needed to address these issues, while meeting the enhanced client focus of WIA. We identified the following key areas as critical to successfully integrating services under WIA: Attracting and serving employers in ways that minimize wasted time and reduce their frustration. Bringing job seekers to the one-stop centers to help them obtain ready access to employment and program information. Creating a customer-friendly environment for job seekers by reducing confusion and providing them with a clear entry point and a clear path from one program to another. Providing job seekers with tailored, seamless services and helping them identify and obtain needed program services without the burden of completing multiple intake and assessment procedures.
Helping job seekers become self-sufficient by providing post-employment services that assist with job retention and advancement. Figure 4 charts the processes followed by customers passing through the system and each of the key areas in which we identified promising approaches. To effectively attract and better serve employers, many one-stop centers market their services, minimize the burden on employers who use the centers, and provide employer-focused services. Employers may be confused when multiple government agencies—such as the local welfare agency and the one-stop operator—both contact them to seek employment opportunities for their clients. To bring in employers and to reduce the frustration and confusion that they experienced when receiving contacts from multiple agencies, the centers we visited in an earlier study in Titusville and Melbourne, Florida, designated an individual or a team to serve as the center’s representative for an employer or employment sector, covering issues related to job listings and placements. In providing services to employers, centers in Dayton, Ohio, Janesville, Wisconsin, and in Utah allowed employers to use the one-stop facilities to recruit, interview, and test job candidates. One center in Florence, Kentucky, provided video-teleconferencing facilities so that candidates could be interviewed by employers who were located outside the local area. In a small center in Portland, Oregon, where facility space was limited, a desk was dedicated for employer use, allowing employers to have a presence at the one-stop center and to recruit, screen, and interview candidates. To help bring in and serve small businesses, centers in New Orleans, Louisiana; Killeen, Texas; and Eugene, Oregon, were creating business-only resource centers within their one-stop centers, with a range of special resources that included Internet services, business-related reference material, and assistance with business tax questions.
At the same time one-stop centers are attracting employers, they also need to attract job seekers, including those receiving TANF assistance, and make them aware of the centers’ resources. In our work on multiple employment programs, we found that job seekers were confused and frustrated by the limited information readily available on government programs that could help them and on where they could access this information. The centers we visited in another study found several ways to address these problems and bring in job seekers. For example, sites in New Jersey and Louisiana established satellite one-stop centers in public housing areas to bring in low-income job seekers, including TANF clients, for services. In the Denver, Colorado, area, one-stop centers are specialized, each providing services to certain populations. Clients receiving TANF cash assistance are served by a one-stop that occupies the lower floor of a professional-looking new facility that houses all welfare-related services. Other centers bring in customers by targeting services to younger members of the community, such as high school students. For example, the center in Racine, Wisconsin, established a youth resource area with computers and programs dedicated to career exploration. The center worked with the school system and has become a site for school field trips throughout the primary and secondary school years. And in Lafayette, Louisiana, officials created a separate youth-only one-stop center in the same building that contains a library and a substance abuse program. They also plan to locate information kiosks for youth services in shopping malls. Once job seekers are inside the door of the one-stop center, the next step is to create a customer-friendly environment—one that reduces confusion and provides a clear entry point to services.
One-stop center operators told us that they try to find ways to avoid the atmosphere of a government office and the long waiting lines that have symbolized government transactions, like applying for welfare benefits or unemployment insurance. Almost without exception, one-stop centers we visited had an information desk directly inside the front door that was continually staffed by a receptionist or greeter. Some centers considered this position key to providing high-quality services to their clients. One center in Texas assigned only top performers to the information desk and regarded that assignment as an honor. Many centers, such as those in Dayton, Ohio, and Killeen, Texas, minimized the waiting time for services by performing a quick assessment at the information desk and then referring clients to service areas. One-stop centers in Utah featured an express desk to serve customers needing quick services. Instead of having to sit down with a job counselor or case manager, customers using the express desk could, for instance, obtain bus passes or electronic benefit transfer cards, or drop off required documents, such as what might be needed to support a claim for TANF or food stamp benefits. Some centers, such as the one in Janesville, Wisconsin, also used their resource rooms—where they maintain job listings, computers with Internet access, telephones, and fax machines—as the waiting area for specialized services, thus allowing customers to use their wait time to accomplish necessary job search tasks. Many job seekers can meet all their needs in the self-service resource room. In fact, Labor officials expect that the majority of customers under WIA will receive needed services through self-service or with very limited assistance from staff. However, some clients, like many TANF recipients, may need more intensive case management and training services to help them get and keep a job.
Trying to obtain just this type of intensive service has historically frustrated clients who were left on their own to navigate the array of federal programs, each with its own intake and assessment procedures. One-stop centers we visited often found ways to coordinate the services provided by multiple programs, creating a seamless approach to delivering services. One locality in Connecticut, for example, cross-trains case management staff to provide both TANF and WIA services. A new youth-only one-stop in Milwaukee cross-trains staff in all partner programs so that services are always available and delivered seamlessly. And in Killeen, Texas, where more than one case manager could be involved in a case, the center assigned a primary case manager who took the lead to coordinate most activities and assist the client in navigating the system. In many locations we visited, the case managers were aware of all the program services available to serve a client—including support services to enable a client to attend training or to get or keep a job—and tailored the services to meet the client’s needs. In our earlier work, we found tailoring of services to be a key feature in successful employment training programs. The efforts of the one-stop centers do not end once a client gets a job. The focus of post-employment services changes to one of helping the client retain the job or get a better job. Localities sometimes focused most of their post-employment efforts on TANF clients, often providing transportation services—helping clients get to and from a job. For example, a New Jersey one-stop provided van services to transport former TANF clients to and from job interviews and, once clients were employed, to and from their jobs, even during evening and night shifts. Similarly, a one-stop in Connecticut provided mileage reimbursement to current and former TANF clients for their expenses associated with going to and from their jobs.
And in Louisiana, a one-stop we visited contracted with a nonprofit agency to provide van services to transport Welfare-to-Work grant recipients to and from work-related activities. Even though TANF was not made a mandatory partner under WIA, we see continuing evidence that many states and localities are increasing their efforts to bring services together to fit local needs. These changes, like all culture changes, will take time. It appears, however, that as the systems have matured and their shared purposes and goals have become more evident, many states and localities have found it advantageous to more formally coordinate their TANF and WIA services, although it is not happening everywhere. Many state and local officials hailed the flexibility in both the WIA and TANF programs as an important step in helping them to design their service delivery systems and to coordinate services where appropriate. But their efforts to bring services together continue to be hampered by the same obstacles that we reported nearly 2 years ago: limited capacity to develop the needed infrastructure—both in terms of facilities and information systems—and the need to respond to the multiple, sometimes incompatible, federal requirements of the separate programs. Despite the obstacles, some local areas have creatively found ways to coordinate services for their TANF clients through the one-stop system. However, as Congress moves toward reauthorizing TANF this year and WIA in 2003, consideration should be given to finding ways to remove remaining obstacles to coordinating services and focusing on client outcomes. Mr. Chairman, this concludes my prepared statement. I will be happy to respond to any questions that you or other members of the subcommittee may have. If you or other members of the subcommittee have questions regarding this testimony, please contact Sigurd Nilsen at (202) 512-7215 or Dianne Blank at (202) 512-5654. 
Individuals making key contributions to this testimony included Kara Finnegan Irving, Rachel Weber, and Natalya Bolshun. Workforce Investment Act: Youth Provisions Promote New Service Strategies, but Additional Guidance Would Enhance Program Development. GAO-02-413. Washington, D.C.: April 5, 2002. Workforce Investment Act: Coordination between TANF Programs and One-Stop Centers Is Increasing, but Challenges Remain. GAO-02-500T. Washington, D.C.: March 12, 2002. Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002. Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002. Human Services Integration: Results of a GAO Cosponsored Conference on Modernizing Information Systems. GAO-02-121. Washington, D.C.: January 31, 2002. Means-Tested Programs: Determining Financial Eligibility Is Cumbersome and Can Be Simplified. GAO-02-58. Washington, D.C.: November 2, 2001. Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001. Welfare Reform: Moving Hard-to-Employ Recipients Into the Workforce. GAO-01-368. Washington, D.C.: March 15, 2001. Multiple Employment Training Programs: Overlapping Programs Indicate Need for Closer Examination of Structure. GAO-01-71. Washington, D.C.: October 13, 2000. Welfare Reform: Work-Site Based Activities Can Play an Important Role in TANF Programs. GAO/HEHS-00-122. Washington, D.C.: July 28, 2000. Workforce Investment Act: Implementation Status and the Integration of TANF Services. GAO/T-HEHS-00-145. Washington, D.C.: June 29, 2000. Welfare Reform: States’ Experiences in Providing Employment Assistance to TANF Clients. GAO/HEHS-99-22. Washington, D.C.: February 26, 1999.
A central focus of welfare reform has been to help needy adults with children find and keep jobs. The Workforce Investment Act of 1998 (WIA) unifies a fragmented employment and training system. Despite its similar fundamental focus, the Temporary Assistance for Needy Families (TANF) program was not required to participate in the one-stop system, although many states are coordinating their TANF services through one-stop centers. GAO found that coordination between TANF programs and WIA's one-stop centers has risen since WIA was first implemented in the spring of 2000. WIA funds may not be readily used to serve TANF clients in the one-stops because WIA's performance measures may discourage serving clients who may not be successful. Moreover, when TANF clients need training to achieve self-sufficiency, WIA funds may be unavailable because the amount of training provided under WIA has been reduced. Some local areas have found innovative ways to provide TANF services in the one-stops, often focusing on resolving the issues that had plagued the fragmented employment training system.
In pursuit of its mission to reduce poverty by supporting economic growth, MCC has identified and defined the following three key principles to guide its actions: Reward good policy. “Using objective indicators, countries are selected to receive assistance based on their performance in governing justly, investing in their citizens, and encouraging economic freedom.” Operate in partnership. “Countries that receive MCA assistance are responsible for identifying the greatest barriers to their own development, ensuring civil society participation, and developing a multi-year MCC compact.” Focus on results. “MCA assistance goes to those countries that have developed well-designed programs with clear objectives, benchmarks to measure expected results, procedures to ensure fiscal accountability for the use of MCA assistance, and a plan for effective monitoring and objective evaluation of results. Programs are designed so that recipient countries can sustain progress after the funding under the compact has ended.” MCC is a government corporation that is managed by a CEO appointed by the President with the advice and consent of the Senate and is overseen by a Board of Directors (MCC Board). The Secretary of State serves as board chair, and the Secretary of the Treasury serves as vice-chair. Other board members are the U.S. Trade Representative, the Administrator of the U.S. Agency for International Development (USAID), the CEO of MCC, and up to four Senate-confirmed public members who are appointed by the President from lists of individuals submitted by congressional leadership. The Millennium Challenge Act of 2003 requires MCC to select countries as eligible for MCA assistance each fiscal year. Countries with per capita income at or below a set threshold may be selected as eligible for assistance if they pass MCC indicator criteria and are not statutorily barred from receiving U.S. assistance. 
MCC uses 16 indicators divided into three categories: Ruling Justly, Encouraging Economic Freedom, and Investing in People. To be eligible for MCA assistance, countries must score above the median relative to their peers on at least half of the indicators in each category and above the median on the indicator for combating corruption. MCC used these quantitative indicators, as well as the discretion implicit in the Millennium Challenge Act, to select 17 countries as eligible to apply for MCA compact assistance for fiscal years 2004 and 2005. For fiscal year 2006, MCC identified 23 countries as eligible for assistance—the 17 previously selected and 6 additional countries, which included lower-middle-income countries eligible for the first time in fiscal year 2006. After MCC selects eligible countries, they may begin a four-phase process that can lead to the entry into force of compacts (see fig. 1). Each phase of this process is discussed after figure 1. 1. Country proposal development. Eligible countries are invited to submit compact proposals, which are to be developed in consultation with members of civil society, including the private sector and nongovernmental organizations (NGO). The eligible country also identifies an “accountable entity” to manage the programs funded by MCC. Eligible countries submitting proposals are not guaranteed funding; instead, MCC assesses proposals through its due diligence review. As of May 2006, 14 of the 17 countries selected as eligible in fiscal year 2004 or 2005, and 1 of the 6 countries selected as eligible for the first time in fiscal year 2006, had submitted proposals accepted by MCC for due diligence review. 2. MCC’s due diligence review. MCC determines whether the proposal that an eligible country has submitted meets MCC criteria to ensure that proposed programs will be effective and funds will be well-used. 
Due diligence primarily occurs between MCC’s acceptances of an “opportunity memo” and an “investment memo.” MCC assembles a transaction team of MCC staff, personnel from other U.S. agencies, and consultants to conduct a preliminary assessment of a country’s proposal and reports the team’s findings in an opportunity memo to the MCC investment committee. The MCC investment committee consists of MCC’s CEO, vice presidents, and other senior officials. If the opportunity memo is approved, the transaction team launches a detailed due diligence review. The team assesses the country proposal, reports its findings, and makes recommendations based on its assessment in an investment memo to the MCC investment committee. As of May 2006, MCC was conducting due diligence analyses of seven eligible country proposals. 3. Compact negotiation and MCC Board approval. MCC may enter into compact negotiations with the eligible country before the investment memo is completed. If compact negotiations are successful, MCC staff formally submit the compact for MCC Board approval. Once the board approves the compact, MCC and the eligible country may sign it. As of March 2006, MCC had signed compacts with 8 of the 17 countries determined eligible in fiscal years 2004 and 2005. (See fig. 2.) MCC commits the full amount of the compact funding at signing but obligates and begins to disburse funds to implement projects only after the compact has entered into force. Under the Millennium Challenge Act, compacts may remain in force no longer than 5 years. The compacts stipulate that, with limited exceptions, all funds must be spent during that time. 4. MCC and compact country complete entry-into-force requirements. MCC’s compact with each country identifies the following supplemental agreements that MCC and the country’s accountable entity must complete before the compact can enter into force. 
The disbursement agreement sets out the “conditions precedent” and other requirements for disbursements from MCC and redisbursements to any person or entity. These conditions include performance targets for projects outlined in the compact. The procurement agreement sets forth guidelines for all procurements of goods, works, and services financed with MCC funding. Compact term sheets identify supplemental agreements, which vary by country and include documents such as a governance agreement, fiscal agent agreement, form of implementing entity agreement, and form of bank agreement. After compacts enter into force, MCC may begin the disbursement of funds and countries may begin implementing projects. In the first eight compacts, approximately 53 percent of funding went to transportation and other infrastructure projects; 22 percent went to agriculture and rural development; 13 percent went to other project types; and 12 percent went to program management, monitoring, and evaluation. (See fig. 2.) The length of time from country eligibility selection to compact signature has varied, with proposal development and due diligence generally requiring the most time (see fig. 3). For the six countries whose compacts had entered into force as of the end of May 2006, completing the steps necessary for entry into force after compact signing took approximately 3 to 4 months for Madagascar, Cape Verde, and Honduras; approximately 7 months for Georgia; 2 months for Vanuatu; and about 10½ months for Nicaragua. For the two countries whose compacts had not entered into force as of the end of May 2006, 3 months had elapsed since compact signature for Benin, and 2 months had elapsed for Armenia (see fig. 3). MCC has issued guidance and policies for its compact development process in several stages. Before publishing its initial guidance in May 2004, MCC provided countries with preliminary guidance addressing fiscal accountability and monitoring and evaluation. 
Figure 4 shows the evolution of MCC’s published guidance relative to the end of the due diligence process with the investment memo. MCC undertook a wide range of activities in its due diligence of the Madagascar, Cape Verde, and Honduras proposals, while at the same time developing guidance on key aspects of the countries’ proposals. During due diligence, MCC primarily considered criteria related to the proposals’ consultative process, project coherence, environmental impact, institutional and financial sustainability, and economic analyses. MCC generally approved proposals that were based on a consultative process and returned proposals that lacked adequate consultations; however, MCC did not publish detailed criteria for the consultative process until 1 year after selecting the countries as eligible for MCC assistance. In assessing project coherence, MCC approved projects that were linked to the overall proposal objectives and rejected projects that were not, although its assessments used criteria that it had not yet published in its guidance. Additionally, MCC screened projects for likely environmental impacts and considered factors important for institutional and financial sustainability. Finally, MCC conducted economic analyses to assess the projects’ likely impact on economic growth. However, limitations in assumptions and data may have affected the analyses’ accuracy and led MCC to select projects that would not achieve its goals. Also, a lack of country involvement in the analyses does not reflect the MCC principle of working in partnership with countries and may have limited the countries’ understanding of the process. MCC’s due diligence for Madagascar, Cape Verde, and Honduras assessed whether the countries had consulted with public and private sector and civil society stakeholders during proposal development. 
MCC officials told us that before beginning due diligence for the three countries, they assessed the consultative process in proposal drafts and, on the basis of this review, returned proposals that were not sufficiently founded on a consultative process. For example, according to the officials, MCC did not accept one country’s proposal because, although the country had consulted with stakeholders, its proposal did not reflect the priorities identified during the consultative process. MCC documents indicate that during its due diligence for two of the three countries, MCC obtained the views of government, civil society, and private sector officials on how the governments conducted the consultative process. In addition, MCC assessed any previous experience the country had had with a consultative process. For all three countries, MCC also reviewed factors such as the date, frequency, and locations of the process. MCC’s documentation of these assessments in Madagascar, Cape Verde, and Honduras indicated that key stakeholders had generally agreed on the proposed compact priorities. However, the MCC assessments noted some weaknesses in the Madagascar and Honduras governments’ management of the process: In Madagascar, some groups expressed concern to MCC that the government had provided short notice (less than 10 days) for consultative meetings, and that this might have limited rural groups’ participation. Additionally, the government did not communicate to participants its rationale for accepting or rejecting projects. However, despite these shortcomings, MCC noted “widespread agreement and enthusiasm for the…primary components of proposal” among the business community, local and international NGOs, civil society, and donors. 
In Honduras, MCC found, on the basis of discussions with local civil society organizations and international NGOs, that weaknesses—such as the large size of meetings—had limited effective participation in the consultative process for the poverty reduction strategy, which formed the basis of Honduras’s MCC proposal. MCC’s assessment also indicated that the government had not directly asked the consulted groups to identify obstacles to growth. However, MCC found that these groups concurred with the priorities identified in the country proposal. According to MCC officials, they followed up with the Honduran government to address the weakness noted by MCC in Honduras’s consultative process. As a result, according to MCC documentation, the Honduran government conducted additional consultative sessions with civil society organizations and donors. MCC’s due diligence of the countries also noted requirements for additional consultations during project implementation. For example, in some cases, countries were to undertake consultations with local stakeholders to identify project sites and conduct environmental assessments. Our discussions with representatives of civil society groups and donors in Madagascar and Cape Verde indicated that they generally concurred with the compact proposals. In Madagascar, a representative from a key civil society organization noted weaknesses in the government’s conduct of the consultative process similar to those recorded by MCC. Nevertheless, stakeholders on the Madagascar advisory council, which includes various civil society and other organizations, said that the compact proposal generally accounted for their views, especially in comparison with other donor programs. As a relatively new organization, MCC conducted due diligence reviews while it was developing consultative process guidance: Evolving guidance. 
While developing their proposals, the countries had access to general criteria in MCC’s 2004 guidance; however, in assessing the proposals, MCC applied the more specific criteria contained in its detailed 2005 guidance. As figure 5 shows, the 2005 guidance was issued 1 year after MCC announced the eligibility of Madagascar, Honduras, and Cape Verde. According to MCC’s Guidance for Developing Proposals for MCA Assistance in FY 2004, “…each proposal is expected to reflect the results of an open consultative process, integrating governmental interests with those of the private sector and civil society.” The 2004 guidance required that proposals include a description of the consultative process, such as how the proposal takes into account local-level perspectives of the country’s rural and urban poor, including women, and of private and voluntary organizations and the business community. Additionally, the guidance required the country to list all key participants, such as government and nongovernmental officials, who played a significant role in developing the proposal. MCC’s 2005 Guidance on the Consultative Process more specifically requires eligible country governments to involve their citizens in identifying obstacles to economic growth and developing and prioritizing the development strategies and programs that will be included in the compact proposal. The 2005 guidance further states that an adequate consultative process should be timely, participatory, and meaningful. MCC’s guidance also took into account the country’s experience in using a consultative process to develop other national or poverty reduction strategies. If the compact was built on these consultations, MCC required some additional consultations to provide justification of country priorities in the MCA proposal. Incomplete documentation. 
MCC’s documentation of its due diligence for the three countries presents summary findings, rather than an analysis of the extent to which the countries consulted with the rural and urban poor. For example, MCC’s due diligence documentation indicates that the countries’ governments included women’s groups and rural sector groups in their consultative process; however, the documentation does not indicate the extent to which these groups represented the poor. Additionally, in Madagascar and Honduras, MCC’s documentation does not indicate how MCC assessed the extent to which the consulted groups informed compact proposal priorities. In keeping with MCC’s emphasis on results, due diligence for the three countries also assessed whether proposed projects were linked to one or more of the country’s compact goals, and whether the projects addressed key impediments or constraints to achieving these goals. On the basis of its due diligence assessment, MCC rejected projects that were not linked to key constraints in two of the three countries we reviewed. Specifically, MCC rejected tourism and preschool education projects proposed by Honduras because they were not linked to the impediments to growth that emerged from the consultative process. MCC also rejected projects to construct feeder roads, which connect some watershed areas to the markets, and provide access to electricity in the rural areas of Cape Verde, because MCC’s due diligence did not indicate that these projects addressed key constraints in Cape Verde. In reviewing MCC’s due diligence for the three countries, we found that MCC did not issue guidance stating that proposed projects should be linked to the compact goal until after it had concluded its due diligence assessments. MCC’s 2004 guidelines for proposal development broadly instruct eligible countries to identify priority areas, such as health or education, and their expected goals for each priority area over the term of the proposed compact. 
The guidance also asks the countries to show how these strategic goals are related to the economic growth and poverty reduction of the country. MCC’s November 2005 MCC Compact Assessment and Approval Guidelines more specifically indicates that MCC will assess how the project addresses compact goals. However, MCC issued the November 2005 guidance after completing its due diligence for the three countries. (See fig. 6.) MCC’s due diligence for the three countries included a review of the probable environmental and social impact of projects that met its economic analysis and other criteria. For projects that it deemed likely to cause adverse environmental and social impact, MCC required impact assessments or environmental analyses, including an impact management plan. MCC assigned each project to a category reflecting its likely impact and the level of environmental analysis required. For example, MCC assigned all projects in the Madagascar proposal to category C, because MCC determined that these projects were not likely to have adverse environmental and social impact. In contrast, MCC assigned infrastructure projects in Cape Verde’s and Honduras’s proposals to category A or B. For example, the highway expansion project in Honduras was assigned category A, because it involves the clearing of rights-of-way that will require compensation for more than 200 affected people. The port expansion project in the Cape Verde proposal was also assigned to category A, because it entails dredging and construction in and around an existing port. Additionally, MCC assessed whether environmental impact assessments had been conducted for category A or B projects. For projects lacking environmental impact assessments, MCC conditioned project funding on the completion of such assessments as well as on the development of mitigation plans in consultation with affected groups. For projects that other organizations had assessed for environmental impact, MCC used a U.S. agency or a contractor to evaluate the assessment and determine its adequacy. 
For example, in Honduras, another donor had already conducted the environmental impact assessment for the highway segments proposed for MCC funding. As part of MCC due diligence, the U.S. Army Corps of Engineers reviewed these assessments and made recommendations that were incorporated in the existing assessments. However, in Cape Verde, the MCC contractor conducting due diligence found another organization’s assessment of the port project’s environmental impact to be inadequate. As a result, MCC required a new environmental assessment, along with plans to manage adverse impact, as a precondition for funding the project. MCC has allocated funds for this analysis in the compact budget for Cape Verde. MCC’s 2004 proposal development guidelines do not address projects’ environmental and social impact. In assessing environmental impact, MCC applied criteria from the Millennium Challenge Act of 2003, which prohibits MCC from funding projects that are “likely to cause a significant environmental, health, or safety hazard.” MCC also used criteria laid out in its March 2005 interim Environmental Guidelines, which state that MCC will not fund projects that lack the appropriate screening or analysis for environmental impact. The guidance also states that the country has primary responsibility for conducting and monitoring environmental assessments. (See fig. 7.) In keeping with its emphasis on sustainable progress, MCC’s due diligence examined whether the three countries’ proposed projects could be sustained after their compacts expired. In assessing project sustainability, MCC reviewed each country’s policy and regulatory environment and commitment to reforms and financing of future maintenance costs; it also reviewed expected results from MCC-funded projects. In addition, MCC considered the countries’ institutional capacity to sustain proposed projects as well as other donors’ roles in strengthening countries’ capacity. 
In the case of Madagascar, MCC reviewed policy reforms in land management and financial sectors that would benefit MCC-funded activities. MCC also found that the Madagascar government had limited institutional capacity to achieve the project objectives of increasing land security and promoting financial intermediation to increase rural savings and extension of credit. To build the government’s capacity, MCC budgeted funding for (1) staff recruitment and training at Madagascar’s land management department to support the land security project and (2) finance, management, and production training for rural producers and microfinance institutions to support the financial intermediation project. In addition, MCC considered the role of other donors in strengthening the countries’ institutional capacity. For example, while assessing road projects, MCC considered the World Bank’s road sector initiative, which includes institutional capacity-building in Cape Verde. In Honduras, MCC considered World Bank–funded and Inter-American Development Bank–funded programs intended to develop the transportation ministry’s management capacity and maintenance contracting capacity.

USAID- and MCC-funded business centers are colocated in Antsirabe, Madagascar. The USAID-funded program supports farmers’ access to markets and provides linkage to large businesses. The MCC-funded program will focus on technical assistance to farmers and make them creditworthy. These farmers may seek help from the USAID project later. The USAID project is expected to share its existing client list with, and introduce these clients to, the MCC program. Although the USAID and MCC projects are considered complementary, there is a perception that MCC is replacing USAID. This perception is strengthened by the fact that USAID is losing about one-third of its employees in Madagascar, according to U.S. officials with whom we spoke.

Project impact. MCC also relied on the assumptions used in its analysis of projects’ economic impact to determine the sustainability of agricultural sector projects for all three countries we reviewed. 
For example, MCC expects that as a result of MCC-funded technical assistance or credit to farmers and rural entrepreneurs, recipients will be able to generate enough income to afford these services by paying fees to providers. In cases where sustainability depends on achieving MCC’s projected impact, the soundness of MCC’s economic analysis, discussed in the next section of this report, will also be an important factor. In its sustainability assessments, MCC generally adhered to guidance issued in 2005, rather than to guidance from 2004. MCC’s May 2004 proposal development guidance included a general requirement for a strategy to sustain progress after the compact’s expiration. MCC’s November 2005 MCC Compact Assessment and Approval Guidelines did not require such a strategy, but the guidelines required identification of factors contributing to institutional and financial sustainability for each project. (See fig. 8.) During its due diligence reviews for Madagascar, Cape Verde, and Honduras, MCC analyzed proposed projects’ probable impact on the country’s economic growth and poverty reduction. This analysis was intended both to assess whether these projects would achieve MCC’s goals and to provide a basis for monitoring their progress and evaluating their impact. To predict each project’s impact on economic growth, MCC calculated an economic rate of return (ERR) —that is, the expected annual average return to the country’s firms, individuals, or sectors for each dollar that MCC spends on the project. In calculating projects’ ERRs, MCC used an economic model that includes the following elements (see fig. 9): MCC’s annual expenditures for the project, the project’s annual benefits to the country, predicted net benefits of the project, and the projected ERR. 
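The model elements just listed can be illustrated with a rough sketch: the ERR can be treated as the discount rate at which the predicted net benefits (annual benefits to the country minus MCC's annual expenditures) have zero net present value. The cash-flow figures and the bisection solver below are illustrative assumptions for this sketch only, not MCC's actual model or data.

```python
# Illustrative sketch of an ERR calculation. Cash flows are invented;
# MCC's actual per-project models and data are not reproduced here.

def npv(rate, net_benefits):
    """Net present value of annual net benefits at a given discount rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(net_benefits))

def err(net_benefits, lo=-0.99, hi=10.0, tol=1e-6):
    """Economic rate of return: the discount rate at which the NPV of
    net benefits equals zero, found by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, net_benefits) > 0:
            lo = mid  # NPV still positive: the breakeven rate is higher
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical project: $10 million of MCC expenditure in year 0,
# followed by $3 million in annual benefits for 5 years.
net_benefits = [-10.0, 3.0, 3.0, 3.0, 3.0, 3.0]
rate = err(net_benefits)
print(f"projected ERR: {rate:.1%}")
```

Bisection is adequate here because, for a single initial outlay followed by positive benefits, the NPV declines monotonically as the discount rate rises, so there is exactly one breakeven rate.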
ERRs for the proposed projects in Madagascar, Cape Verde, and Honduras varied considerably, ranging from 116 percent for Madagascar’s land titling project to 10 percent for a watershed management and agriculture support project in Cape Verde; the median ERR for projects in the three countries was 14 percent. However, because MCC analyzed the Madagascar and Honduras proposals before publishing its first set of economic analysis guidelines, the ERRs did not significantly affect those countries’ overall choice of projects. (See fig. 10.) MCC’s finding of a low ERR for part of Cape Verde’s watershed management and agriculture support project resulted in the country’s dropping an irrigation activity on one island. We found limitations in the assumptions and data that MCC used in its analyses as well as in the countries’ involvement in the analyses. These limitations may negatively affect the accuracy of the analyses and the countries’ understanding of the analysis process, respectively. Assumptions and data. Some of the assumptions and data that MCC used in its analyses do not fully reflect the countries’ socioeconomic environment. As a result, MCC cannot be assured that the projects it approved, partly on the basis of these analyses, will achieve the compacts’ goals. For example, in calculating the ERR for the Madagascar land titling project, MCC assumed that local small farmers would use newly titled land as collateral for loans, invest the borrowed funds in agricultural activities, and benefit from the increased income from those activities. We discussed this assumption with focus group participants in Madagascar, including MCA and U.S. government officials, senior Madagascar government officials representing ministries affected by the compact, and bank representatives. 
Our discussions suggested that MCC’s analysis may have been overly optimistic: MCC assumed that small farmers would mortgage 40 percent of their newly titled land, but focus group participants believed this was unlikely in Madagascar’s uncertain market, which does not offer insurance. As a result, MCC may have calculated an unrealistically high ERR for the land titling project. Furthermore, the project may be more likely to benefit farmers with large, secure landholdings and investors from outside the farming community, rather than the local small farmers it was intended to help. Similarly, MCC’s economic model for a part of Madagascar’s finance project, the modernization of the National Savings Institution, may not accurately reflect the institution’s financial condition. MCC’s model for the institution’s modernization uses the institution’s April 2003 net profits (1.4 billion Malagasy francs) as a baseline for estimating the benefits of computerizing the bank. However, according to institution officials, 2003 is not a representative year for the bank, and it experienced unusually large net losses (13 billion Malagasy francs) after an economic crisis in 2002. As a result, MCC may have inaccurately estimated the project’s likely impact on the banking system. Country involvement. In the two countries we visited, country representatives were not closely involved in MCC’s economic analyses of the proposed projects. This constrained both the analysis process’s contribution to enhancing country partnership and stakeholders’ understanding of MCC’s economic analysis—including the data analyzed, the assumptions used, and the expected outcomes. According to country officials in Madagascar and Cape Verde, MCC developed the economic models and selected data with little assistance from country representatives. 
According to the Cape Verde proposal team, they provided some data but did not participate in the actual analysis and did not have a clear understanding of the process and the criteria MCC used to assess proposed projects. We discussed these issues with MCC officials, who told us that the countries’ degree of involvement depended on the capability and willingness of the countries’ proposal development team to actively participate in the analyses. MCC gave Madagascar, Cape Verde, and Honduras authority to propose and develop the implementation structures that they will use to manage compacts, although MCC retains authority over a number of management decisions. To govern their programs, the countries have created management units under the direction of a steering committee, but they have had difficulty in filling key positions. The countries also have established structures for ensuring fiscal accountability and for managing procurements that appear to be effective; however, implementation is still at a very early stage, and some required elements of these structures are not yet in place. Finally, the countries have established frameworks for monitoring and evaluating the performance of MCC projects. However, the frameworks have weaknesses related to the inadequacy of baseline data, linkage of monitoring plans with economic models, methods of addressing uncertainty in achieving stated targets, and the timeliness of research designs for randomized controlled trials. These weaknesses may limit MCC’s ability to track and account for program results. Consistent with MCC guidance, each of the three countries—Madagascar, Cape Verde, and Honduras—proposed its own procurement system, subject to MCC review and approval during due diligence. Although the approved systems have characteristics that we have found typical of effective procurement systems, their effectiveness has not yet been tested by many procurements. 
In addition, some of the staff and procedures needed to implement the systems are not yet in place at MCC headquarters or in compact countries. To capitalize on existing country knowledge and experience, MCC has given the countries flexibility to choose their procurement agents and standards, with the choice subject to MCC approval during due diligence. Madagascar and Cape Verde are using private and public sector procurement agents, respectively, and Honduras is using a combination of public and private sector agents. In contrast to donors such as the World Bank, MCC also allowed the countries to propose their own standards for managing procurements. (See app. IV for details of the countries' procurement agents and standards.) MCC required the countries to adhere to "procurement principles" that include equal access to procurements, competition for awards, and transparency of the process. Madagascar, Cape Verde, and Honduras have used either existing World Bank standards or their own laws to govern MCC procurements. In its procurement agreement with each country, MCC included modifications to the country's selected standards to reconcile them with U.S. law and MCC principles. For example, MCC required that countries not include preferences for domestic suppliers in solicitations paid for with MCC funds. Although MCC's agreements with the three countries give the countries a number of authorities over procurements, MCC retains certain approval rights. MCC's fiscal accountability framework describes procurement as one of its highest-risk areas. MCC's compact, disbursement agreement, and procurement agreement with each of the countries describe the relationship, roles, and authority of MCC, the procurement agent, and the compact country (see app. IV for details). Although these agreements have some common elements, each agreement is unique to the individual country.
To determine when MCC approval of individual actions is appropriate, MCC included review thresholds in the procurement agreements keyed to procurement size and methods. Above cost thresholds, which vary among countries, MCC must approve items such as the procurement method, terms of reference, and selection. Below thresholds, the compact country may conduct procurements in keeping with the procurement plan without additional MCC oversight. These thresholds are a risk management tool that maximizes MCC control for larger transactions but leaves discretion to the compact country for smaller transactions. Although the three countries' procurement systems vary, each has characteristics that we have previously identified as typical of effective international procurement systems. These characteristics are similar to the principles, such as equal access, competition, and transparency, that MCC applies in its review of the systems during due diligence. However, MCA officials in Madagascar and Cape Verde told us that they had completed few procurements that would test the systems in practice. Furthermore, MCC headquarters has not yet finished hiring its procurement staff, and procurement systems in Madagascar and Cape Verde are not yet fully established. Incomplete staffing. As of May 2006, MCC headquarters had hired its senior director of procurement but had not yet hired five director-level procurement staff in its Accountability Department. According to MCC officials, the duties of these vacant positions are currently being performed by six intermittent personal service contractors, and MCC has offered one of these contractors a full-time position. During our site visits in January and February 2006, staff in Madagascar and Cape Verde reported that the time frames for MCC review of procurements had been satisfactory.
However, if MCC staffing does not increase as the countries submit more procurement decisions for approval, MCC may have difficulty in conducting timely reviews of these decisions. Incomplete systems. During our site visits, we found that some elements of the procurement systems documented in MCC's agreements with Madagascar and Cape Verde were not yet in place or had not functioned smoothly. Neither country had yet established the procurement bid protest body required by MCC or put in place automated systems for procurement tracking and management, although both reported plans to do so in 2006. Madagascar also had not established a process for reviewing contractor change order requests. During our visit to Madagascar, a senior MCC official discovered that the country's procurement plan for months 2 through 4 did not reflect current project work plans. As a result, the procurement agent was preparing for procurements that were no longer needed. Members of the Madagascar management team told us that they would establish new procedures to coordinate future work plan changes with the procurement agent. In Cape Verde, the procurement review commission lacked office space and was concerned about being able to handle the number of reviews required of it once procurements begin in earnest. The commission comprises members who are fully employed elsewhere and are not permitted to delegate their work. The members have worked nights and weekends or have negotiated with their supervisors to be released from their other jobs to perform the commission's work. According to MCC, subsequent to our site visit, Cape Verde has taken actions to mitigate the risks to the efficient operation of the commission. Both Madagascar and Cape Verde reported difficulties with an MCC requirement that documents be in both English and the local language.
Cape Verdean procurement review commission members are not required to speak English but are expected to review documents prepared in English. Cape Verde was considering hiring a full-time translator to address this need. In Madagascar, the translation of a French document prepared by the implementer of the finance project and requested by MCC delayed the approval of needed project procurement. Each of the three countries' (Madagascar, Cape Verde, and Honduras) programs includes a monitoring and evaluation framework with plans for data collection, data quality reviews, analysis, and interim and final reporting of results. We found several weaknesses in the monitoring and evaluation frameworks that could affect MCC's ability to track and account for program results. Each of the three countries' programs includes detailed plans for monitoring and evaluating program results. MCC approved Madagascar's and Cape Verde's plans in November 2005 and April 2006, respectively, while Honduras's plan existed in April 2006 as a detailed draft but had not yet been approved. According to MCC officials, Honduras's plan is not final pending the staffing of the monitoring and evaluation director position to ensure country understanding and buy-in. In accordance with MCC guidance, the countries' plans include separate components for monitoring and evaluation: The monitoring component includes, among other things, key indicators that are linked as closely as possible to the variables identified in the economic analysis of the country's proposed projects. These indicators are to be used throughout implementation to assess whether the program is likely to achieve the desired results. The monitoring component also identifies baseline and target values for each indicator and includes plans for periodic performance reports and data quality reviews. MCC guidelines state that monitoring can focus on select indicators to minimize reporting requirements.
The evaluation component identifies, among other things, the methodology that will be used to assess the program’s impact, such as randomized controlled trials, and describes plans for collecting baseline, interim, and final data on program results. Countries’ monitoring and evaluation plans use, to varying degrees, the economic analysis of the proposed projects to identify indicators for monitoring progress toward project objectives, calculate targets for each indicator, and evaluate the achievement of compact goals. The economic relationships specified in the models, such as the relationship between improved infrastructure and farm output, provide a basis for tracking project success. The monitoring framework also includes setting target values for indicators. Figure 13 illustrates indicators at various levels for a rural development project in Honduras. Although the specifics of the countries’ plans vary, the monitoring component of each plan calls for the country’s MCA to submit periodic performance reports and data quality assessments to MCC. Performance reports will include quarterly assessments to alert the countries and MCC to any problems, periodic audits that analyze performance at all compact levels, and annual reports that consolidate the quarterly reports and recommend adjustments. Most performance data will be gathered by project implementers, either country government employees or contractors; the plans allow for contracting with other entities to prepare the reports. In addition, each plan calls for third-party data quality reviews over the course of the compact. For example, in Madagascar, data quality assessments are planned to occur every 6 months during the first year and annually thereafter. In reviewing the frameworks for monitoring and evaluation in the three countries, we identified several challenges that MCC faces in ensuring accountability for results. 
These challenges include (1) ensuring the availability and quality of baseline data, (2) establishing clear links between the economic model and the monitoring and evaluation framework, (3) accounting for the degree of uncertainty in expected outcomes, and (4) using randomized controlled trials in compact countries. Baseline data are essential to measuring the results of MCC-funded compact projects. Although MCC has taken steps to collect baseline data for monitoring and evaluation, problems with data availability and quality may lead to challenges in measuring the progress and impact of the countries' projects over time. MCC officials told us that they worked with their country counterparts to set up a Management Information System that can meet the requirements for collecting performance data. In addition, MCC evaluated the technical capabilities of the country staff and the information system the country proposes to use for data management purposes. Finally, MCC budgeted funding for surveys in Madagascar and Cape Verde to collect baseline data when it was not available. However, we found that some of the baseline data in the countries' monitoring plans were not complete, and that some of the data MCC collected were not reliable. Baseline data availability. In some instances, the countries' monitoring and evaluation plans lack complete baseline data against which to measure progress. For example, two activity and final project indicators in Madagascar's plan—"volume of production covered by warehouse receipts in zones" and "volume of microfinance institution lending in the zones"—currently lack baselines because the intervention zones have not yet been selected. Moreover, although the collection of performance data is closely linked to project implementation, Madagascar's plan contains no intermediate outcome indicators or target values, making it difficult to track project progress effectively. (See fig.
13 for an example of the role of intermediate outcome indicators in the monitoring structure of Honduras.) Baseline data quality. MCC may face challenges in ensuring the quality of the baseline data that it uses to monitor and evaluate program impact and, as a result, may have difficulty in accurately measuring the impact of compact projects. MCC officials told us that it has been difficult to obtain accurate and reliable baseline data against which to measure program results. In some countries, MCC has funded surveys to obtain the needed baseline data. However, even with the additional resources provided by MCC, obtaining baseline data has been a challenge. For example, we found significant data quality problems associated with one of three surveys that MCC funded to collect baseline data in Madagascar. Our interviews with Madagascar and USAID officials who oversaw the survey revealed that the survey results, which were used to estimate average land values, are flawed in that they do not reflect recent significant changes in Madagascar’s currency. Madagascar’s compact goal is to increase household income, as measured by the percentage of increase in land values. Because of the survey error, the land value estimates may not be sufficiently reliable to evaluate the project impact and the compact as a whole. Linking the indicators used to monitor and evaluate progress to the data and assumptions used in MCC’s due diligence economic analyses will also present a challenge. In reviewing the draft plan for Honduras, we found consistent linkages between the indicators for monitoring and evaluation and the variables and assumptions used in the economic model. However, in the plans for Madagascar and Cape Verde—the first two plans that MCC approved—we found instances where MCC did not sufficiently link the monitoring plans to the economic models, which may hamper its ability to effectively measure project results. 
For example: After signing its compact with Cape Verde, MCC changed the interim targets for seven indicators. In two cases, neither MCC nor Cape Verde was able to identify the methodology used to select the indicators in the monitoring and evaluation plan. According to MCC officials, they decided that the assumptions in the economic analysis were a poor basis for constructing the monitoring indicators and, therefore, chose other indicators and estimated the targets. The inability to identify the methodology, in conjunction with updated baselines and revised work plans in the country, resulted in MCC's and Cape Verde's agreeing to reduce the interim targets that had been established. MCC's economic analysis for Madagascar's Land Tenure Project, which is approximately one-third of the compact budget, did not identify the expected benefits from the separate project activities. Therefore, we could not track the linkage between the activities in the model and those in the monitoring and evaluation plan. MCC's approved monitoring and evaluation plan does not include tracking the results of two finance project activities—modernizing the National Savings Institution and opening bank branches—although they were included in economic analysis calculations during due diligence. According to an MCC official, these two finance project activities will be tracked at a higher level of aggregation—the finance project level—and monitored by tracking the number and value of new accounts. However, this approach may not adequately capture the outputs and benefits from the institution's modernization and could potentially confound the effect of one activity (modernizing the institution) with that of other activities.
For example, while an increase in the number and value of new accounts could result from the two finance project activities, it could also result from an overall increase in savings if customers invest in government savings bonds issued shortly before the start of the MCC compact. Although the countries' monitoring and evaluation plans acknowledge the uncertainty of achieving indicator target values, MCC project monitoring does not adequately address (1) the effect of potential variations in uncertainty on the range of acceptable target values or (2) the plausibility of target values. As a result, some targets specified in the countries' monitoring plans may not be achieved. MCC disbursement agreements include as a condition precedent that, if an indicator value observed during compact implementation does not fall within 10 percent of the agreed-on target values, MCC may withhold disbursements. MCC applies this 10 percent margin to all projects, regardless of type (e.g., agriculture or infrastructure). However, our analysis suggests that several factors could cause indicator values for many projects to fall outside the 10 percent range. External factors. Uncertainty associated with external factors varies by country and project. For example, according to previous GAO work, external factors that could affect project implementation might include political instability, the lack of commitment of political leaders to necessary reforms, the magnitude of assistance from other bilateral and multilateral donors, weather conditions that affect crop yields, and the instability of international markets. These factors could cause indicator values to fall outside the 10 percent range used across all countries and projects. Time factor. The 10 percent range also does not account for the increase in the uncertainty of targets over time—for example, target values for year 5 of a compact are likely to be less precise than those for year 1.
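The fixed 10 percent margin described above, and an alternative band that widens for later compact years, can be sketched as follows. The widening schedule is purely illustrative (the function names and the 5-percentage-point-per-year growth rate are our assumptions, not MCC policy).

```python
# Sketch of the fixed +/-10 percent disbursement test described in the
# report; the widening band is our illustration, not an MCC rule.

def within_fixed_band(observed, target, margin=0.10):
    """MCC-style test: observed value within 10 percent of the target."""
    return abs(observed - target) <= margin * abs(target)

def within_widening_band(observed, target, year, base_margin=0.10,
                         growth_per_year=0.05):
    """Hypothetical alternative: allow the acceptable range to widen in
    later compact years, when targets are less precise."""
    margin = base_margin + growth_per_year * (year - 1)
    return abs(observed - target) <= margin * abs(target)

# A year-5 result 20 percent below target fails the fixed test but
# passes a band that has widened to +/-30 percent by year 5:
print(within_fixed_band(80, 100))            # prints False
print(within_widening_band(80, 100, year=5))  # prints True
```

The contrast illustrates the report's point: a single fixed range treats a year-1 agricultural indicator and a year-5 infrastructure indicator as equally predictable, which the external and time factors above call into question.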
Therefore, to the extent that MCC bases its disbursement decisions on results falling within a common range, it may not fully account for variations in uncertainty across projects and over time. According to MCC guidelines, the economic analyses and monitoring and evaluation plans should, as much as possible, be clearly linked. However, limitations in (1) the economic analyses due to problems with data quality, assumptions, and lack of country involvement and (2) the consistency between the economic analyses and the monitoring and evaluation plans constrain MCC's ability to set plausible targets. If targets are overly optimistic, countries may fail to reach them, and MCC would not be justified in halting disbursements when the failures stem from unattainable targets. Conversely, setting too conservative a target may not prompt the country to fully utilize MCC resources. A lack of plausible targets may lead to MCC's making ad hoc decisions regarding the consequences of missing targets and applying judgment subjectively and inconsistently in setting or modifying targets. MCC officials told us that they would use a missed target as a cue to discuss with the country's MCA its reasons for missing the target, and, on the basis of those discussions, they would determine whether to use their authority to withhold funding. According to MCC officials, senior management approval would be needed to significantly modify targets; however, MCC currently has no policy or documentation that defines a "significant modification" of targets, especially for targets used as conditions precedent to disbursements. MCC has retained five research organizations to help the countries evaluate program impact using, as appropriate, randomized controlled trials, but MCC's involvement of these organizations after project implementation begins may limit their ability to evaluate impact accurately.
These organizations’ scope of work may include training MCC and compact country staff; designing the trials and data collection; and proposing appropriate methodologies for, and analyzing results from, impact evaluations. According to MCC monitoring and evaluation officials, MCC has begun designing impact evaluations by identifying those program components that can and cannot be evaluated using randomized controlled trials, which MCC has indicated are its preferred method of impact evaluation. According to MCC officials, MCC considers evaluations using randomized controlled trials as “rigorous” and evaluations using other methodologies as “standard.” MCC has hired an independent consultant experienced in impact evaluations to work with compact countries to assess the appropriateness of using randomized trials to evaluate MCC’s projects. When these assessments are completed, the five research organizations will be invited to compete to conduct randomized trials after compact implementation begins. However, at that point, the organizations will not have had an opportunity to assess the design of the countries’ evaluation strategy, including the adequacy and reliability of the baseline data. Without the involvement of these organizations before implementation of the relevant project(s) begins, MCC may not be able to ensure that they have the necessary data and have established appropriate research designs for their work. MCC officials told us that they thus far had not involved the five organizations because rigorous evaluations were turning out not to be feasible in some cases, or because the tasks were not large enough to warrant the use of the five research organizations. MCC continues to mature and evolve as an institution, taking on the ambitious task of creating new, country-managed organizations while developing processes to oversee what are expected to be relatively large amounts of foreign assistance. 
Toward that end, MCC has taken positive steps with regard to establishing policies and procedures for MCA organizations. However, it has taken time to complete the numerous agreements necessary for compacts to enter into force. This could continue to present challenges, given that MCC is working simultaneously with a number of nations to develop and implement compacts. As MCC moves forward, partnering with countries to develop well-founded economic assumptions will be crucial to establishing a foundation for the work of MCC and its partners. Furthermore, holding countries accountable for results requires, to the extent practical and cost-effective: collecting reliable and accurate baseline data, linking economic analyses to monitoring plans, addressing the uncertainty associated with program results, and ensuring the timely development of the research design for randomized controlled trials. Because of the central role of reliable economic analyses and the importance of partnering with countries in achieving MCC goals and ensuring accountability for MCC programs, we recommend that the Chief Executive Officer of MCC take the following two steps: Ensure that MCC officials, in partnership with country representatives, perform economic analyses that more fully reflect the countries’ socioeconomic environment and are better understood by country public and private sector representatives. 
To the extent practical and cost-effective, improve MCC's monitoring and evaluation frameworks by obtaining more accurate and reliable baseline data needed to permit tracking progress during compact implementation; ensuring a clear linkage between MCC's economic analyses and monitoring and evaluation frameworks; developing policies, procedures, and criteria for establishing targets and for adjusting those targets if unforeseen events affect outcomes; and taking steps to ensure the timely development of the needed research design for randomized controlled trials, if they are undertaken, prior to project implementation. We received written comments on a draft of this report from MCC and the Department of State. MCC generally agreed with our findings, conclusions, and recommendations. MCC noted that our discussion of the evolving guidance provided to eligible countries in 2004 and 2005 was a result of the complex process of engaging with eligible countries while simultaneously developing policies and procedures. MCC stated that this criticism should not be valid beyond the initial instances covered in this report. We recognize that MCC was simultaneously addressing a number of issues during this period, but we felt it was important to discuss the evolving nature of guidance to provide a balanced perspective regarding the process eligible countries had to follow to sign compacts. MCC also commented (1) that our characterization of data quality issues in Madagascar cited only a single survey and (2) that it differed with us on the appropriate level of aggregation in linking economic models and monitoring and evaluation plans. We note that there was one other instance of poor data quality in Madagascar, but that we focused on the Agricultural Productivity Survey because of the importance of accurately tracking land values to monitor results.
We agree that disaggregation may not always be feasible, but note that aggregation poses some challenges that could limit the effectiveness of monitoring and evaluation. We reprinted MCC's comments, with our responses, in appendix V. We also incorporated technical comments from MCC in our report where appropriate. State commented that some of our findings reflect minor or transitory problems and provided specific observations regarding MCC's evolving guidance, the assumptions used in MCC's economic models, country participation, staffing delays, fiscal accountability structures, and the use of randomized controlled trials. State noted that MCC's guidance could be expected to evolve, given the newness of the organization, and that informal guidance from MCC was always available to eligible countries. We agree that guidance could be expected to evolve, but we sought to provide a balanced perspective by noting instances where MCC's verbal guidance may not have been sufficient to assist countries in submitting proposals that met MCC's criteria. State questioned the findings from our Madagascar focus groups and our finding that MCC conducted economic analyses with limited country involvement. Our focus groups resulted in discussions of the assumptions used in the Madagascar economic analysis with a broad range of country stakeholders representing U.S. and Malagasy agencies and organizations involved in implementing the compact. Furthermore, MCC agreed in its comments that, in some cases, the level of country engagement on economic analysis could be improved. MCC outlined specific steps that it had taken to increase country involvement in economic analyses. Regarding staffing delays and fiscal accountability structures, State commented that we offered no suggestions for what MCC could have done differently. We added material to the report to elaborate on the steps MCC has taken to reduce delays in staffing key positions.
In regard to fiscal accountability, we agree that countries will differ in the maturity of their internal controls. However, evaluating maturity is key to properly assessing risk and establishing effective oversight mechanisms. Finally, State disagreed with what it terms our "reliance" on randomized controlled trials to measure success. This comment misconstrues our findings. We did not rely on or advocate this methodology, but rather commented on MCC's use of randomized controlled trials as its preferred method of impact evaluation. We have reprinted State's comments, with our responses, in appendix VI. We are sending copies of this report to interested congressional committees as well as the Secretary of State, the Secretary of the Treasury, the CEO of MCC, and the Administrator of USAID. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Gootnick at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. The Millennium Challenge Corporation (MCC) has taken a number of steps to address our April 2005 recommendations regarding its strategic planning, internal controls, and human capital and governance policies, although some aspects of its organizational structure are not yet complete. MCC has prepared a strategic plan for fiscal years 2006 through 2011 and an annual performance plan for fiscal year 2006. MCC also has strengthened its internal controls and taken steps to implement some fiscal accountability mechanisms, established an audit committee within its governing board, and completed several required audits and reviews.
However, MCC has not documented all financial control activities and continues to face risks from the poor interfacing of its systems with those of the Department of the Interior's National Business Center (NBC). MCC has reassessed its staffing model, developed a plan for recruitment, and implemented an improved performance management system, but it does not systematically track the use of staff resources to verify its human capital model. Finally, MCC has approved a corporate governance policy and taken steps to improve board involvement in planning, management, and communication, but it has not yet fully addressed risk management for the corporation. Consistent with our recommendation to enhance corporate accountability, MCC completed a strategic plan, approved by the Office of Management and Budget (OMB) in November 2005. In April 2006, MCC completed an annual performance plan that provides goals and benchmarks for assessing its performance in fiscal year 2006. This annual performance plan will enable MCC to report in the future on its progress in meeting its goals. As part of the annual performance plan process, MCC also has developed goals and benchmarks for its individual departments that support the attainment of the corporate goals identified in the strategic plan. (See table 4.) In response to our April 2005 recommendations, MCC has made significant progress in establishing internal controls over program and administrative operations at both the MCC headquarters and compact levels. MCC has made progress in each of the five components of internal controls discussed in our April 2005 testimony: (1) control environment, (2) risk assessment, (3) control activities, (4) monitoring, and (5) reporting. Internal control environment. MCC has initiated several measures to establish a positive internal control environment, including documenting its organizational structure for the Administration and Finance and Accountability areas.
The Fiscal Accountability area, which is a component of the Accountability structure, currently consists of a managing director and several directors; however, other positions to support the fiscal oversight of MCC compacts have yet to be filled. A formal ethics program has also been established for MCC headquarters. Risk assessment. MCC is developing a process for assessing risks facing the corporation and its programs. To this end, MCC has hired a third-party consulting firm to support the implementation of processes, based on the criteria in OMB Circular A-123, Management's Responsibility for Internal Control (December 2004), to address risks associated with domestic and foreign operations. Control activities. MCC has instituted several control activities to reduce risk. MCC has submitted its Strategic Plan under the Government Performance and Results Act (GPRA) of 1993 and has completed required audits and reviews, such as those required under the Federal Managers' Financial Integrity Act (FMFIA) of 1982 and the Federal Information Security Management Act (FISMA) of 2002. MCC is in the process of addressing the various material weaknesses, reportable conditions, findings, and recommendations identified in the audits. For example, MCC continues to address the formal documentation of control activities for the financial reporting process at MCC headquarters. MCC also contracted with a third-party service provider, NBC, to maintain its accounting system, and the recommended Statement on Auditing Standards No. 70 (SAS 70) review was performed; however, MCC is still addressing issues reported as a result of the review, such as manual processes that present inherent risks to accounting systems and related processes. For example, MCC forwards daily to NBC a package containing source documentation that is used by NBC to record transactions to the general ledger.
Similarly, information on MCC’s travel expenses is prepared at MCC headquarters and communicated daily to NBC through manual processes. MCC is aware of the need to address these issues and is working with NBC to mitigate or eliminate the manual processes. Monitoring. MCC has taken steps to ensure ongoing monitoring and periodic testing of control activities. MCC’s investment committee performs monitoring and testing functions, operating as an integral part of MCC’s internal control program by overseeing the program units’ compliance with both the procedural and substantive elements required by its internal processes. Also, MCC conducted its first comprehensive survey of internal controls, performed by outside consultants, in conjunction with its annual audit. In addition, MCC has formed formal review panels to monitor progress in addressing findings from internal and external reviews. For example, consistent with OMB guidance, MCC formed an FMFIA Management Review Panel to assess the results of the internal control survey along with the findings of MCC’s independent financial auditors. Similarly, MCC implemented a specific procedure to address recommendations from reviews and audits performed by its Inspector General (IG). Reporting. MCC has made progress in establishing a process for assessing and reporting on the operating effectiveness of its internal controls. MCC established a formal board-level audit committee whose responsibilities include overseeing (1) financial controls, (2) the integrity of the reporting process, and (3) the performance of the independent audit process. In addition, MCC’s FMFIA Management Review Panel assessed the results of the internal control survey, along with the findings of MCC’s independent financial auditors. The panel identified four material weaknesses and actions that MCC will be taking in future months to resolve them, which the acting Chief Executive Officer certified on November 7, 2005.
MCC officials told us that the development of new internal control policies and procedures and the revision of those already in place is a continuing process as MCC continues to mature as an organization. Table 5 summarizes MCC’s progress in addressing our April 2005 recommendations. MCC has taken steps and is continuing to further develop its human capital systems by (1) assessing its staffing needs; (2) improving its recruitment, development, and retention systems; and (3) implementing a performance management system linking compensation to individual contributions toward corporate goals. However, despite having plans to increase its staff by an additional 38 percent between May and September 2006, MCC does not systematically assess its staffing needs, has not developed a human capital plan, and has not yet fully implemented its improved performance management system, as follows: Staffing. Although MCC has completed an assessment of its human capital needs since our April 2005 recommendation, it does not systematically track the use of staff time on an ongoing basis. MCC’s updated assessment of its human capital needs shows that it plans to increase its staffing from 218 staff, as of May 2006, to approximately 300 staff, as of September 2006. MCC also has created an organization chart that includes the specific approved positions for each department under the new 300-person staffing model, and it is hiring for many of these positions using limited-term appointments to provide greater flexibility in filling future needs. According to MCC officials, MCC made its case to OMB for increasing staffing to 300 persons on the basis of an analysis of MCC staff members’ recollections of the amount of time they spent developing the Georgia compact, which was thought to have been the most complex compact development process to date. However, MCC officials felt that this analysis may not have fully captured the amount of time spent by some departments in developing the compact.
Retrospective analysis was necessary because MCC does not track employees’ time on mission-related projects on an ongoing basis. Without such data, MCC management is not able to systematically assess the staffing requirements needed to carry out MCC’s mission and consistently align its human capital with its changing needs. Recruitment and retention. MCC has identified priorities and committed resources for recruitment and is developing a human capital plan to address retention and training. To support its effort to hire approximately 82 additional staff between May and September 2006, MCC has retained an outside consultant firm to work on-site and help with recruitment and has identified positions as first- or second-tier hiring priorities. MCC officials also told us that they are developing an overall human capital plan that will include planned activities and a time frame for identifying critical skills and competencies for MCC’s key positions. The officials stated that the human capital plan also will include a strategy for staff retention and will address staff training. Currently, MCC has developed procedures for providing employees with outside training. MCC intends to develop a comprehensive training plan following the completion of the human capital plan, a draft of which it expects to circulate to MCC senior staff for their review and comment by September 30, 2006. Performance management. In keeping with our recommendation, MCC has established a performance-based compensation framework. MCC provided us with documentation of its employee ratings process, showing that employee expectations and performance reviews were keyed to organizational goals. However, according to MCC officials, MCC did not incorporate the departmental performance plans for the year into the performance framework for 2006 until March 2006, as the annual performance plan neared completion. 
MCC is shifting from a calendar year to a fiscal year performance evaluation schedule to better align employee compensation with its annual corporate goals. MCC anticipates that its strategic plan, annual performance plan, department plans, and individual performance goals will be fully synchronized beginning in fiscal year 2007. (See table 6.) The MCC Board of Directors (MCC Board) has taken steps to define the scope of its corporate governance and oversight. The MCC Board has approved a corporate governance policy developed by MCC with the involvement of staff from MCC Board agencies. According to MCC officials, the policy incorporates guidance on governance matters provided by board members at their previous meetings, and the board participated in formulating MCC’s strategic plan before approving it in December 2005. The board has established a board-level audit committee and a charter for that committee. According to MCC officials, to address risk, MCC is recruiting for the position of risk specialist and has used a contractor to support MCC risk analysis. Finally, to improve communication with stakeholders in eligible countries, MCC has published and distributed updated guidelines for compact development and eligibility. MCC also has developed a series of open forums where input is sought from groups with an interest in MCC. (See table 7.) At the request of the Senate Committee on Foreign Relations, we examined the structures and procedures MCC has developed in consultation with compact countries to manage compacts. Specifically, our work focused on (1) the key areas that MCC examined in its due diligence assessments of proposals for Madagascar, Cape Verde, and Honduras, and the criteria that MCC used in these assessments, and (2) the form and adequacy of the implementation structures that MCC and compact countries have put in place for governance, procurement, fiscal accountability, and monitoring and evaluation. 
In addition, we reviewed MCC’s progress in responding to our April 2005 recommendations on its corporate management and accountability structures (see app. I). To accomplish our objectives, we reviewed MCC’s documentation of its processes and agreements, supplemented by interviews with MCC officials. We focused our review for objectives 1 and 2 primarily on the first three countries with signed compacts—Madagascar, Cape Verde, and Honduras. These countries’ compacts were the first to enter into force. To further our analysis for objectives 1 and 2, we also visited Cape Verde and Madagascar in January and February 2006. We selected Cape Verde and Madagascar for our site visits because they had advanced further than Honduras in filling key positions and beginning compact implementation. While in Cape Verde and Madagascar, we interviewed a number of MCC, Millennium Challenge Account (MCA), and government officials and visited project sites. To identify MCC’s evaluation criteria and process for evaluating eligible country proposals in due diligence, we reviewed MCC guidance and the record of MCC analysis contained in (1) MCC’s “due diligence books,” which are its internal records of how it assessed proposals submitted by Madagascar, Cape Verde, and Honduras, and (2) investment memos, which are MCC’s analyses based on due diligence and internal recommendations to its investment committee. These documents are restricted from public dissemination due to their sensitive nature, but MCC made them available to us for analysis. We have coordinated with MCC on describing the information from these books in general terms without disclosing sensitive information. We used MCC’s definition of the due diligence process as beginning with MCC’s opportunity memo and ending with the acceptance of the investment memo by MCC’s investment committee. 
Our review, therefore, may not capture some changes and decisions made by MCC or eligible countries during proposal development and compact negotiations. To evaluate MCC’s assessments of proposals’ consultative process, project coherence, environmental and social impact, and institutional and financial sustainability, we relied primarily on MCC’s data and analysis contained in the due diligence books and, to some extent, in the investment memos. We compared MCC’s analysis in these documents with criteria outlined in MCC’s guidance. We were able to perform only limited independent verification of the use and adequacy of these criteria during our site visits. With regard to its economic analyses, MCC also made available to us the spreadsheet models it used to develop the economic rate of return calculations that formed the basis for its evaluations of the suitability of country-proposed projects for MCC funding. We independently analyzed these spreadsheets and validated their logic and conclusions on the basis of a review of economic literature and practices. In addition, in Madagascar, we conducted a series of focus groups with country officials to assess the data and logic used by MCC in developing their economic analysis. To assess MCC’s compact implementation structures, we reviewed the compacts with Madagascar, Cape Verde, and Honduras and the supplemental agreements required for those compacts to enter into force. We supplemented this review with our site visits to Madagascar and Cape Verde. In all cases, our ability to analyze the adequacy of these structures was limited by their relative newness and limited use in actual implementation. We addressed the following four areas of MCC’s implementation structures: To determine the form of governance structures and key positions in these three countries, we reviewed the requirements of MCC compacts and supplemental agreements. 
We determined the progress of the country organizations in Madagascar, Cape Verde, and Honduras in filling these positions and establishing these structures by analyzing MCC’s reported staffing and status information. We independently assessed MCC’s progress in our site visits to Madagascar and Cape Verde. We discussed factors affecting the filling of these positions through discussions with MCC and compact country officials. To assess the adequacy of the countries’ fiscal accountability structures, we reviewed MCC’s overall fiscal accountability framework and the operations in place in Madagascar and Cape Verde. We assessed the adequacy of these structures according to the criteria contained in GAO’s Standards for Internal Control in the Federal Government. We assessed MCC’s and the countries’ implementation of these structures by using criteria in the Internal Controls Capability Maturity Continuum developed by the independent risk consulting company, Protiviti, Inc. We independently verified the existence of the structures described in the plan and discussed its strengths and weaknesses during the site visits. To assess MCC’s procurement structures, we reviewed the MCC fiscal accountability framework, and the implementing procurement documents for the first three compact countries. We then assessed the adequacy of MCC’s framework, using criteria identified in previous GAO reports on international procurement. To determine the status and factors affecting the implementation of this framework in Cape Verde and Madagascar, we interviewed compact country officials and obtained documentation of procurement procedures. To determine the form of monitoring and evaluation structures in the three countries with entry into force, we reviewed the requirements of MCC compacts and supplemental agreements. 
We assessed the progress of the country organizations in Madagascar, Cape Verde, and Honduras in establishing these structures by analyzing the staffing and status information provided to us by MCC. We independently assessed MCC’s progress in our site visits to Madagascar and Cape Verde. Additionally, we reviewed the scope of work of the independent U.S. evaluation contractors retained by MCC and closely reviewed the monitoring and evaluation plan for Madagascar—the only plan approved by MCC prior to April 2006. We also reviewed the plan for Cape Verde and the draft plan for Honduras. We assessed the adequacy of the Madagascar plan against the criteria of data quality and consistency with the economic model and logic identified in MCC’s due diligence review of projects. We also applied general principles of economic logic, such as the treatment of uncertainty in data, to assess how uncertainty was incorporated into MCC’s monitoring and evaluation framework. To review MCC’s progress in responding to GAO’s April 2005 recommendations (see app. I), we examined MCC documents, such as its strategic plan, planning documents, policies, procedures, and human capital documents. In December 2005, MCC provided us with a letter outlining the steps that the corporation had taken in response to our recommendations. Using this as a basis for discussion, we held additional meetings with MCC officials and received additional documentation of MCC’s responses. We also reviewed the findings of the MCC IG analysis of the functions of the corporation and met with the IG to determine the steps that MCC had taken in response to IG findings related to our recommendations. We conducted our review from June 2005 through May 2006 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Millennium Challenge Corporation letter dated July 7, 2006. 1.
MCC stated that “MCC's mandate to engage directly with eligible countries shortly after its creation did not allow for the possibility of developing and vetting policies and procedures in advance of such engagement.” We recognize that MCC was simultaneously addressing a number of issues during its initial years of operation—selecting eligible countries, setting up MCC as an organization, and developing guidance and policies while also working with countries to finalize compacts. In the context of MCC as an evolving organization, we felt it was important to discuss the evolving nature of guidance to provide a balanced perspective regarding the process that eligible countries had to follow to sign compacts. 2. MCC stated that our concern regarding baseline data quality stems from one survey in Madagascar. However, we found issues with data reliability in Madagascar with both the Agricultural Productivity Survey and Household Survey. In the second survey, the error was smaller; therefore, we focused on the Agricultural Productivity Survey to illustrate the more significant example. The measurement of land values is very important for Madagascar's monitoring and evaluation plan. Both the compact-level goal of increasing household income and the program objective of increasing investment in rural Madagascar are measured in terms of land values. In each of the zones, the expected increase in household income is estimated at 5 percent of the average land value, and the expected increase in investment is estimated at 27 percent of the average land value. With a large error in baseline data, it will be difficult to accurately track progress toward the compact-level goal and program objective. 3. MCC noted that not every benefit can be isolated and measured during implementation, and that disaggregating may not be feasible. 
We recognize that not all outcomes of the economic analysis can be directly tracked with indicators at different levels of monitoring or for impact evaluation. There are trade-offs between cost and level of detail. However, as we note in this report, aggregation poses some challenges that could limit the effectiveness of monitoring and evaluation. The following are GAO’s comments on the Department of State letter dated July 10, 2006. 1. In referring to our discussion about MCC’s guidance on project coherence, the State Department commented that it would be erroneous to indicate that countries were wholly unaware of the need to design their compact programs to target compact goals and that informal guidance from MCC could have been received at any time. We agree that countries could have requested informal guidance from MCC. According to MCC, it provided verbal guidance to countries regarding the need to link projects to compact goals. However, MCC rejected tourism and preschool education projects proposed by Honduras because they were not linked to the impediments to growth that emerged from the consultative process. MCC also rejected projects in Cape Verde because MCC’s due diligence did not indicate that these projects addressed key constraints in Cape Verde. Given that these two countries submitted projects that did not meet project coherence criteria, MCC’s verbal guidance may not have been sufficient to direct countries to submit proposals meeting these criteria. We recognize that MCC was simultaneously addressing a number of issues during its initial years of operation—selecting eligible countries, setting up MCC as an organization, and developing guidance and policies while also working with countries to finalize compacts. In the context of MCC as an evolving organization, we felt it was important to discuss the evolving nature of guidance to provide a balanced perspective regarding the process that eligible countries had to follow to sign compacts. 2.
State noted that we did not detail the methodology and composition of our Madagascar focus groups. We summarized the composition of the focus groups in the letter of this report. We met with these 29 individuals, including representatives of MCA-Madagascar and Madagascar ministries, during the course of our week in Madagascar and reviewed with them the assumptions used in MCC’s economic model for its projects. As previously discussed in this report, we found that these officials were not closely involved in MCC’s economic analysis. According to MCC and Madagascar officials, an MCC economist met with four Madagascar representatives in Paris, France, during compact negotiations and reviewed the model with them at that time. (MCC officials did travel to Madagascar during due diligence to assess proposed projects.) When we discussed this with MCC officials, they stated that not traveling to Madagascar to develop the model was a mistake, and one that has not been repeated. 3. State commented that we do not rebut MCC’s point that countries’ degree of involvement depended on their capability and willingness to participate. (See comment 2.) In its letter, which is reprinted in appendix V, MCC agreed that in some cases, the level of country engagement on economic analysis could be improved. MCC also now requires that countries engage an economist on the core country team, provides examples of the economic rate of return analysis, and has MCC economists make early technical guidance trips to work with the core country team and the core country team economist. These procedures were not in place when MCC conducted the economic analysis for Madagascar, Cape Verde, or Honduras. By not thoroughly involving the countries in this analysis, MCC risks not meeting its stated principle of focusing on results. In MCC’s results-based framework, the economic model is the basis for determining performance targets against which funding is conditioned.
MCA-Madagascar may find that the targets are unrealistic if the targets have not been thoroughly discussed with, and understood by, Madagascar officials. 4. State noted that the time for the in-country MCA organizations to hire staff compares favorably with general government hiring. Management unit officials are hired by a compact country’s MCA program steering committee, not the U.S. government. As such, it is not “general government hiring” and should not be compared with it. 5. State correctly observed that we did not offer MCC a recommended alternative to its taking the time to find good candidates for positions. We did not make a recommendation for addressing this issue because MCC has adopted a policy implementing the authority given it by section 609(g) of the Millennium Challenge Act of 2003 to make grants to facilitate compact development and implementation. MCC’s current policy includes a provision under which, if certain conditions are met, MCC may fund an eligible country’s request for “management support payments” for salaries, rent, and equipment for the country’s team prior to compact signature. We have added a discussion of this policy in the letter of this report to clarify this point. 6. In referring to our discussion about the different maturity of internal control between Madagascar and Cape Verde, State pointed out that differences in the maturity of internal control between the countries are to be expected, given that some countries have historically more well-developed systems. We agree that different countries will differ in their maturity of internal control due to historical differences in the underlying systems and accountability infrastructures. The current maturity of internal control in each country, however, is a key consideration in assessing risk and establishing effective oversight mechanisms to deal with the unique risks in the key financial processes and activities of each country. 7.
State referred to “GAO’s reliance on project evaluation criteria that make use of randomized controlled trials to measure success.” This comment misconstrues our findings on the use of randomized controlled trials. We do not rely on these criteria. We are commenting on MCC’s use of randomized controlled trials, which MCC has indicated are its preferred method of impact evaluation, and on the potential challenges to the use of this approach. State’s comments regarding cost-benefit analysis would be more appropriately addressed to MCC. 8. State correctly noted that we did not capture the change to one indicator for fiscal year 2006. We have updated the list of MCC indicators in footnote 8. 9. State suggested rewording MCC’s expected results to “MCC expects to permanently raise incomes.…” We did not use the word “permanently” because in the three compacts that we focused on for this report, it was not used to describe MCC’s expected results at the country level. In addition to the person named above, Phil Herr (Assistant Director), Claude Adrien, Cara Bauer, Gergana Danailova-Trainor, Jeanette Franzel, Keith Kronin, Reid Lowe, Norma Samuel, Mona Sehgal, and Michael Simon made key contributions to this report. Also, Tim DiNapoli, David Dornisch, Etana Finkler, Ernie Jackson, Debra Johnson, Bruce Kutnick, Janice Latimer, Charlotte Moore, and Celia Thomas provided technical assistance.
In January 2004, Congress established the Millennium Challenge Corporation (MCC) to administer the Millennium Challenge Account. MCC's mission is to reduce poverty by supporting sustainable, transformative economic growth in developing countries that create and maintain sound policy environments. MCC has received more than $4.2 billion in appropriations, and, as of May 2006, it had disbursed $22.4 million to four countries whose signed MCC compacts have entered into force. For the first three countries with compact entry into force--Madagascar, Cape Verde, and Honduras--GAO was requested to examine (1) key aspects that MCC reviewed, and the criteria it used, in its due diligence assessments; and (2) the structures that have been established for implementing the compacts. MCC undertook a wide range of activities in its due diligence, including five key aspects of the Madagascar, Cape Verde, and Honduras proposals: (1) countries' consultation with local groups in developing compact proposals, (2) projects' coherence with compact goals, (3) environmental and social impacts, (4) institutional and financial sustainability, and (5) impact on economic growth and poverty reduction. MCC based its assessments on an evolving set of criteria: early, general guidance to the countries followed by later, more specific guidance. MCC's analyses of the projects' economic impact were limited in that some of the assumptions and data used may not reflect country conditions. As a result, the projects selected on the basis of the analyses may not achieve compact goals. In the two countries we visited, Madagascar and Cape Verde, MCC conducted the analyses with limited country participation, which resulted in countries' having little understanding of the process. 
MCC and the three countries have made progress in establishing compact country structures for oversight and management, procurement, fiscal accountability, and monitoring and evaluation, although some of these structures are not yet complete. The oversight structures allow for country management with MCC review, but some organizations were not fully staffed for months after the compacts entered into force. Madagascar and Cape Verde have implemented fiscal accountability structures for MCC-funded projects, and established procurement structures with effective characteristics; however, these structures are still largely untested and some are still under development. Finally, MCC and the countries have established monitoring and evaluation frameworks to track and account for program results. However, limitations in the baseline data collected, linkage to economic analyses, methods of addressing uncertainty associated with program results, and the timely design of randomized controlled trials may constrain MCC's ability to monitor and evaluate program results.
The term preventive services refers to a range of services aimed at preventing and diagnosing serious health conditions among adults and children, as well as managing health conditions through early treatment to prevent them from worsening. Generally, preventive services are intended for the following three purposes: Prevent a health condition from occurring at all. Immunizations to prevent diseases such as influenza or pneumonia qualify as this first type of preventive service, called primary prevention. Prevent or slow a condition’s progression to a more significant health condition by detecting a disease in its early stages. Mammograms to detect breast cancer and other screening tests to detect disease early are examples of this second type of preventive service, called secondary prevention. Prevent or slow a condition’s progression to a more significant health condition by minimizing the consequences of a disease. Services that help manage existing health conditions, such as diet or exercise counseling to manage obesity or medication to manage high blood pressure, are examples of this third type of preventive service, called tertiary prevention. Preventive services can help prevent or manage a number of serious health conditions, such as heart disease, diabetes, obesity, and cancer. For example, heart disease and stroke are leading causes of death and disability in the United States, and the risk of developing these conditions can be substantially reduced if high blood pressure and cholesterol—which can develop in children as well as adults—are detected early and managed through diet, exercise, medication, or a combination. Similarly, diabetes is a leading cause of blindness, renal disease, and amputation, and also contributes to heart disease. Early diagnosis and management of diabetes, by controlling levels of blood glucose, blood pressure, and cholesterol, can reduce the risk of these and other diabetes complications.
Finally, the importance of obesity as a health problem for both children and adults in the United States is increasingly apparent. Obesity is associated with an increased risk of many other serious conditions, including heart disease, stroke, diabetes, and several types of cancer. Overweight and obese children are at risk of health problems during their youth, such as diabetes, and are more likely than other children to become obese adults. Intensive counseling about diet, exercise, or both can promote sustained weight loss for obese adults. The federal government has established national health objectives and goals to monitor the health of the U.S. population, and several reflect the importance of preventive services. Healthy People 2010, coordinated by the Office of Disease Prevention and Health Promotion within HHS, is a statement of national health objectives designed to identify the most significant preventable threats to health and to establish national goals to reduce these threats to certain target levels. Some of the national goals established through Healthy People 2010 include reducing the proportion of children and adults who are obese, reducing the proportion of adults with high blood pressure and high blood cholesterol, reducing the overall rate of diabetes and increasing the proportion of adults with diabetes whose condition has been diagnosed, and increasing the proportion of children and adults who receive recommended preventive screening tests and immunizations. Recent reviews, however, show that in some cases the nation has made no progress toward these goals or has even moved away from them, underscoring the importance of continued attention to prevention. Under federal law, state Medicaid programs generally must cover EPSDT services for children under age 21. A key component of EPSDT is that it entitles children to coverage of well-child check ups, which may target health conditions for which growing children are at risk, such as obesity.
An EPSDT well-child check up must include a comprehensive health and developmental history, a comprehensive unclothed physical exam, appropriate immunizations and laboratory tests, and health education. EPSDT well-child check ups may be a vehicle to provide preventive services to children, such as measurement of height and weight, nutrition assessment and counseling, immunizations, blood pressure screening, and cholesterol and other appropriate laboratory tests. State Medicaid programs must provide EPSDT services at intervals which meet reasonable standards of medical and dental practice as determined by the state and as medically necessary to determine the existence of a suspected illness or condition. Accordingly, states either develop their own periodicity schedules, that is, age-specific timetables that identify when EPSDT well-child check ups and other EPSDT services should occur, or they may adopt a nationally recognized schedule, such as that of the American Academy of Pediatrics, which recommends well-child check ups once each year or more frequently, depending on age. State periodicity schedules for fiscal year 2006 generally specified multiple well-child check ups per year for children aged 0 through 2, one well-child check up per year for children aged 3 through 5, and a well-child check up every 1 to 2 years for children aged 6 through 20. The Omnibus Budget Reconciliation Act of 1989 (OBRA 89) required the Secretary of HHS to set annual goals for children’s receipt of EPSDT services, and CMS established a yearly goal that each state provide EPSDT well-child check ups to at least 80 percent of the Medicaid children in the state who should receive one, based on the state’s periodicity schedule. Under the authority of OBRA 89, CMS also requires that states submit annual EPSDT reports known as the CMS 416. Along with other information, the CMS 416 captures the information used to measure progress toward the 80 percent goal. 
On the CMS 416, this information is termed the EPSDT participant ratio. For adults, Medicaid programs generally are not required to cover preventive services. States operate their Medicaid programs within broad federal requirements that generally require states to cover certain mandatory benefit categories, such as “physician services,” and provide states the choice to cover a range of additional optional benefit categories, thereby creating programs that may differ from state to state. As federal Medicaid law does not define preventive services or include these services under a mandatory benefit category, states can opt to cover various preventive services for adults under different categories. For example, states may choose to cover certain preventive services as part of “preventive, diagnostic, and screening services,” an optional benefit category under Medicaid. They may also choose to cover other specific preventive services such as cholesterol tests under other mandatory or other optional benefit categories. CMS officials said they do not track the specific preventive services covered for adults by each state Medicaid program. National survey data suggest that children in Medicaid under age 21 are at risk of certain health conditions, particularly obesity, that can be identified or managed by preventive services, and many are not receiving well-child check ups. The same surveys suggest that Medicaid adults are also an at-risk population—nearly 60 percent were estimated to have at least one health condition we reviewed that can be identified or managed by preventive services—and their receipt of preventive services varied widely depending on the service. Obesity is a serious health concern for children enrolled in Medicaid. NHANES examinations conducted from 1999 through 2006 suggest that nearly one in five children in Medicaid aged 2 through 20 (an estimated 18 percent) were obese. 
These rates of obesity are well above the Healthy People 2010 target goal of reducing to 5 percent the proportion of children nationwide who are obese or overweight (see fig. 1). Furthermore, about half (an estimated 54 percent) of the Medicaid children who were obese, or their parents, reported that the child had not previously been diagnosed as overweight. Among privately insured children, an estimated 14 percent were obese. The NHANES examinations also revealed that some children in Medicaid have other potentially serious health conditions that can be identified and managed by preventive services. Among Medicaid children aged 8 through 20 years, an estimated 4 percent had high blood pressure. Among Medicaid children aged 6 through 20 years, an estimated 10 percent had high cholesterol. These rates were generally similar to estimates for privately insured children. MEPS data from 2003 through 2006 suggest that many children in Medicaid do not regularly receive well-child check ups. Children in Medicaid are generally eligible for a well-child check up at least once every 1 to 2 years, but an estimated 41 percent of children in Medicaid aged 2 through 20 had not received a well-child check up during the previous 2-year period. This proportion varied by the children’s age: for example, an estimated 22 percent of children in Medicaid aged 2 through 4, 40 percent of children in Medicaid aged 5 through 7, and 48 percent of children in Medicaid aged 8 through 10 had not received a well-child check up during the previous 2-year period (see fig. 2). In comparison, the estimated proportions of privately insured children who had not received a well-child check up were generally similar. Our analysis of MEPS data also showed that, for children in Medicaid, reported rates of receipt of certain specific preventive services that could occur during a well-child check up were correspondingly low. 
For example, an estimated 37 percent of children in Medicaid aged 2 through 20 had not had a blood pressure test, and an estimated 48 percent of children in Medicaid aged 2 through 17 had not received diet or exercise advice from a health care professional during the 2 years prior to the survey. The data suggest, however, that most children in Medicaid aged 2 through 17—an estimated 88 percent—had their height and weight measured by a health care professional during the 2 years prior to the survey. The estimated rates of receipt of blood pressure tests, height and weight measurement, and diet or exercise advice were generally similar for children in Medicaid and privately insured children. NHANES data suggest that a majority of adults in Medicaid aged 21 through 64 have at least one potentially serious health condition. An estimated 57 percent of Medicaid adults had obesity, diabetes, high cholesterol, high blood pressure, or a combination of these conditions. Obesity was the most common of these health conditions; an estimated 42 percent of adults in Medicaid aged 21 through 64 were obese (see fig. 3). As with children in Medicaid, the rate of obesity among adults aged 21 through 64 in Medicaid was well above national goals—the estimated 42 percent rate of obesity among Medicaid adults was nearly three times higher than the Healthy People 2010 target goal of 15 percent. The estimated rate of obesity among adults in Medicaid was also somewhat higher than the estimated rate among privately insured adults, which was 32 percent. Adults in Medicaid were almost twice as likely to have diabetes compared to privately insured adults: 13 percent of examined adults in Medicaid were estimated to have diabetes, compared to 7 percent of privately insured adults. Estimated rates of high blood pressure and high cholesterol were similar between both health insurance groups (see fig. 3). 
The NHANES interview data also suggest that a large proportion of adults in Medicaid found to have these health conditions may not have been aware of them prior to participation in the NHANES examination. An estimated 40 percent of adults in Medicaid found to have one or more of the health conditions we reviewed had at least one condition that they reported had not been previously diagnosed. The percentage of adults in Medicaid who reported that their health condition had not been previously diagnosed varied by condition: for example, an estimated 17 percent of adults in Medicaid with diabetes reported that this condition had not been previously diagnosed, while an estimated 35 percent of those with high cholesterol reported that this condition had not been previously diagnosed (see fig. 4). These estimates were similar to those of privately insured adults. MEPS data suggest that Medicaid adults’ receipt of recommended preventive services varied widely by service. For example, an estimated 93 percent of adults in Medicaid aged 21 through 64 received a blood pressure test during the 2 years prior to the survey. Similarly, an estimated 90 percent of women in Medicaid aged 21 through 64 received a cervical cancer screening during the 3 years prior to the survey. However, estimated rates of receipt were lower for other important recommended preventive services. For example, only an estimated 41 percent of adults in Medicaid aged 50 through 64 had ever received a colorectal cancer screening test. Similarly, estimates based on NHIS data suggest that only 33 percent of adults in Medicaid aged 21 through 64 with high blood pressure had received a screening test for diabetes within the past 3 years (see fig. 5). 
As compared to the privately insured adult population, MEPS and NHIS data show that a lower percentage of adults in Medicaid received certain recommended preventive services, in particular, mammograms, cholesterol tests, diabetes screening, or colorectal cancer screening, within recommended time frames. Medicaid and privately insured adults were estimated to be about equally likely to receive recommended blood pressure tests, diet or exercise advice, and influenza immunizations within recommended time frames. Most state Medicaid programs reported on our survey that they monitored and set goals for children’s utilization of certain preventive services. Most states also reported undertaking multiple initiatives since 2004 to promote preventive services. In response to our survey, most of the 51 state Medicaid programs reported that they monitored utilization of one or more preventive services by children in Medicaid. For example, when asked whether they monitored children’s utilization of Medicaid well-child check ups or health risk assessments, 42 states reported doing so. States less frequently reported monitoring utilization of specific services that could be provided during these well-child check ups, such as blood pressure tests or obesity screenings (see fig. 6). When asked the reasons why they were not conducting more monitoring of children’s utilization of preventive services in Medicaid (beyond federally required monitoring through the CMS 416), the top two reasons states chose were “administrative burden” and “technology challenges.” In addition to monitoring specific preventive services, about two-thirds of state Medicaid programs reported that the state had established its own target goals or benchmarks for children’s utilization of preventive services, in addition to the CMS goal that each state provide EPSDT well-child check ups to at least 80 percent of Medicaid children in a state who should receive one, based on the state’s periodicity schedule. 
For example, 33 states reported they had established utilization goals of their own, separate from CMS’s 80 percent goal, for children’s well-child check ups. Twenty-six states reported goals for the total number of any preventive services received, and 12 states reported utilization goals for at least one specific preventive service such as obesity screening, diabetes screening, blood pressure tests, cholesterol tests, or cervical cancer screening. States that had established goals often reported, however, that not all of their goals were being met. For example, of the states with a goal for children’s utilization of well-child check ups, 42 percent reported that the goal was not being met. The top two reasons states cited for not meeting utilization goals were beneficiaries missing appointments and beneficiaries or their families not being concerned about receiving preventive services. A few states also mentioned difficulties with tracking service utilization. Although most state Medicaid programs reported monitoring and setting goals for children’s utilization of preventive services, these efforts differ by type of service delivery system; programs more often monitor or set goals for services provided to children in managed care than for services provided to children in fee-for-service delivery systems. For example, of the 37 states reporting that at least some children in Medicaid were enrolled in managed care, 33 (89 percent) reported monitoring well-child check ups provided through managed care organizations. In contrast, of the 47 programs reporting that at least some children received services through a fee-for-service delivery system, 26 (55 percent) reported monitoring utilization of well-child check ups provided by fee-for-service providers. Similarly, goals for children’s utilization of preventive services were most often targeted to managed care organizations. 
For example, 25 of 37 states with children enrolled in Medicaid managed care organizations (68 percent) reported having established goals for the managed care organizations’ provision of well-child check ups, compared to 16 of 47 Medicaid programs (34 percent) with children in fee-for-service. Most state Medicaid programs (47) reported conducting multiple initiatives since 2004 to improve providers’ provision of preventive services to children in Medicaid, most commonly educating pediatric providers about coverage of preventive services (42 states), increasing payment rates for pediatric providers for office visits or specific preventive services (37 states), streamlining payment processing (29 states), and starting a provider advisory panel (29 states). States that had implemented one or more of the above four initiatives often viewed them as successful. About half of the states implementing them reported that the initiative had resulted in some improvement or major improvement. Most of the other half reported that they did not know the extent of improvement; only a few states reported that any of the initiatives had not resulted in improvement. State Medicaid programs also reported conducting several types of initiatives directed at Medicaid beneficiaries, such as encouraging children’s use of preventive services through direct mail or telephone outreach, and many also reported initiatives specifically targeted at reducing obesity in Medicaid children. For example, 37 states reported initiatives to educate providers to conduct obesity screening or counseling for Medicaid children, and 12 states reported implementing family-based childhood obesity prevention programs. Most state Medicaid programs reported that they choose to cover some but not all of the preventive services we asked about on our survey. 
Of the eight recommended services we asked about, the services that were most commonly reported as covered for adults were cervical cancer screenings and mammograms, which were covered by 49 and 48 states, respectively. Four additional preventive services were reported as covered for adults by three-quarters or more of the 51 states. These four services were diabetes screenings, cholesterol tests, colorectal cancer screenings, and influenza immunizations. The remaining two recommended services—intensive counseling for adults with obesity and intensive counseling for adults with high cholesterol—were reported as covered for adults by less than one-third of states. Thirteen states (25 percent) reported covering intensive counseling for obese adults and 14 states (27 percent) reported covering intensive counseling for adults with high cholesterol (see fig. 7). Thirty-nine states reported covering well-adult check ups or health risk assessments for adults, which provide an opportunity for delivering other recommended preventive services such as blood pressure tests and obesity screenings. (See appendix III for more detailed survey results.) In examining a selected, non-generalizable sample of 18 state Medicaid programs’ Medicaid managed care contracts, we found wide variation in the extent to which the contracts delineated coverage expectations for specific preventive services. As we have previously reported, specific and comprehensive contract language helps ensure that managed care organizations know their responsibilities and can be held accountable for delivering services. According to one expert on Medicaid managed care contracts, state Medicaid programs run the risk that managed care organizations may not cover certain services the program intends to cover if Medicaid managed care contracts lack specific and comprehensive contract language related to covered services. 
Three of the contracts did not specifically refer to any of the preventive services that state Medicaid programs reported were required to be covered by managed care organizations in those states. By contrast, two contracts specifically referred to all of the preventive services that the state reported covering. CMS oversight is primarily focused on children’s receipt of EPSDT services, and consists largely of collecting state EPSDT reports. CMS has conducted few reviews of EPSDT programs, including those that CMS 416 reports indicate have low participant ratios—the information used to assess progress toward CMS’s goal that each state provide EPSDT well-child check ups to at least 80 percent of the Medicaid children in the state who should receive one, based on the state’s periodicity schedule. For adults in Medicaid, CMS has issued some guidance related to preventive services and shared some best practices. CMS oversight of preventive services for children in Medicaid centers on the annual collection of the required CMS 416 report from each state Medicaid program on the provision of EPSDT services for children in Medicaid. We reported in 2001 that CMS 416 reports were often not timely or accurate, but since that time, CMS officials told us they had taken steps to improve the underlying data, and state and national health association officials concurred that the data had improved. For example, we reported in 2001 that underlying data for the CMS 416 may not be accurate in part because of incomplete data on service utilization by children in managed care. In 2007, we reported that officials from several states and national health associations stated that, although the CMS 416 was limited in its usefulness for oversight, the quality and completeness of the underlying data that states used to prepare the CMS 416, including the data collected from managed care organizations, had improved since 2001. 
State Medicaid programs’ CMS 416 reports continue to show gaps in the provision of EPSDT services to Medicaid children. CMS uses the participant ratio from the CMS 416 to measure progress toward CMS’s goal that each state provide EPSDT well-child check ups to at least 80 percent of the Medicaid children in the state who should receive one, based on the state’s periodicity schedule. By contrast, in fiscal year 2007, the national average participant ratio among 51 states reporting on the CMS 416 was 58 percent, and no state reported a ratio of 80 percent or more. Individual states reported ratios ranging from 25 to 79 percent, and 11 states had ratios under 50 percent (see fig. 8). Participant ratios from fiscal years 2000 through 2006 are generally consistent with those in fiscal year 2007, though there is some variation between years. For example, in fiscal year 2006, 2 states reported participant ratios greater than 80 percent, and 15 states reported ratios under 50 percent. Although the completeness and accuracy of the CMS 416 data may have improved in recent years, according to agency officials, the CMS 416 is still limited for oversight purposes. For example, the form does not differentiate between the delivery of services for children in managed care and fee-for-service programs or illuminate possible factors contributing to low receipt of services. We reported in 2007 that many officials from national health associations told us the CMS 416 did not provide enough information to allow CMS to assess the effectiveness of states’ EPSDT programs. One official who works with many state Medicaid agencies told us that states do not generally use the CMS 416 to inform their monitoring and quality improvement activities. 
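The participant ratio described above is, in essence, a simple proportion. The following minimal sketch illustrates how such a ratio could be computed; the function name and the state totals are hypothetical, assuming the ratio is the share of Medicaid children due a well-child check up under the state's periodicity schedule who actually received one:

```python
def participant_ratio(checkups_received: int, checkups_expected: int) -> float:
    """Illustrative EPSDT participant ratio: the share of Medicaid children
    who were due at least one well-child check up (per the state's
    periodicity schedule) and received one. Returns a fraction in [0, 1]."""
    if checkups_expected == 0:
        return 0.0
    return checkups_received / checkups_expected

# Hypothetical state totals, chosen to mirror the 58 percent
# national average reported for fiscal year 2007.
ratio = participant_ratio(58_000, 100_000)
print(f"participant ratio: {ratio:.0%}")        # participant ratio: 58%
print("meets 80 percent goal:", ratio >= 0.80)  # meets 80 percent goal: False
```

A state reporting a ratio of 0.80 or higher would meet CMS's goal; as noted above, no state did so in fiscal year 2007.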
In addition to collecting the CMS 416, CMS officials also oversee the provision of preventive services to children in Medicaid through occasional reviews of individual state EPSDT programs, which are conducted by CMS regional offices; we previously reported such reviews were helpful in illuminating policy and process concerns as well as innovative practices of states. The reviews look at how states meet statutory requirements—such as ensuring that all eligible Medicaid beneficiaries under 21 are informed of and have access to EPSDT services—and are conducted with the intent of identifying deficiencies and providing recommendations and guidance to states to help improve their programs. For example, one review assessed a state’s performance in ensuring that managed care organizations and providers understood the benefits available under EPSDT and their respective responsibilities for providing these services. Another review investigated whether a state had developed an appropriate periodicity schedule and examined coordination of children’s care in the context of a managed care service delivery system. CMS’s EPSDT reviews have also examined data collection and reporting—for example, one review examined the extent to which a state collected CMS 416 data in accordance with instructions and used the data to measure progress and define areas for improvement. EPSDT program reviews can and have resulted in recommendations and corrective action plans intended to improve the provision of EPSDT services. The reviews have also highlighted best practices that could be emulated by other state Medicaid programs. Recommendations—which are, according to CMS officials, implemented at a state’s discretion—have included actions such as assessing potential impediments to timely access to EPSDT services, ensuring that providers are aware of how to access current data in order to monitor their efforts, and developing a state standard for timely access to services. 
For example, one review found that providers seemed confused about the health plans’ requirements for prior authorization and specialty referrals; CMS recommended that the state assess whether the providers’ understanding of prior authorization procedures was impeding timely access to EPSDT services and, if so, ensure that training was provided to correct the misunderstanding. Corrective action plans—upon which states must act, according to CMS officials—have included requirements for states to improve the process of informing beneficiaries, providers, and community partners about the support services available through Medicaid and how to access them, to develop an appropriate methodology to report data for the CMS 416, and to identify and implement strategies to increase vaccination of children against pneumonia. Best practices that reviews have identified have included a statewide EPSDT outreach effort to ensure that beneficiaries are aware of the availability of Medicaid services, a dance program that addresses childhood obesity, and the provision of Medicaid instructions and written materials in a patient’s primary language. With the exception of reviews specifically focused on dental services, CMS conducted only 11 EPSDT program reviews between April 2001 and June 2009, and few states with low participant ratios had been reviewed. For example, eight states reported participant ratios below 50 percent on all of their annual CMS 416 reports from fiscal years 2000 through 2007. Of those eight states, six had not had their EPSDT programs reviewed by CMS between April 2001 and June 2009. Although CMS has developed an EPSDT review guide to promote consistency, according to CMS officials there is no CMS directive or requirement for the CMS regional offices to perform these reviews, and CMS has not established criteria or a schedule for performing regular reviews. 
CMS oversight of preventive services for children in Medicaid also includes providing policy guidance to state Medicaid programs, such as through its State Medicaid Manual and other issuances. A 2006 study raised concerns that Medicaid providers may not be aware of the extent to which obesity services were covered or reimbursed under EPSDT, and that states’ provider manuals often did not explain this coverage. For example, the study found that state Medicaid manuals did not specifically discuss coverage of nutritional counseling, and that states may not have been correctly compensating providers whose practices emphasized appropriate obesity interventions. The study recommended that states take several steps, including clarifying the proper coding and payment procedures for obesity prevention and treatment services. CMS officials told us that they intend to draft policy guidance to address these concerns and that the guidance would suggest methods for reporting and charging for obesity-related services, but that, as of the time of our review, they had not yet begun drafting this guidance. Unlike CMS’s oversight of children’s EPSDT services, CMS is not required to collect utilization data from states on adults’ receipt of services and, according to officials, does not conduct program reviews as it does for EPSDT services for children in Medicaid. CMS has, however, issued guidance for state Medicaid programs through State Medicaid Director Letters (SMDL) on topics relevant to adult preventive services. 
For example, one letter issued in 2004 provided guidance on how states could cover certain services, known as disease management services, to manage chronic health conditions such as diabetes in their Medicaid programs and discussed how new disease management models could be implemented by states. As of March 2009, CMS had not issued similar coverage guidance on other recommended preventive services we reviewed for adults, such as obesity screening and intensive counseling. Although CMS has issued some guidance through SMDLs, several state Medicaid programs expressed that additional guidance could be helpful. In response to an open-ended survey question on support state Medicaid programs would like from CMS related to preventive services, 12 states reported they would like more technical assistance and guidance from CMS. For example, one state reported that the state would like clarification of restrictions to coverage of preventive services and another reported it would like advice on how to monitor improvements in utilization of preventive services. In addition, four states expressed interest in CMS sharing best practices of other states. As of March 2009, there were 24 promising practices for Medicaid and CHIP on the CMS Web site; 8 of these pertained to preventive services for adults. The prevalence of obesity and other health conditions among Medicaid beneficiaries nationally suggests that more can and should be done to ensure this vulnerable population receives recommended preventive services. Although Medicaid children generally are entitled to coverage of EPSDT services that may identify and address health conditions such as obesity, both national survey data and states’ 416 reports to CMS suggest that children’s receipt of EPSDT services is well below national goals. Further, providers may not understand that services to screen for and manage obesity are covered under EPSDT. 
State-specific reviews of EPSDT programs have helped identify needed improvements but too few have been done. For adults, states’ coverage of preventive services generally is not required, but USPSTF recommends certain preventive services for specific ages and risk groups, and such services can be covered by Medicaid if states choose to do so. National survey examination data suggest that the provision of recommended services could benefit adults in Medicaid, as nearly 6 in 10 adults in Medicaid have one or more potentially preventable health conditions. States and CMS have acted in recent years to improve the provision and monitoring of preventive services for the Medicaid population. CMS intends to develop policy guidance for obesity services for Medicaid children under EPSDT, though, as of the time of our review, it had not done so. However, gaps in provision of services remain. An estimated 41 percent of Medicaid children aged 2 through 20 participating in a nationally representative survey had not received a well-child check up during a 2-year period, and receipt of recommended preventive services in the adult Medicaid population varied widely, depending on the service. Improved access to preventive services for Medicaid beneficiaries will take a concerted effort by the federal government and states. To improve the provision of preventive services to the Medicaid population, we recommend that the Administrator of CMS take the following two actions: Ensure that state EPSDT programs are regularly reviewed to identify gaps in provision of EPSDT services to children and to identify needed improvements. Expedite current efforts to provide policy guidance on coverage of obesity-related services for Medicaid children, and consider the need to provide similar guidance regarding coverage of obesity screening and counseling, and other recommended preventive services, for adults. We provided a draft of this report to HHS for comment, and CMS responded on behalf of HHS. (See app. 
IV.) CMS concurred with both of our recommendations, and commented that the agency recognizes the need for and the value of preventive services, and will remind states of the importance of ensuring that children receive a comprehensive well-child check up, and of the importance of providing preventive services to adults. CMS agreed with our recommendation that the agency ensure state EPSDT programs are regularly reviewed. CMS committed to establishing a training program and protocol for the state reviews and technical assistance by the end of the year and also commented that it intends to conduct related efforts, including developing a comprehensive work plan to establish a regular schedule for reviewing state policy and implementation efforts and reviewing and revising the CMS 416. CMS also agreed with our recommendation that the agency expedite efforts to provide guidance to states on coverage of obesity-related services for Medicaid children, and consider the need to provide similar guidance regarding coverage of obesity screening and counseling, and other recommended preventive services, for adults. CMS committed to providing guidance on obesity-related services for children through an SMDL by the end of the calendar year. CMS also highlighted the agency’s involvement in several initiatives related to childhood obesity at the national level and the agency’s support of the development of new Healthcare Effectiveness Data and Information Set measures that address obesity. CMS also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of HHS and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix V. The National Health and Nutrition Examination Survey (NHANES), conducted multiple times since the early 1960s by the Department of Health and Human Services’ (HHS) National Center for Health Statistics of the Centers for Disease Control and Prevention (CDC), is designed to provide nationally representative estimates of the health and nutrition status of the noninstitutionalized civilian population of the United States. NHANES provides information on civilians of all ages. Prior to 1999, three periodic surveys were conducted. Since 1999, NHANES has been conducted annually. For this study, we examined data from 1999 through 2006 on children aged 2 through 20 and adults aged 21 through 64. We grouped NHANES data from 1999 through 2006 in order to include a sufficient number of survey participants to provide a reliable basis for assessing the extent of health conditions in the Medicaid population. To assess the reliability of NHANES data, we interviewed knowledgeable officials, reviewed relevant documentation, and compared the results of our analyses to published data. We determined that the NHANES data were sufficiently reliable for the purposes of our engagement. Our analysis of NHANES data focused on physical examinations and laboratory tests for a variety of health conditions. As part of an overall physical examination of survey participants, trained medical personnel generally obtain a blood sample and administer laboratory tests such as measurement of total blood cholesterol and glucose levels, obtain height and weight measurements, and conduct three or four blood pressure readings. 
To analyze these data, we considered two categories of survey participants based on their health insurance status at the time of the survey, as reported during the interview section of the survey: Medicaid beneficiaries and the privately insured. We do not present results for the uninsured, those with other forms of government health insurance, such as Medicare (we excluded adults enrolled in both Medicare and Medicaid), and those who provided no information on their health insurance status. For the 1999 through 2004 period, the NHANES Medicaid category for children includes some children enrolled in the State Children’s Health Insurance Program (CHIP). In the 2005 through 2006 NHANES data, children enrolled in CHIP can be differentiated from children enrolled in Medicaid, but we grouped these children together for consistency with the previous time period. We estimate that about 84 percent of these children were enrolled in Medicaid, with the remainder enrolled in CHIP, between 1999 and 2006. For children, we used the NHANES data to estimate the percentage who were obese, the percentage with high blood pressure, the percentage with high blood cholesterol, and the percentage of obese children who had not been diagnosed as overweight (see tables 1 and 2). Obesity. NHANES data included measures of the height and weight of children aged 2 through 20. Obesity in children aged 2 through 19 was defined as having a body mass index (BMI) equal to or greater than the 95th percentile of the age- and sex-specific BMI distribution, based on CDC growth charts for the United States; obesity in children aged 20 was defined as having a BMI of 30 or higher. Girls who were pregnant were not included in the obesity analysis. Children or their parents were also asked if the child had been diagnosed as overweight prior to participating in the survey. High Blood Pressure. NHANES data included up to four blood pressure readings for children aged 8 through 20. 
We calculated average systolic and diastolic blood pressure based on the second, third, and fourth readings. High blood pressure in children aged 8 through 17 was defined as equal to or greater than the 95th percentile of the age-, height-, and sex-specific average systolic or diastolic blood pressure, based on blood pressure tables from HHS’s National Heart, Lung, and Blood Institute. High blood pressure in children aged 18 through 20 was defined as having an average systolic blood pressure reading of 140 millimeters of mercury (mmHg) or higher, or having an average diastolic blood pressure of 90 mmHg or higher. High Blood Cholesterol. NHANES data included measures of total blood cholesterol in children aged 6 through 20. High total blood cholesterol in children aged 6 through 20 was defined as greater than or equal to 200 milligrams per deciliter (mg/dL). For adults aged 21 through 64, we used NHANES data to estimate the percentage who were obese, the percentage with high blood pressure, the percentage with high blood cholesterol, the percentage with diabetes, and the percentage with a combination of these conditions. We used CDC definitions of these health conditions. Of adults with each of these conditions, we also estimated the percentage who reported that their condition had not been diagnosed by a health care professional prior to the survey (see tables 3 and 4). Obesity. NHANES examinations of adults included height and weight measurements. Obesity for adults was defined as having a BMI of 30 or higher (pregnant women were not included in the obesity analysis). High Blood Pressure. NHANES examinations of adults included up to four blood pressure readings. Average systolic and diastolic blood pressure readings were calculated as described for children (see footnote 62). 
High blood pressure for adults was defined as having an average systolic blood pressure reading of 140 mmHg or higher, having an average diastolic blood pressure reading of 90 mmHg or higher, or taking blood pressure lowering medication. High Blood Cholesterol. NHANES laboratory tests for adults included measurement of blood cholesterol. High total blood cholesterol for adults was defined as 240 mg/dL or more. Diabetes. A subsample of NHANES participants, those whose examination was scheduled in the morning, were asked to fast prior to having their blood drawn. Laboratory tests for this subsample of NHANES participants included measurement of fasting plasma glucose. Diabetes for adults was defined as fasting plasma glucose of 126 mg/dL or more, or having previously been diagnosed with diabetes. For all estimated percentages for children and adults, we calculated a lower and upper bound at the 95 percent confidence level (there is a 95 percent probability that the actual percentage falls within the lower and upper bounds), of beneficiaries in each of the two insurance categories using raw data and the appropriate sampling weights and survey design variables. We used the standard errors of the estimates to calculate whether any differences between the two insurance groups were statistically significant at the 95 percent confidence level. The Medical Expenditure Panel Survey (MEPS), administered by the Department of Health and Human Services’s (HHS) Agency for Healthcare Research and Quality (AHRQ), collects data on the use of specific health services. 
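As an illustration, the adult condition definitions and the significance comparison described above could be coded as follows. This is a minimal sketch, not drawn from the report's analysis programs; the function and variable names are hypothetical, and only the thresholds stated in the text are used (140/90 mmHg or medication for high blood pressure, 240 mg/dL for high total cholesterol, 126 mg/dL fasting plasma glucose or a prior diagnosis for diabetes, and a BMI of 30 for obesity).

```python
import math

def classify_adult(avg_systolic, avg_diastolic, on_bp_medication,
                   total_cholesterol, fasting_glucose, diagnosed_diabetes, bmi):
    """Apply the adult thresholds described in the text to one participant.

    Argument names are illustrative; units are mmHg, mg/dL, and kg/m^2.
    """
    return {
        # High blood pressure: average systolic >= 140 mmHg, average
        # diastolic >= 90 mmHg, or taking blood pressure lowering medication.
        "high_blood_pressure": (avg_systolic >= 140 or avg_diastolic >= 90
                                or on_bp_medication),
        # High total blood cholesterol: 240 mg/dL or more.
        "high_cholesterol": total_cholesterol >= 240,
        # Diabetes: fasting plasma glucose of 126 mg/dL or more,
        # or a previous diagnosis of diabetes.
        "diabetes": fasting_glucose >= 126 or diagnosed_diabetes,
        # Obesity: BMI of 30 or higher.
        "obese": bmi >= 30,
    }

def difference_is_significant(est1, se1, est2, se2):
    """Two-sided test, at the 95 percent confidence level, of whether two
    estimated percentages differ, using the standard errors of the estimates.
    In practice the standard errors would come from weighted, design-based
    variance estimation, not from raw counts.
    """
    z = (est1 - est2) / math.sqrt(se1 ** 2 + se2 ** 2)
    return abs(z) > 1.96  # 1.96 is the 95 percent critical value
```

For example, an adult with an average blood pressure of 145/85 mmHg who takes no medication would be classified as having high blood pressure on the systolic reading alone. In the report's analyses, the estimates and standard errors feeding the second function would be produced with the appropriate sampling weights and survey design variables.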
We analyzed results from the MEPS household component, which collects data from a sample of families and individuals in selected communities across the United States, drawn from a nationally representative subsample of households that participated in the prior year’s National Health Interview Survey (NHIS, a survey conducted by the National Center for Health Statistics at the Centers for Disease Control and Prevention (CDC)). We pooled MEPS data from multiple years to yield sample sizes large enough to generate reliable estimates for the Medicaid subpopulation. Our analysis was based on data from surveys conducted in 2003 through 2006, the most recent data available. We supplemented our MEPS analysis with analysis of data from the 2006 NHIS survey, which covered a question of interest that was not available in MEPS. It was possible to use one year of the NHIS data because its sample size is larger than that of MEPS. To determine the reliability of the MEPS and NHIS data, we spoke with knowledgeable agency officials, reviewed related documentation, and compared our results to published data. We determined that the MEPS and NHIS data were sufficiently reliable for the purposes of our engagement. The MEPS household interviews feature several rounds of interviewing covering 2 full calendar years. MEPS is continuously fielded; each year a new sample of households is introduced into the study. MEPS collects information for each person in the household based on information provided by one adult member of the household. This information includes demographic characteristics, self-reported health conditions, reasons for medical visits, use of medical services including preventive services, and health insurance coverage. We analyzed responses to MEPS questions about children’s medical visits and children’s and adults’ receipt of preventive services. 
NHIS collects information about demographic characteristics, health conditions, use of medical services, and health insurance coverage. We analyzed responses to an NHIS question on adults’ receipt of a diabetes screening test. As with the National Health and Nutrition Examination Survey (NHANES) data described in appendix I, we analyzed results for children under age 21 and adults aged 21 through 64, divided into two categories on the basis of their health insurance status. Unless noted, we used age and insurance status variables that were measured during the same interview as the questions about preventive services. Similar to NHANES, the Medicaid category in MEPS included children enrolled in the State Children’s Health Insurance Program (CHIP). We estimate that 82 percent of these children were enrolled in Medicaid, with the remainder enrolled in CHIP, between 2003 and 2006. Our NHIS analysis was limited to adults. For children, we analyzed data for several different MEPS questions to examine children’s receipt of well-child check ups and specific preventive services (see tables 5 and 6). Well-Child Check Up. The MEPS survey included questions about office-based and outpatient medical visits for children aged 0 through 20. We considered a medical visit to be a well-child check up if the visit was in person and if the respondent reported that the reason for the visit was a well-child check up, a general examination, or shots and immunizations. Using sampling weights, for each health insurance category, we estimated the percentage of children aged 2 through 20 at the end of the survey’s 2-year period who had received one or more well-child check ups during the survey’s 2-year period. We used insurance status variables that were measured at the end of the survey’s 2-year period. We used MEPS longitudinal weights to facilitate this analysis of medical visits that occurred during the 2-year survey period. 
The pooled 2-year survey periods analyzed were 2003 through 2004, 2004 through 2005, and 2005 through 2006. Blood Pressure Test. MEPS included questions about whether children aged 2 through 20 had their blood pressure measured by a doctor or health care professional, and if so, how long ago. Using sampling weights, we estimated the percentage of children in each health insurance category that had their blood pressure measured during the 2 years prior to the question being asked. Diet or Exercise Advice. MEPS included questions about whether children aged 2 through 17 had (1) received advice about eating healthy from a doctor or health care professional, and if so, how long ago, and (2) received advice about exercise, sports, or physically active hobbies from a doctor or health care professional, and if so, how long ago. Using sampling weights, we estimated the percentage of children in each health insurance category that had received advice about either a healthy diet or exercise, during the 2 years prior to the question being asked. Height and Weight Measurement. MEPS included questions about whether children aged 0 through 17 had (1) had their height measured by a doctor or health care professional, and if so, how long ago; and (2) had their weight measured by a doctor or health care professional, and if so, how long ago. Using sampling weights, we estimated the percentage of children in each health insurance category that had both their height and their weight measured during the 2 years prior to the question being asked. Height and weight were not necessarily measured at the same time, and these measurements did not necessarily take place in the context of a body mass index (BMI) calculation or obesity screening. For adults aged 21 through 64, we analyzed data for several different MEPS questions that related to receipt of recommended preventive services (see table 7). 
It was not possible to determine whether respondents received these services for screening purposes, as recommended by the United States Preventive Services Task Force (USPSTF), as opposed to receiving them for purposes of diagnosing a suspected health condition. Nevertheless, the estimates are useful in indicating the maximum percentages of adults who may have received certain recommended preventive services. For example, if 40 percent of adults aged 50 through 64 reported receiving a colorectal cancer screening, some may have received the screen for diagnostic purposes after experiencing symptoms of colorectal cancer. Regardless, in this example, 60 percent of adults in this age range—for whom colorectal cancer screening is recommended by the USPSTF—did not receive a colorectal cancer screening for any reason. Blood Pressure Test. MEPS included questions about whether adults had their blood pressure measured by a doctor or health care professional, and if so, how long ago. Using sampling weights, we estimated the percentage of adults aged 21 through 64 in each health insurance category who reported that they had their blood pressure measured during the 2 years prior to the question being asked. Cholesterol Test. MEPS included questions about whether adults had their cholesterol tested by a doctor or health care professional, and if so, how long ago. Using sampling weights, we estimated the percentage of adults in each health insurance category for whom a cholesterol test was recommended, who reported that they had their cholesterol tested during the 5 years prior to the question being asked. USPSTF recommends cholesterol tests for men aged 35 and older, and men and women aged 20 and older with health conditions that are risk factors for heart disease. 
We used available information about risk factors for heart disease that was self-reported by survey participants to determine whether a cholesterol test was recommended on this basis; these risk factors were diabetes, high blood pressure, or BMI greater than or equal to 30. Mammogram. MEPS included questions about whether women had a mammogram, and if so, how long ago. Using sampling weights, we estimated the percentage of women aged 40 through 64 in each health insurance category who reported that they had a mammogram during the 2 years prior to the question being asked. Cervical Cancer Screening. MEPS included questions about whether women had a cervical cancer screening, and if so, how long ago. Using sampling weights, we estimated the percentage of women aged 21 through 64 in each health insurance category who had not reported having a hysterectomy and who reported that they had a cervical cancer screening during the 3 years prior to the question being asked. Colorectal Cancer Screening. MEPS included questions about whether adults had a colonoscopy, a sigmoidoscopy, or a stool test, and if so, how long ago. Using sampling weights, we estimated the percentage of adults aged 50 through 64 in each health insurance category who reported that they had ever had one of these tests. Influenza Immunization. MEPS included questions about whether adults had received a flu shot, and if so, how long ago. Using sampling weights, we estimated the percentage of adults aged 50 through 64 in each health insurance category who reported that they had a flu shot during the year prior to the question being asked. Diet or Exercise Advice. MEPS included questions about whether adults had received advice from a doctor or health care professional to (1) eat fewer high fat or high cholesterol foods, or (2) exercise more. 
Using sampling weights, we estimated the percentage of adults aged 21 through 64 in each health insurance category, whose self-reported height and weight corresponded to a BMI of 30 or higher, who reported that they had ever received either diet or exercise advice. This type of advice does not fulfill the USPSTF recommendation that obese adults receive sustained intensive obesity counseling, but it provides an indicator of the maximum proportion of adults who could have received such counseling. Diabetes Screening. MEPS interviews from 2003 through 2006 did not ask about adults’ receipt of diabetes screening tests, but the 2006 NHIS did; adults who had not previously been diagnosed with diabetes were asked if they had been tested for high blood sugar or diabetes in the last 3 years. Using NHIS sampling weights, we estimated the percentage of adults aged 21 through 64 in each health insurance category, who reported having high blood pressure and who reported that they had received a screening test for diabetes during the 3 years prior to answering the question. USPSTF recommends diabetes screening for adults with high blood pressure. For all estimated percentages for children and adults, we calculated a lower and upper bound at the 95 percent confidence level using the appropriate sampling weights and survey design variables. We used the standard errors of the estimates to calculate whether any differences between the insurance groups were statistically significant at the 95 percent confidence level. To gather information about state Medicaid programs’ coverage, oversight, and promotion of preventive services, we surveyed 51 state Medicaid directors (in the 50 states and the District of Columbia). The survey was conducted from October 29, 2008, through February 6, 2009. 
It included questions on the coverage of preventive services for adults; the methods used for oversight of preventive services for children and adults, including monitoring of utilization of specific services; utilization goals, including whether or not goals were being met; state promotion efforts and specific initiatives aimed at preventive services; and the federal support provided to state Medicaid programs for the provision of preventive services. Many of the survey questions asked state Medicaid directors to consider specific Medicaid populations, such as children in Medicaid under age 21 or adults in Medicaid age 21 and over, or beneficiaries enrolled in managed care organizations (MCO) or fee-for-service (FFS). We developed the content of the survey based on interviews with officials from the Centers for Medicare & Medicaid Services (CMS) and state Medicaid programs, and a review of documents from CMS and external reports. Some content changes were made after pre-testing with state Medicaid programs. Many of our survey questions focused on specific preventive services. For example, the survey included questions about states’ coverage for adults, and monitoring for adults and children, of several specific preventive services including well-child and well-adult check ups, health risk assessments, diabetes screening, cholesterol tests, cervical cancer screening, mammography, colorectal cancer screening, and influenza immunization. We asked about these specific preventive services because they were related to recommended preventive services and to the services we examined in our analysis of Medical Expenditure Panel Survey (MEPS) and National Health Interview Survey (NHIS) data (see appendix II). We did not ask about coverage of services for children because the children’s Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) benefit is required to be covered under Medicaid. 
To establish the reliability of our survey data, we spoke with knowledgeable agency officials in developing the survey, pre-tested the survey questions, and followed up with state Medicaid officials to achieve a 100 percent response rate. Survey responses were submitted electronically. In a few cases, when states gave responses that were unclear or incomplete, we followed up with states to ensure that their responses contained the most accurate and current information available. We determined that the data submitted by states were sufficiently reliable for the purposes of our engagement. In addition to the individual named above, Katherine M. Iritani, Acting Director; Emily Beller; Susannah Bloch; Elizabeth Deyo; Erin Henderson; Martha Kelly; Teresa Tam; and Hemi Tewarson made key contributions to this report.
Medicaid: Extent of Dental Disease in Children Has Not Decreased, and Millions Are Estimated to Have Untreated Tooth Decay. GAO-08-1121. Washington, D.C.: September 23, 2008.
State Children’s Health Insurance Program: Program Structure, Enrollment and Expenditure Experiences, and Outreach Approaches for States That Cover Adults. GAO-08-50. Washington, D.C.: November 26, 2007.
Medicaid: Concerns Remain about Sufficiency of Data for Oversight of Children’s Dental Services. GAO-07-826T. Washington, D.C.: May 2, 2007.
Childhood Obesity: Factors Affecting Physical Activity. GAO-07-260R. Washington, D.C.: December 6, 2006.
Childhood Obesity: Most Experts Identified Physical Activity and the Use of Best Practices as Key to Successful Programs. GAO-06-127R. Washington, D.C.: October 7, 2005.
Medicaid Managed Care: Access and Quality Requirements Specific to Low-Income and Other Special Needs Enrollees. GAO-05-44R. Washington, D.C.: December 8, 2004.
Medicare Preventive Services: Most Beneficiaries Receive Some but Not All Recommended Preventive Services. GAO-04-1004T. Washington, D.C.: September 21, 2004.
Medicaid and SCHIP: States Use Varying Approaches to Monitor Children’s Access to Care. GAO-03-222. Washington, D.C.: January 14, 2003.
Medicare: Most Beneficiaries Receive Some but Not All Recommended Preventive Services. GAO-03-958. Washington, D.C.: September 8, 2003.
Medicare: Use of Preventive Services Is Growing but Varies Widely. GAO-02-777T. Washington, D.C.: May 23, 2002.
Medicare: Beneficiary Use of Clinical Preventive Services. GAO-02-422. Washington, D.C.: April 10, 2002.
Medicaid: Stronger Efforts Needed to Ensure Children’s Access to Health Screening Services. GAO-01-749. Washington, D.C.: July 13, 2001.
Lead Poisoning: Federal Health Care Programs Are Not Effectively Reaching At-Risk Children. GAO/HEHS-99-18. Washington, D.C.: January 15, 1999.
Medicaid Managed Care: Challenge of Holding Plans Accountable Requires Greater State Effort. GAO/HEHS-97-86. Washington, D.C.: May 16, 1997.
Medicare: Provision of Key Preventive Diabetes Services Falls Short of Recommended Levels. GAO/T-HEHS-97-113. Washington, D.C.: April 11, 1997.
Medicaid, a federal-state program that finances health care for certain low-income populations, can play a critical role in the provision of preventive services, which help prevent, diagnose, and manage health conditions. GAO examined available data to assess (1) the extent to which Medicaid children and adults have certain health conditions and receive certain preventive services, (2) for Medicaid children, state monitoring and promotion of the provision of preventive services, (3) for Medicaid adults, state coverage of preventive services, and (4) federal oversight by the Centers for Medicare & Medicaid Services (CMS). GAO analyzed data from nationally representative surveys: the National Health and Nutrition Examination Survey (NHANES), which includes physical examinations of participants, and the Medical Expenditure Panel Survey (MEPS). GAO also surveyed state Medicaid directors and interviewed federal officials. Nationally representative data suggest that a large proportion of children and adults in Medicaid have certain health conditions, particularly obesity, that can be identified or managed by preventive services, and adults' receipt of preventive services varies widely. For Medicaid children, NHANES data from 1999 through 2006 suggest that 18 percent of children aged 2 through 20 were obese, 4 percent of children aged 8 through 20 had high blood pressure, and 10 percent of children aged 6 through 20 had high cholesterol. Furthermore, MEPS data from 2003 through 2006 suggest that many Medicaid children were not receiving well-child check ups. For Medicaid adults aged 21 through 64, NHANES data suggest that more than half were obese or had diabetes, high cholesterol, high blood pressure, or a combination. MEPS data suggest that receipt of preventive services varied widely by service: receipt of some services, such as blood pressure tests, was high, but receipt of several other services was low. 
MEPS data also suggest that a lower percentage of Medicaid adults received preventive services compared to privately insured adults. For children in Medicaid, who generally are entitled to coverage of comprehensive health screenings, including well-child check ups, as part of the federally required EPSDT benefit, most but not all states reported to GAO that they monitored or set goals related to children's utilization of preventive services and had undertaken initiatives to promote their provision. Nine states reported that they did not monitor children's utilization of specific preventive services. Forty-seven states reported having multiple initiatives to improve the provision of preventive services to children. For adults in Medicaid, for whom states' Medicaid coverage of preventive services is generally not required, most states reported to GAO that they covered most but not all of eight recommended preventive services that GAO reviewed. Nearly all state Medicaid programs (49 and 48, respectively) reported covering cervical cancer screening and mammography, and three-quarters or more states reported covering four other preventive services. Two additional recommended services--intensive counseling to address obesity or to address high cholesterol--were reported as covered by fewer than one-third of states. For children in Medicaid, CMS oversees the provision of preventive services through state EPSDT reports and reviews of EPSDT programs, but gaps in oversight remain; for adults, oversight is more limited. For children, state reports showed that, on average, 58 percent of Medicaid children who were eligible for an EPSDT service in 2007 received one, far below the federal goal of 80 percent. CMS reviewed only 11 state EPSDT programs between April 2001 and June 2009. Few states reporting low rates of service provision were reviewed. 
CMS guidance to states may also have gaps: a 2006 study raised concerns that providers may not be aware of coverage of obesity-related services for Medicaid children. CMS has recognized the need for but has not yet begun drafting guidance on such coverage. For adults, CMS has provided some related guidance to states, but not on the reviewed preventive services.
Charter schools are public schools created to achieve a number of goals, including encouraging innovation in public education and addressing failing schools. Charter schools operate with more autonomy than traditional public schools in exchange for agreeing to improve student achievement, an agreement that is formalized in a contract or charter with the school’s authorizing body. The number of charter schools in the United States continues to grow, from about 3,000 in school year 2003-2004 to almost 5,000 in school year 2009-2010. Spurring this growth are parents’ and others’ desire for schools that reflect their vision of public education, and federal incentives, such as the recent $4 billion Race to the Top (RTT) competitive grant fund, which among other things, encourages the growth of high performing charter schools, and the Charter Schools Program Grants for Replication and Expansion of High Quality Charter Schools. States specify which entities within the state can authorize the establishment of a charter school, including state departments of education, state boards of education, school districts or local educational agencies (LEA), institutions of higher education, and municipal governments. Some states have also created independent charter school boards that can authorize charter schools in the state. Once charter schools are in operation, the authorizer is generally responsible for monitoring school performance and has authority to close the school or take other actions if academic goals or state financial requirements are not met. States also define how charter schools will be structured, and they do so in different ways (see fig. 1). For example, unlike traditional public schools that are part of a larger LEA, some states establish charter schools as their own LEA. Other states require them to be part of a larger LEA, while still other states allow charter schools the option to choose between being a distinct LEA or part of a larger LEA. 
Further, some states allow charter schools to be their own LEA for some purposes and part of a larger LEA for others, including for purposes of special education. With respect to special education, two common practices are that (1) in states that define a charter school to be a part of a larger LEA, the responsibility for providing special education services to charter school students with disabilities remains with that LEA and (2) in states where charter schools are their own LEA, the state makes charter schools responsible for providing the services themselves. Like traditional public schools, charter schools are subject to a number of federal requirements. Section 504 of the Rehabilitation Act of 1973 and the Individuals with Disabilities Education Act (IDEA), as amended, are the two primary laws that address the rights of students with disabilities to education. IDEA was enacted in 1975 and authorizes federal funding for special education and related services. For states that accept IDEA funding, the statute sets out detailed requirements regarding the provision of special education, including the requirement that children with disabilities receive a free appropriate public education. In addition, under IDEA, states must ensure that an Individualized Education Program (IEP) is developed and implemented for each student with a disability. The IEP process creates an opportunity for teachers, parents, school administrators, related services personnel, and students (when appropriate) to work together to improve educational results for children with disabilities. These requirements apply in public charter schools just as they do in traditional public schools. IDEA provides funding and assigns responsibility for complying with requirements to states, and through them, to LEAs. 
In ensuring that IDEA requirements are met for students with disabilities attending charter schools, states may retain that responsibility or assign it to the charter school LEA, the larger LEA to which the charter school belongs, or some other public entity. Section 504 of the Rehabilitation Act, enacted in 1973, is a civil rights statute that prohibits discrimination against an otherwise qualified individual with a disability solely by reason of disability in any program or activity receiving federal financial assistance or under any program or activity conducted by an executive agency. Education’s Section 504 regulation states that no qualified person with a disability shall, on the basis of disability, be excluded from participation in, be denied the benefits of, or otherwise be subjected to discrimination under any program or activity that receives federal financial assistance. Subpart D of Education’s regulation contains specific requirements regarding elementary and secondary education, including the provision of a free appropriate public education (FAPE) to each qualified person with a disability in the recipient’s (recipient of federal financial assistance) jurisdiction, regardless of the nature or severity of the person’s disability. Even if a state declines IDEA funds, the state must comply with Section 504 if it receives other federal financial assistance. Education’s Office for Civil Rights (OCR) enforces Section 504 for the department’s programs through investigation of complaints and compliance reviews that are initiated by the department. Title II of the Americans with Disabilities Act of 1990, as amended, prohibits discrimination based on disability by public entities, including schools. The Department of Justice and OCR both have jurisdiction to investigate complaints under this title. Charter schools enrolled a lower percentage of students with disabilities than traditional public schools in both school years 2008-2009 and 2009-2010 (see fig. 
2). For example, in school year 2009-2010, there was about a 3 percentage point difference between the percentage of students with disabilities enrolled in traditional public schools and charter schools. As shown in figure 2, the percentage of students with disabilities in charter schools increased slightly between the 2 school years we examined, while the percentage of students with disabilities in traditional public schools stayed about the same. When examining enrollment levels of students with disabilities in traditional public schools and charter schools for individual states, a more varied picture emerges. In most states, charter schools enrolled a lower percentage of students with disabilities when compared to traditional public schools. For example, in the state of New Hampshire, about 6 percent of students in charter schools were students with disabilities compared to about 13 percent of students in traditional public schools. However, in eight states—Iowa, Minnesota, Nevada, New Mexico, Ohio, Pennsylvania, Virginia, and Wyoming—charter schools enrolled the same percentage or a higher percentage of students with disabilities than traditional public schools in the state (see fig. 3). For example, in Wyoming, the enrollment level of students with disabilities in charter schools was about 4 percentage points greater than in traditional public schools. We also found that, relative to traditional public schools, the proportion of charter schools that enrolled high percentages of students with disabilities was lower overall and generally tapered off the greater the enrollment of students with disabilities. Specifically, the enrollment of students with disabilities was 8 to 12 percent at 23 percent of charter schools and 34 percent of traditional public schools. Further, when the enrollment of students with disabilities reached 12 to 16 percent, about 13 percent of charter schools compared to 25 percent of traditional public schools had these enrollment levels. 
However, when compared to traditional public schools, a higher percentage of charter schools enrolled more than 20 percent of students with disabilities. During an interview, an Education official noted that there has been an increase in charter schools designed for students with disabilities, such as schools for students with autism, which may help explain this difference. (Education collects exit data only for students with disabilities ages 14 through 21 who leave special education; therefore, there are no comprehensive data for all school-aged students who leave special education.) Most of the 13 charter schools we visited reported using multiple strategies to publicize the availability of special education services in their school and the charter school’s presence in the community. For example, some charter school officials mentioned word-of-mouth as a way of informing parents about their school. Some also reported distributing fliers in the community, mailing fliers to the parents of every kindergarten student or 5th grader, or placing ads in the local newspaper or other media. Some schools said that they did not specifically target students with disabilities. In combination with these more informal strategies, many of the charter schools we visited also said that they held an open house or meeting during which prospective students and their parents could visit the school, ask questions, and tour the facilities. Some saw the open houses as an opportunity to discuss the special education services they offered. Officials at one school said that their special education teachers attended the open house and discussed their program, including any limitations in the school’s special education offerings. Several of the charter schools could not accommodate all of the students wishing to enroll and held a lottery to determine admission.
Some said that they had waiting lists and emphasized that they accepted students on a first-come, first-served basis and thus gave no preference to students with disabilities or any other student subgroup. Many of the charter school officials we interviewed demonstrated awareness that inquiring about a student’s disability status on the charter school application might be perceived as an attempt to discourage enrollment, and they took steps to minimize that possibility. For example, in two of the states we visited, officials at charter schools that asked parents to fill out an application form said that the form did not ask questions about the student’s disability status. Once the child was accepted to the school and enrolled, some schools asked parents to fill out an enrollment form that asked for information about the child’s health history and, if the child was transferring from another school, about the child’s prior academic program, including receipt of special education services. Charter school officials emphasized that questions about disability status or prior receipt of special education services were not asked on the application form and made reference to state requirements that prohibited such questions before enrollment. According to state officials, such questions were prohibited to prevent charter school officials from using the information to identify students who were potentially more costly to serve and from attempting to discourage the parents from enrolling such students before an assessment of their needs was done. In contrast, some charter school officials in one of the three states we visited did include questions about receipt of special education services and whether the child had an IEP on the charter school application form. Officials representing the school acknowledged that the application includes such questions but said that they looked at the application only for the name, address, and telephone number.
Officials at another charter school reported that the school’s admission application collects information about whether a child has special needs, but discounted the accuracy of the information, saying that some parents of students with disabilities become confused about the services their child has received and the terminology. Many of the charter school officials we interviewed reported providing services specific to each child’s needs. The special education services offered by most of the charter schools we visited included speech and language therapy, occupational and physical therapy, counseling, and academic supports, usually in reading and math. Some charter schools visited offered vision, hearing, and behavioral supports and some mentioned providing technologies to assist students with more severe learning disabilities. Almost all of the charter schools we visited offered special education services to students in the regular classroom for most of the day, with “pull-out” sessions in a resource room for more focused services. The term “pull-out” sessions refers to the practice of providing special education services for students with disabilities in a place that is separate from the regular classroom. One school reported using “push-in” sessions, in which the special education teacher went into the classroom to provide special education services. Officials at three schools reported teaching students in a self-contained classroom, but some said they did not have the resources to provide that type of educational environment. One charter school official said that when a student’s IEP includes a service that the school does not offer, such as a self-contained classroom, the IEP committee has modified the IEP to accommodate facility limitations while still meeting the needs of the child. 
For example, that school offered more intensive services in the general classroom, staffed by a general education teacher, a special education teacher, and a teaching assistant, for students whose IEP specified those services. When a child already enrolled needed services greater than the charter school could provide, the charter schools we visited took different approaches. In charter schools where the district was responsible for placement, most of the charter school officials we interviewed said that the school district intervened to decide the appropriate placement for the child and inform the parents. In contrast, charter school LEAs took different approaches. One said that parents were told during an IEP meeting that the school could not serve certain severe disabilities. Before moving the child, officials reconvened the IEP meeting to consider the decision. Two others discussed the issue with the parents but allowed them to make the decision on where to place the child, without reference to an IEP placement decision meeting. Officials representing about half of the 13 charter schools we visited said that having sufficient resources to serve students with more severe disabilities, including providing a self-contained classroom when needed, was their greatest challenge. For example, two officials said that their school facility could not provide a self-contained classroom. A third official explained that providing a self-contained classroom was especially challenging because of the need to provide separate classrooms, as well as teachers, for each grade grouping. Thus, if a school had 3rd and 4th graders requiring self-contained classrooms, it would need space to accommodate two separate classrooms. The official said that the charter school would not have enough teachers to cover those different grade levels.
According to representatives of charter school organizations we interviewed, providing services to students with severe disabilities can be very costly, and some charter schools could face significant financial difficulties serving students with the most severe disabilities. Charter schools that cited insufficient resources as a challenge included both charter school LEAs and charter schools within a district. Other resource challenges school officials cited included the cost of specialists’ services and obtaining staff qualified to serve their students’ needs, such as a bilingual special education teacher or a specialist to teach a child with autism. However, two charter schools within a district said that, because the district provided all services needed, the cost of services was not a challenge. Both charter schools were located in the same school district. The Office for Civil Rights (OCR) is responsible for ensuring equal access to education through enforcement of the civil rights laws, including Section 504 of the Rehabilitation Act. OCR has issued regulations implementing Section 504 and conducts complaint investigations and compliance reviews to determine if entities that receive federal financial assistance from Education are in compliance with these regulations. The Section 504 regulations prohibit discrimination on the basis of disability by recipients and subrecipients of federal financial assistance from Education.
The Section 504 regulations also require that entities that receive federal financial assistance from Education and that operate public elementary or secondary schools provide a free appropriate public education to qualified students with disabilities regardless of the nature or severity of the disability. In addition, OCR issues guidance that explains the requirements of the regulations; in 2000 it issued “Applying Federal Civil Rights Laws to Public Charter Schools, Questions and Answers” about the civil rights requirements applicable to charter schools, including Section 504 requirements. OCR also provides technical assistance to school districts, parents, and other stakeholders regarding the requirements of Section 504. OCR told us that during fiscal year 2010 it investigated complaints concerning students with disabilities in charter schools. According to OCR, more than 50 percent of all the complaints OCR received that year concerned disabilities, but of those complaints, about 2 percent were made against charter schools. OCR could not readily determine from its complaint management system how many of those complaints concerned admission to charter schools. OCR officials also said that OCR has several broad compliance reviews underway related to students with disabilities and charter schools. Four of the 37 compliance reviews OCR began in fiscal year 2011 focus on charter schools. Of these, two pertain to recruitment and admissions issues and two address FAPE. Officials said that because all of these reviews are currently ongoing, they were unable to share details of what they have found thus far. The officials said that their compliance reviews involve extensive investigations that may last up to a year and result in reports of findings and violations, if any, which are posted on OCR’s website. They said they thought that the ongoing reviews were the first to include issues of students with disabilities and charter schools.
(In school year 2009-2010, approximately 3.6 percent of all students enrolled in public schools were enrolled in charter schools.) The Department of Justice also investigates complaints under Title II of the ADA, which may include complaints of discrimination against students with disabilities by public schools, including charter schools. Justice’s Civil Rights Division conducts the investigations and told us that its jurisdiction would include complaints related to admissions issues, including the types of questions asked by charter schools in applications, as well as schools’ practices and procedures for serving students with disabilities. However, the Civil Rights Division’s data collection system does not capture the number of complaints it receives by type of disability or type of school. In 2000, Education both issued its guidance on applying federal civil rights laws to public charter schools and sponsored an in-depth study highlighting issues concerning students with disabilities’ access to charter schools. Although the number of charter schools has increased since then, Education has not updated its guidance, and officials in Education’s Program and Policy Studies Service and Institute for Education Sciences are not aware of further research that might address the challenges and issues confronting charter schools today. Education’s guidance addresses a number of issues, including issues related to the education of students with disabilities. For example, with respect to outreach and recruitment practices, the guidance provides that schools may not discriminate against students with disabilities, among others, and that recruiting efforts should be directed at all segments of the community served by the school, including students with disabilities. Regarding admissions, the guidance specifically states that charter schools may not categorically deny admission to students on the basis of disability, including a student’s need for special education or related aids and services.
The guidance also notes that when an enrolled student is believed to have a disability, the school is required to follow appropriate procedures to identify and refer the student for evaluation in a timely manner. While the guidance does provide basic information about charter school practices concerning students with disabilities, it does not provide more detailed information on the acceptability of specific practices, such as asking on a charter school application form whether a child has a disability or previously had an IEP. Education also sponsored an in-depth study of students with disabilities’ access to charter schools in 2000. This study, issued by the Office of Educational Research and Improvement, examined some of the factors that may explain the difference in students with disabilities’ enrollment in charter schools and traditional public schools, most prominently highlighting a practice in which parents of students with disabilities were discouraged during the admissions process from enrolling their children in charter schools. The study, based on site visits to 35 charter schools, detailed a lack of fit between the curriculum and the student’s needs, as well as insufficient resources, as reasons given for discouraging enrollment of students with disabilities. At the time of this study, the charter school population was less than one-third of its current size, and the study may not fully explain the factors underlying lower enrollment levels in charter schools. Among the three state educational agencies (SEA) we visited, all have implemented measures addressing admission practices in some capacity. One SEA reported that it had developed detailed monitoring and guidance for charter schools concerning their responsibilities for serving students with disabilities. This SEA said that charter schools are advised of their IDEA responsibilities in the school’s application to the state for federal grant funds and in the state application to become a charter school.
This SEA also reported that a nondiscrimination clause is included in the state’s charter school application, which it said precludes charter schools from asking for information about disability status or prior receipt of special education services in their applications for admission. Admission and enrollment forms are reviewed intensively as part of the charter school application and renewal process. A second SEA sponsors webinars and works with charter schools before they open so that the schools have more opportunities to learn about the regulations and their responsibilities for educating students with disabilities. For example, this SEA is developing a webinar on how to implement state charter school law requirements that set enrollment targets for students with disabilities for all charter schools. The law also required the SEA to develop a uniform, statewide charter school admission form. The SEA official we interviewed told us that the state’s admission form does not include questions concerning disability status. While parents’ needs and preferences may influence their decisions about whether or not to place their child in a charter school, the law requires charter schools to demonstrate a good-faith effort to recruit students with disabilities. The third SEA also does not allow charter schools to ask applicants about anything related to their need for special education services at the time they apply for admission to the school. In contrast to the SEAs, the school district authorizers we interviewed reported little monitoring of charter schools’ recruitment or special education service delivery plans. Against the backdrop of a growing and changing charter school landscape, we found that enrollment of students with disabilities in the aggregate is lower in charter schools than in traditional public schools.
Whether these enrollment differences will persist or continue to narrow is difficult to predict, given the lack of information about the factors underlying these differences and how they affect enrollment levels. By issuing guidance that raises awareness about practices that might be perceived as an attempt to discourage enrollment, officials in the states we visited have already begun to take steps to forestall the possibility that charter school admission practices play a role in lower enrollment levels in charter schools. However, the guidance Education issued in 2000, while important in providing basic information to charter schools with respect to students with disabilities, does not provide more detailed information on the acceptability of specific admission practices under applicable civil rights laws. Moreover, while Education sponsored research several years ago that pointed out problems in charter school admission practices, we believe that the study’s findings do not adequately address the range of possible factors affecting enrollment raised in our report. To help charter schools recognize practices that may affect enrollment of students with disabilities and to improve the information available for monitoring and oversight, we recommend that the Secretary of Education do the following:

1. Update existing guidance to ensure that charter schools have better information about their obligations related to the enrollment of students with disabilities.

2. Conduct additional fact finding and research to understand the factors affecting enrollment of students with disabilities in charter schools and act upon that information, as appropriate.

We provided a draft of this report to the U.S. Department of Education for review and comment. The comments are reproduced in appendix IV. Education agreed with our findings and recommendations.
Education commented that it is committed to providing meaningful updated guidance to its stakeholders and that it is actively working with the charter school community, parents, civil rights organizations, and other stakeholders to determine what additional questions are most pressing and what type of revised guidance would be useful. The department also said that it anticipates that the knowledge gained from the four compliance reviews currently underway will provide additional insights into compliance issues specific to charter schools that could inform the development of guidance. Further, Education said that based on information it has received to date, including information provided in our study, the department is considering additional or updated guidance for charter schools related to recruitment, admissions, accessibility, and the provision of a free appropriate public education (FAPE). With respect to our second recommendation, Education said that over the next several years, it proposes to examine issues underlying enrollment of students with disabilities in several ways. For example, it plans to conduct focus groups with parents of students with disabilities in a sample of communities with a larger charter school presence, compile a set of case studies of charter schools with both high and low enrollment of students with disabilities, and review state policies and guidance concerning students with disabilities in charter schools. Education also provided technical comments, which have been incorporated in the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Education. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7215 or scottg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This appendix discusses our methodology for examining enrollment levels of students with disabilities in charter schools and traditional public schools, the types of services charter schools provide, and the U.S. Department of Education’s (Education) role in ensuring students with disabilities’ access. The work was framed around three questions: (1) How do enrollment levels of students with disabilities in charter schools and traditional public schools compare, and what is known about the factors that may contribute to any differences? (2) How do charter schools reach out to students with disabilities and what special education services do charter schools provide? (3) What roles do the U.S. Department of Education, state educational agencies (SEA), and other entities that oversee charter schools play in ensuring students with disabilities’ access to charter schools? To compare enrollment levels of students with disabilities in charter schools and traditional public schools, we examined school-level data on counts of students with disabilities for those 41 states with operating charter schools in school years 2008-2009 and 2009-2010 only. To accurately compare enrollment levels, we did not include data for those 10 states without operating charter schools in our analysis. We conducted an analysis of the data at the aggregate level, as well as at the state level, since the aggregate analysis may mask differences in enrollment levels. 
To complement the aggregate analysis, we examined how charter schools reach out to students with disabilities and the types of services charter schools provide in selected states, and we interviewed the relevant oversight agencies. To address the questions, we used several sources of data, including data for school years 2008-2009 and 2009-2010, the most recent data available at the time, from a custom data file provided to us by Education, which includes counts of students with disabilities at the school level; site visit interviews with officials from charter schools and school districts in three states selected on the basis of the number of charter schools in each state, a mix in local educational agency (LEA) status, and geographic diversity; and interviews with Education, Department of Justice, and SEA officials, and charter school authorizers. We also interviewed representatives of state and local charter school organizations and organizations representing parents of students with disabilities about their perspective on students with disabilities’ access to charter schools. Before deciding to use the data provided by Education, we conducted a data reliability assessment. We assessed the reliability of the data file that Education provided by (1) performing electronic data testing for obvious errors in accuracy and completeness, (2) reviewing existing information about the data and the system that produced the data, and (3) interviewing agency officials knowledgeable about these data. We determined that the data were sufficiently reliable for the purposes of this report. We discuss our assessment procedures and steps we took to mitigate any data limitations in more detail below, as part of the methodology for determining enrollment levels of students with disabilities in charter schools and traditional public schools.
We conducted descriptive analyses of the students with disabilities data, a qualitative analysis of the site visit data, and a synthesis of the interviews with federal officials, SEA officials, and charter school authorizers, in addition to reviewing relevant federal laws and regulations. To obtain an alternative perspective, we also interviewed organizations representing charter schools and parents of students with disabilities in the communities of our site visit locations. This study was not intended to determine charter schools’ compliance with applicable federal requirements for educating students with disabilities. To compare enrollment levels of students with disabilities in charter schools and traditional public schools, we analyzed data for school years 2008-2009 and 2009-2010, the most recent data available at the time, from a custom data file provided to us by Education. To prepare the file, Education analysts extracted the data elements we specified from the department’s large-scale EDFacts data system. The custom data file includes counts of students with disabilities at the school-level, which are reported to EDFacts by SEAs through Education’s Data Exchange Network (EDEN) Submission System. This custom data file also includes the number of students with disabilities, aged 6-21, served both in charter schools and traditional public schools, disability type, the educational environment in which students with disabilities receive services, and whether each school is its own local educational agency (LEA) or part of a larger LEA. While we received data for school years 2008-2009 and 2009-2010, we decided to focus our analysis on data from school year 2009-2010 because states were required to submit more school-level information in 2009-2010 than in 2008-2009, and because we could not establish trends or patterns by analyzing only 2 years of data. 
We were able to distinguish charter schools from traditional public schools using the charter school indicator for each school included in the custom data file. We use the term “traditional public school” in order to distinguish between charter schools and other types of public schools included in the custom data file. For purposes of our analysis, traditional public schools include regular schools, special education schools, vocational education schools, alternative or other schools, and reportable programs. Charter schools may also be vocational schools or special education schools, for example, but we did not include school type variations as a variable in our analyses. The custom data file provided by Education includes counts of children who received special education and related services under the Individuals with Disabilities Education Act (IDEA) according to an Individualized Education Program (IEP), Individual Family Service Plan, or services plan. The data file contains an educational environment variable which provides more detail on the setting in which students receive special education and related services. The variable includes several response categories in addition to a regular classroom setting. For example, a small percentage of students with disabilities included in the custom data file were placed in settings other than a regular classroom such as a correctional facility, a residential facility, or a separate school. In addition, a very small percentage of students included in the custom data file were not “enrolled” in either a traditional public school or a charter school, but were homebound or in hospitals or were parentally placed in private schools. However, students in these types of settings may receive special education services from a traditional public school district or charter school LEA and may be included in a school’s student count. 
For example, in some states, parentally-placed students in private schools who are also receiving special education services through a regular public school are included in the child count for that public school by the LEA. This is done to avoid duplicating counts of students with disabilities who may receive special education services from more than one school. In order to calculate the total number of students enrolled in charter schools and traditional public schools, we obtained all schools’ total enrollment for school years 2008-2009 and 2009-2010 from Education’s Common Core of Data (CCD) and matched this information electronically to each of the schools in the custom data file, because the custom data file provides school-level counts of students with disabilities only, not total enrollment counts. In those instances where there was no match in CCD (697 cases), we excluded those schools from our analysis. Schools categorized as closed, inactive, or future schools, as well as charter schools with an enrollment level of zero (3,106 cases), were also excluded from our analysis. Matching schools’ total enrollment numbers from CCD to each of the schools in the custom data file allowed us to arrive at the total number of students enrolled at each individual school included in our analysis, as well as the total number of students enrolled in all charter schools and traditional public schools for those 41 states with operating charter schools. In some states, charter schools that are their own local educational agency (LEA) may operate more than one school or campus, often serving different grade levels. In our custom data file, some charter school LEAs operate more than one charter school, and schools within these charter school LEAs share the same LEA identifier. However, each school or campus within the LEA possesses a unique school identifier (see app. II for more information on charter schools’ LEA status). 
For purposes of our analysis, each campus with a unique school identifier counts as one school. For most of our analyses, the unit of analysis was students, rather than schools. We calculated the percentage of students with disabilities enrolled in charter schools and traditional public schools by adding the school-level counts of students with disabilities in charter schools and traditional public schools from the custom data file and by dividing by the total number of students enrolled in all charter schools and traditional public schools, respectively, using enrollment data from CCD. We also conducted additional analyses at the aggregate level based on cross-tabulations using the number of students with disabilities and variables such as disability type and educational environment. In addition to the aggregate analysis on students with disabilities, we also analyzed enrollment levels of students with disabilities at the state level for those 41 states with operating charter schools in school year 2009-2010. According to technical notes provided by Education, 27 states operated fewer than 100 charter schools. The availability and quality of the data in our custom data file vary by state. For example, some states that operated charter schools did not submit school-level data to Education on students with disabilities. In addition, while the percentages shown in figure 2 of the report were calculated using school-level data on students with disabilities, aggregations at the school level do not always equal the aggregations at the LEA and state levels. For example, when states submit annual data on students with disabilities to Education, they are not required to submit school-level data for children with disabilities who are homebound or in hospitals, or for those students with disabilities who are parentally placed in private schools.
Therefore, in the custom data file, for those states that did not submit school-level data for children in these educational settings, total counts of students with disabilities at the school level were less than total counts at the LEA and state levels. For schools in the 41 states with operating charter schools in school year 2009-2010, data on counts of students with disabilities at the school-level were missing for 784 out of 4,895 charter schools (16 percent) and for 5,998 out of 80,671 traditional public schools (7 percent). Missing data represent both those schools that did not enroll any students with disabilities and therefore were not required to report information, as well as any schools that may have enrolled students with disabilities, but did not report the data. We were not able to distinguish between the two types of missing data. Tennessee and Utah—two states with operating charter schools— reported data on students with disabilities at the district and state levels, but did not report data on counts of students with disabilities at the school-level. Because our analysis was based on total counts of school- level data, data on students with disabilities in charter schools and traditional public schools were missing for these two states. Missing data for these two states combined represent 94 of the 784 charter schools with missing data, and 2,609 of the 5,998 traditional public schools with missing data. Because school-level data on counts of students with disabilities were missing for Tennessee and Utah, when calculating the percentages of students with disabilities in all charter schools and traditional public schools, we excluded total student enrollment numbers for charter schools and traditional public schools in these two states from our denominator when dividing by the total number of students enrolled in charter schools and traditional public schools. 
Similarly, for school year 2008-2009, we excluded total enrollment numbers for charter schools and traditional public schools in the District of Columbia, Mississippi, Rhode Island, and Tennessee because school-level data on counts of students with disabilities were missing. In reporting this information, we paid particular attention to tabulations based on small cell sizes and to cross-tabulations of the same data by other variables, so as to prevent direct or indirect disclosure of information that would allow the identification of particular students or schools. To prevent the potential for identifying personal information from the EDFacts custom data file, we present only data categories with a count of 10 or greater. If the number of cases was less than 10, the data were either suppressed or collapsed with other categories to create a count of 10 or greater. In addition to analyzing data on students with disabilities in charter schools and traditional public schools by disability type and educational environment, we also attempted to analyze the data at the metropolitan level and to include charter school LEA status as a variable in our cross-tabulations. However, data limitations and design issues prevented us from including findings at the metropolitan level and on charter schools’ LEA status in our report. For more information, see appendix II. To determine some of the factors that may contribute to differences in enrollment levels, we relied on conversations with representatives of charter school organizations and researchers, information learned during our site visits to charter schools and districts in three states, interviews with federal and state officials, and existing research on charter schools. We also interviewed individuals familiar with available research on the topic of students with disabilities in charter schools and identified research through these sources. 
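The small-cell disclosure-avoidance rule described above (suppress or collapse any category with a count under 10) can be illustrated with a short sketch. The category names and counts are hypothetical, and collapsing small categories into a combined category is one plausible reading of the rule as stated.

```python
def suppress_small_cells(counts, threshold=10, other_label="Other"):
    """Apply a simple disclosure-avoidance rule: categories at or above
    the threshold are reported as-is; categories below it are collapsed
    into a combined category, which is itself suppressed (dropped) if it
    still falls below the threshold."""
    kept = {k: v for k, v in counts.items() if v >= threshold}
    small = sum(v for v in counts.values() if v < threshold)
    if small >= threshold:
        kept[other_label] = small
    # A small remainder that cannot reach the threshold is suppressed.
    return kept

# Hypothetical disability-type counts for one school
print(suppress_small_cells({"Specific learning disability": 42,
                            "Speech or language impairment": 15,
                            "Visual impairment": 4,
                            "Orthopedic impairment": 7}))
```

Here the two small categories (4 and 7) combine to 11, so they are reported together rather than individually, preventing identification of individual students.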
For several of the factors cited in this report, much of the research we reviewed and information we received was anecdotal, and information on factors contributing to differences in enrollment levels is inconclusive. For those studies with quantitative analyses of students with disabilities in charter schools, we did not assess each study’s methodological quality and therefore cannot confirm the reliability of these data. To examine how charter schools reach out to students with disabilities, the types of services charter schools provide, and any challenges they may face in doing so, we conducted site visits to a major metropolitan area in each of three states. We selected these locations on the basis of the number of charter schools in the state, a mix of LEA statuses, and geographic diversity. Characteristics of the sites visited are summarized below. During the site visits, we interviewed officials from charter schools to obtain information about the special education services the charter school provides; the educational environment in which services are provided; challenges faced in providing services; and the charter school’s LEA status. We compared responses about LEA status and services provided to determine whether LEA status is related to the types of services charter schools offer to students with disabilities. We also asked questions about outreach strategies, which provided us with information about whether schools are actively seeking to enroll students with disabilities. The findings of our analysis cannot be generalized to the charter school population or to all states with operating charter schools. To determine the role Education and other organizations play in ensuring students with disabilities’ access to charter schools, we reviewed relevant federal laws and regulations and interviewed Education, Department of Justice, and SEA officials, as well as charter school authorizers. 
At Education, we interviewed representatives from the Office of Special Education and Rehabilitative Services (OSERS), the Office for Civil Rights (OCR), the Office of Innovation and Improvement (OII), and the Office of Elementary and Secondary Education (OESE) regarding their responsibilities for oversight of states, school districts, and charter schools. Open-ended questions were used to guide the discussions, and the topics included policy or guidance concerning enrollment of students with disabilities in charter schools; collaboration with other Education offices or Justice’s Civil Rights Division in providing guidance to charter schools about enrollment of students with disabilities; any assistance provided to charter schools to pool resources for serving students with more severe disabilities; any assistance provided to states concerning their monitoring of charter schools’ implementation of IDEA; and any research sponsored or supported concerning students with disabilities and charter schools. We also interviewed representatives of state and local charter school organizations and organizations representing parents of students with disabilities about their perspectives on students with disabilities’ access to charter schools. In addition to conducting analyses at the aggregate level, we also attempted to analyze the data at the metropolitan level and to include charter school local educational agency (LEA) status as a variable in our cross-tabulations. However, data limitations and design issues prevented us from including findings at the metropolitan level and on charter schools’ LEA status in our report. 
Because charter school structure and policies vary across states, because decisions about the placement of students with disabilities in charter schools, traditional public schools, or a separate facility of some type are made at the school district level, and because placement decisions vary according to students’ needs, aggregated data may mask differences in enrollment levels of students with disabilities in charter schools and traditional public schools at the metropolitan level. Therefore, in addition to an aggregate and state-level analysis for the 41 states with operating charter schools, we also attempted to analyze counts of students with disabilities at the school level for selected metropolitan areas. However, several factors prevented us from conducting this type of analysis. Some of the metropolitan areas we considered were missing data on students with disabilities, while geographical issues presented challenges in other areas. Specifically, in states where charter schools are their own LEA, it was not always clear where the charter schools were physically located in the metropolitan area, and therefore difficult to determine which traditional public school district should serve as the appropriate basis of comparison. This is especially true for charter schools located in large metropolitan cities with more than one school district. In addition, charter schools’ service areas are not always as well defined as the boundaries for traditional public school districts, and charter schools may enroll students from different school districts across the entire metropolitan area, which also complicates designing this type of data analysis. We did, however, conduct an exploratory analysis of enrollment levels of students with disabilities in charter schools and traditional public schools for one metropolitan area. For this particular area, all of the charter schools are part of a larger LEA. 
To protect the privacy of students with disabilities, we have not disclosed the name of the metropolitan area. Results from our analysis showed that the percentage of students with disabilities in charter schools was lower than in traditional public schools. However, these results cannot be generalized to other metropolitan areas, and had we been able to conduct this type of analysis for several different locations based on variation in LEA structure and geographic location, our analysis may have produced mixed results. Charter school experts we spoke with also indicated that charter schools’ LEA status may affect enrollment levels of students with disabilities in charter schools because charter schools that are their own LEA have different responsibilities for serving students with disabilities than charter schools that are part of a traditional public school district. For example, traditional public school districts oversee the placement of students with disabilities in charter schools that are part of the school district and are often responsible for providing special education services for those charter schools, whereas charter schools that are their own LEA are legally responsible for providing or securing special education services themselves. According to an Education official, in addition to satisfying any Individualized Education Program eligibility requirements, for those charter schools that are their own LEA, the school also assumes the responsibility of enforcing least restrictive environment service provision requirements for students with disabilities, as well as acting as the responsible party during any due process hearings. Therefore, we also attempted to conduct an analysis including charter schools’ LEA status as a variable in our cross-tabulations to see how enrollment levels of students with disabilities may differ within the charter school population. 
However, several limitations, discussed below, prevented us from including findings on LEA status in this report. Using the LEA identifier from the EDFacts custom data file, we were able to identify charter schools that are part of a larger, traditional public school district, as well as those individual charter schools that are their own LEA. However, even though a charter school may be its own LEA, depending on state law, the school may be part of a larger district for purposes of the Individuals with Disabilities Education Act (IDEA). Therefore, any type of analysis including charter schools’ LEA status may not necessarily provide meaningful insight into who is responsible for providing special education services or why enrollment levels of students with disabilities might differ in charter school LEAs and charter schools within a district. Furthermore, some undetermined proportion of charter school LEAs in our analysis operated more than one charter school or campus during school year 2009-2010. In our EDFacts custom data file, in some states multiple charter schools or campuses in these multicampus arrangements share the same LEA identifier, which makes it difficult to assign LEA status to each individual school or campus. While we learned that some states equate one LEA with one charter, we were not able to determine from the data whether these multicampus arrangements operated under one or more charters. Therefore, we could not determine whether these arrangements should count as one or more than one LEA. For the most part, we found that traditional public schools and charter schools served a similar distribution of students by disability type. 
More than 70 percent of students with disabilities in traditional public schools and charter schools had disabilities such as a specific learning disability, a speech or language impairment, or other health impairment, and both types of schools enrolled lower percentages of students with hearing, orthopedic, or visual impairments, for example (see fig. 7). However, when comparing the distribution of students with certain disabilities, such as students with an emotional disturbance or a specific learning disability, the percentage was higher in charter schools than in traditional public schools. Sherri Doughty, Assistant Director; Sara Edmondson, Analyst-in-Charge; Meredith Moore; Jason Palmer; Susannah Compton; Luann Moy; Ying Long; Amy Sweet; Sheila McCoy; James Rebbe; and James Bennett also made significant contributions to this report.
While the number of charter schools is growing rapidly, questions have been raised about whether charter schools are appropriately serving students with disabilities. GAO was asked: (1) How do enrollment levels of students with disabilities in charter schools and traditional public schools compare, and what is known about the factors that may contribute to any differences? (2) How do charter schools reach out to students with disabilities and what special education services do charter schools provide? (3) What role do Education, state educational agencies, and other entities that oversee charter schools play in ensuring students with disabilities have access to charter schools? GAO analyzed federal data on the number and characteristics of students with disabilities; visited charter schools and school districts in three states selected on the basis of the number of charter schools in the state, among other things; and interviewed representatives of federal, state, and other agencies that oversee charter schools. Charter schools enrolled a lower percentage of students with disabilities than traditional public schools, but little is known about the factors contributing to these differences. In school year 2009-2010, the most recent year for which data were available at the time of our review, approximately 11 percent of students enrolled in traditional public schools were students with disabilities, compared to about 8 percent of students enrolled in charter schools. GAO also found that, relative to traditional public schools, the proportion of charter schools that enrolled high percentages of students with disabilities was lower overall. Specifically, students with disabilities represented 8 to 12 percent of all students at 23 percent of charter schools, compared to 34 percent of traditional public schools. However, when compared to traditional public schools, a higher percentage of charter schools enrolled more than 20 percent of students with disabilities. 
Several factors may help explain why enrollment levels of students with disabilities in charter schools and traditional public schools differ, but the information is anecdotal. For example, charter schools are schools of choice, so enrollment levels may differ because fewer parents of students with disabilities choose to enroll their children in charter schools. In addition, some charter schools may be discouraging students with disabilities from enrolling. Further, in certain instances, traditional public school districts play a role in the placement of students with disabilities in charter schools. In these instances, while charter schools participate in the placement process, they do not always make the final placement decisions for students with disabilities. Finally, charter schools’ resources may be constrained, making it difficult to meet the needs of students with more severe disabilities. Most of the 13 charter schools GAO visited publicized and offered special education services, but faced challenges serving students with severe disabilities. Most charter school officials said they publicized the availability of special education services in several ways, including distributing fliers and placing ads in local newspapers. Many charter schools GAO visited also reported tailoring special education services to individuals’ needs, but faced challenges serving students with severe disabilities due to insufficient resources. About half of the charter school officials GAO interviewed cited insufficient resources, including limited space, as a challenge. The U.S. Department of Education’s (Education) Office for Civil Rights has undertaken two compliance reviews related to charter schools’ recruitment and admission of students with disabilities in three states, but has not issued recent guidance covering admission practices in detail, nor has Education conducted recent research about factors affecting lower enrollment in charter schools. 
The three states GAO visited already have taken steps to monitor charter schools’ admission practices. In addition, officials in these three states reported prohibiting disability-related questions on charter school admission forms, in part to protect students with disabilities’ access. GAO recommends that the Secretary of Education take measures to help charter schools recognize practices that may affect enrollment of students with disabilities by updating existing guidance and conducting additional fact finding and research to identify factors affecting enrollment levels of these students in charter schools. Education agreed with our recommendations.
For fiscal year 2009, we identified 47 federally funded employment and training programs administered across nine agencies, primarily the Departments of Labor, Education, and Health and Human Services (HHS) (for a list of programs and agencies, see appendix I). These programs reported spending approximately $18 billion on employment and training services in fiscal year 2009. Seven programs accounted for about three-fourths of this spending, including the WIA Adult, Dislocated Worker, and Youth programs, which spent nearly $6 billion on employment and training services (see table 1). Most participants received employment and training services through one of two programs: Employment Service/Wagner-Peyser Funded Activities (Employment Service) and WIA Adult. Together, these two programs reported serving over 18 million individuals, or about 77 percent of the total number of participants served across all programs. Almost all programs overlap with at least one other program, but differences may exist in eligibility, objectives, and service delivery. Forty-four of the 47 programs, which include broad multipurpose block grants, overlap with at least one other program, in that they provide at least one similar service to a similar population. Some of these overlapping programs serve multiple population groups, while others target specific populations, and some programs require participants to be economically disadvantaged. The target populations being served by the most programs are Native Americans, veterans, and youth. For example, all 8 programs that target Native Americans provide seven similar types of employment and training services. However, some individuals within a population group may be eligible for one program but not another, because program eligibility criteria differ. 
One of the programs targeting Native Americans, for example, serves only disabled Native Americans residing on or near a federal or state reservation, and another program serves only Native Hawaiians. Some efforts have been made to address overlap in programs and services. Officials from 27 of the 47 programs reported that their agencies have coordinated efforts with other federal agencies that provide similar services to similar populations. For example, Labor and HHS issued a joint letter encouraging state-administered youth programs to partner together using funds under the American Recovery and Reinvestment Act of 2009 (Recovery Act) to promote subsidized employment opportunities. In addition, an official from the Department of the Interior reported that the agency works with Labor and HHS to coordinate programs for Native Americans. Under law, Native American tribes are allowed significant flexibility to combine funding from multiple programs. Moreover, as part of its proposed WIA reforms, the Administration is proposing consolidating 4 employment and training programs administered by Education into 1 program. The Administration also proposes consolidating Education’s Career and Technical Education – Basic Grants to States and Tech-Prep Education programs, at the same time reducing program funding. In addition, the budget proposal would transfer the Senior Community Service Employment Program from Labor to HHS. Three of the largest programs maintain separate administrative structures to provide some of the same services. The Temporary Assistance for Needy Families (TANF), Employment Service, and WIA Adult programs provide some of the same employment and training services—such as job search and job referral services—to low-income individuals, although there are differences between the programs (see figure 1). 
The TANF program serves low-income families with children, while the Employment Service and WIA Adult programs serve all adults, including low-income individuals. All three programs share a common goal of helping individuals secure employment, and the TANF and WIA Adult programs also aim to reduce welfare dependency. However, employment is only one aspect of the TANF program, which also has three other broad social service goals: to assist needy families so that children can generally be cared for in their own homes, to reduce and prevent out-of-wedlock pregnancies, and to encourage the formation and maintenance of two-parent families. As a result, TANF provides a wide range of other services beyond employment and training, including cash assistance. Although the extent to which individuals receive the same employment and training services from TANF, the Employment Service, and WIA Adult is unknown, the programs maintain separate administrative structures to provide some of the same services to low-income individuals. Data limitations make it difficult to assess duplication of services, but Labor officials estimate that in program year 2008 approximately 4.5 percent of all WIA Adult participants who received training—about 4,500 of the nearly 100,000 participants who exited the program—were also receiving TANF. However, it is unclear whether the WIA Adult participants who self-identify as TANF recipients have received TANF employment and training services. Nonetheless, the three programs maintain separate administrative structures. At the federal level, the TANF program is administered by HHS, and the Employment Service and WIA Adult programs are administered by Labor. At the state level, the TANF program is typically administered by state human services or welfare agencies, while the other two programs are typically administered by state workforce agencies. 
At the local level, Employment Service and WIA Adult services are generally provided through the one-stop centers, while TANF employment and training services may be administered through the one-stop or through other structures. Federal agency officials acknowledged that greater administrative efficiencies could be achieved in delivering these services, but also said that other factors, such as the proximity of services to clients, could warrant having multiple entities providing the same services. Congress passed WIA partly in response to concerns about fragmentation and inefficiencies in federal employment and training programs. WIA authorized several employment and training programs—including Job Corps and programs for Native Americans, migrant and seasonal farmworkers, and veterans—as well as the Adult Education and Literacy program. WIA replaced the Job Training Partnership Act (JTPA) programs for economically disadvantaged adults and youths and dislocated workers with three new programs—WIA Adult, WIA Dislocated Worker, and WIA Youth. The Adult and Dislocated Worker programs provide three tiers, or levels, of service: core, intensive, and training. Core services include basic services such as job search assistance and labor market information, and they may be self-service in nature. Intensive services may include such activities as comprehensive assessment and case management—activities that require greater staff involvement. Training services may include occupational skills or on-the-job training. Beyond authorizing these programs, WIA also established one-stop centers in all local areas and mandated that many federal employment and training programs provide services through the centers. Under WIA, sixteen different categories of programs, administered by four federal agencies, must provide services through the one-stop system, according to Labor officials. 
Thirteen of these categories include programs that meet our definition of an employment and training program, and three categories do not, but offer other services to jobseekers who need them (see figure 2). These thirteen program categories represent about 40 percent of the federal appropriations for employment and training programs in fiscal year 2010. Figure 2. Categories of Programs Required to Provide Services Through the One-Stop System and Related Federal Agencies. One-stop centers serve as the key access point for a range of services that help unemployed workers re-enter the workforce—such as job search assistance, skill assessment and case management, occupational skills and on-the-job training, basic education and literacy training, as well as access to Unemployment Insurance (UI) benefits and other supportive services—and they also assist employers in finding workers. Any person visiting a one-stop center may look for a job, receive career development services, and gain access to a range of vocational education programs. In our 2007 study, we found that a typical one-stop center in many states offered services for 8 or 9 required programs on-site, and one state offered services for 16 required programs on-site. In addition to required programs, one-stop centers have the flexibility to include other, optional programs in the one-stop system, such as TANF, the Supplemental Nutrition Assistance Program (SNAP) Employment and Training Program, or other community-based programs, which helps them better meet specific state and local workforce development needs. 
The Dayton, Ohio one-stop center, for example, boasts over 40 programs on-site at the 8-1/2-acre facility, including an organization that provides free business attire to job seekers who need it, an alternative high school program that assists students in obtaining a diploma, and organizations providing parenting and self-sufficiency classes. Under WIA, services may also be provided at affiliated sites—designated locations that provide access to at least one employment and training program. While WIA requires certain programs to provide services through the one-stop system, it does not provide additional funds to operate one-stop systems and support one-stop infrastructure. As a result, required programs are expected to share the costs of developing and operating one-stop centers. In 2007, we reported that WIA programs and the Employment Service program were the largest funding sources states used to support the infrastructure—or nonpersonnel costs—of their comprehensive one-stop centers. For program year 2005, of the 48 states that could provide funding information, 23 states identified WIA programs as the primary funding source and 19 states reported it was the Employment Service program. In addition, 27 states reported using TANF funds to pay for part of their one-stop center infrastructure costs, and 3 states identified TANF as the primary funding source. In 2007, TANF was on-site at a typical one-stop in 30 states. One-stop centers required under WIA provide an opportunity for a broad array of federal employment and training programs—both required and optional programs—to coordinate their services and avoid duplication. Although WIA does not require that programs be colocated within the one-stop center, this is one option that programs may use to provide services within the one-stop structure. 
Labor’s policy is to encourage colocation of all required programs to the extent possible; however, officials acknowledged that colocation is one of multiple means for achieving service integration. We previously reported that colocating services can result in improved communication among programs, improved delivery of services for clients, and elimination of duplication. While colocation does not guarantee efficiency improvements, it affords the potential for sharing resources and cross-training staff, and may lead, in some cases, to the consolidation of administrative systems, such as information technology systems. Our earlier study of promising one-stop practices found that the centers nominated as exemplary did just that—they cross-trained program staff, consolidated case management and intake procedures across multiple programs, and developed shared data systems. Other types of linkages between programs, such as electronic linkages or referrals, may not result in the same types of efficiency improvements, but they may still present opportunities to streamline services. Consolidating administrative structures and colocating services may increase efficiencies, but implementation could pose challenges. Florida, Texas, and Utah have consolidated their workforce and welfare agencies, and officials said that this reduced costs and improved the quality of services for participants, but they could not provide a dollar figure for cost savings. Even when states consolidate their agencies, they must still follow separate requirements for individual programs. With regard to colocating services, WIA Adult and the Employment Service are generally colocated in one-stop centers, but TANF employment and training services are colocated in one-stops to a lesser extent. 
Efforts to increase colocation could prove challenging due to issues such as limited available office space, differences in client needs and the programs’ client service philosophies, and the need for programs to help fund the operating costs of the one-stop centers. While states and localities have undertaken some potentially promising initiatives to achieve greater administrative efficiencies, little information is available about the strategies and results of these initiatives, so it is unclear to what extent practices in these states could serve as models for others. Moreover, little is known about the incentives states and localities have to undertake such initiatives and whether additional incentives may be needed. We recently recommended that the Secretaries of Labor and HHS work together to develop and disseminate information that could inform such efforts, including information on state initiatives to consolidate program administrative structures and state and local efforts to colocate additional programs at one-stop centers. As part of this effort, we recommended that Labor and HHS examine the incentives for states and localities to undertake such initiatives and, as warranted, identify options for increasing them. In their responses, Labor and HHS agreed with our recommendations. In addition, GAO is currently examining innovative one-stop strategies to enhance collaboration with employers and economic development partners to better meet local labor market needs. To the extent that colocating services and consolidating administrative structures reduce administrative costs, funds could potentially be available to serve more clients or for other purposes. For the TANF program alone, GAO estimated that states spent about $160 million to administer employment and training services in fiscal year 2009. According to a Department of Labor official, the administrative costs for the WIA Adult program were at least $56 million in program year 2009. 
Officials told GAO they do not collect data on the administrative costs associated with the Employment Service program, as they are not a separately identifiable cost in the legislation. Labor officials said that, on average, the agency spends about $4,000 for each WIA Adult participant who receives training services. Making informed decisions about where to invest scarce resources requires information about what is working and what is not, but despite improvements, performance data do not provide a complete picture of the employment and training system. Nearly all employment and training programs track multiple outcome measures, and many programs track similar measures—most often an “entered employment” rate (the number of participants who found jobs), employment retention, and wage gain or change. We have made a number of recommendations regarding the performance management systems of the key employment and training programs, and Labor has made some progress addressing our concerns. However, two issues remain. First, only a small proportion of job seekers who receive services at one-stops are reflected in WIA outcome data. While customers who use self-services are estimated to be the largest portion of those served under WIA, job seekers who receive self-service or informational services are specifically excluded from performance calculations by the statute. Second, WIA’s performance measurement system contains no provision for measuring overall one-stop performance, relying instead on a program-by-program approach that cannot easily be used to assess the overall performance of the one-stop system. Information about the effectiveness of these programs can also help guide policymakers and program managers in making decisions about how to improve, coordinate, or consolidate existing programs. 
However, little is known about the effectiveness of employment and training programs because only 5 of the 47 programs reported that they had conducted any impact studies since 2004. Impact studies, which allow for determining the extent to which a program is causing participant outcomes, can be difficult and expensive to conduct because they take steps to examine what would have happened in the absence of a program to isolate its impact from other factors. Such studies may not be cost-effective for smaller programs, particularly in periods of tight budgets, but strategically chosen impact studies can be an important means for understanding where efficiencies can be achieved. Labor has been slow to comply with a requirement to conduct a multi-site control group evaluation of the WIA-funded programs. In 2004 and 2007, we recommended that Labor comply with the requirements of the law and conduct an impact evaluation of WIA services to better understand what services are most effective for improving outcomes. Since then, Labor has completed a nonexperimental study of the WIA Adult and Dislocated Worker programs and also has an experimental design impact study of these programs currently under way. The nonexperimental study found that the WIA Adult program had positive impacts on average earnings up to 4 years after participant entry, but noted that the magnitude of this effect could have been due to the selection of applicants with greater income prior to participation and better job prospects. The study found that the impacts for participants in the Dislocated Worker program were also positive, but smaller. Labor expects that the experimental design impact study currently underway will examine impact by funding stream, but will not be completed until June 2015. 
Understanding how well the one-stop system is reducing fragmentation through coordinated service delivery would be useful in deciding where efficiencies could be achieved, but no study has been undertaken to evaluate the effectiveness of the one-stop system approach. While a few program impact studies have been done or are underway, these studies largely take a program-by-program approach rather than focusing on understanding which approaches are most effective in streamlining service delivery and improving one-stop efficiency. In addition, Labor’s efforts to collaborate with other agencies to assess the effects of different strategies to integrate job seeker services have been limited. We previously recommended that Labor collaborate with Education, HHS, and HUD to develop a research agenda that examines the impacts of various approaches to program integration on job seeker and employer satisfaction and outcomes. Labor has committed to collaborating with other agencies and has involved them in developing inter-agency initiatives for certain targeted activities, but has not yet evaluated the effectiveness of the one-stop system. In January 2011, the President signed the GPRA Modernization Act of 2010 (GPRAMA), further amending the almost two-decades-old Government Performance and Results Act of 1993 (GPRA). Implementing provisions of the new act—such as its requirement to establish outcome-oriented goals covering a limited number of crosscutting policy areas—could play an important role in clarifying desired outcomes, addressing program performance spanning multiple organizations, and facilitating future actions to reduce unnecessary duplication, overlap, and fragmentation. Specifically, GPRAMA requires (1) disclosure of information about accuracy and validity, (2) data on crosscutting areas, and (3) quarterly reporting on priority goals on a publicly available Web site. 
Additionally, GPRAMA significantly enhances requirements for agencies to consult with Congress when establishing or adjusting governmentwide and agency goals. This information can inform deliberations on spending priorities and help re-examine the fundamental structure, operation, funding, and performance of a number of federal programs. However, to be successful, it will be important for agencies to build the analytical capacity to both use the performance information and to ensure its quality—both in terms of staff trained to do the analysis and availability of research and evaluation resources. In conclusion, removing and preventing unnecessary duplication, overlap, and fragmentation among federal employment and training programs is clearly challenging. These are difficult issues to address because they may require agencies and Congress to re-examine within and across various mission areas the fundamental structure, operation, funding, and performance of a number of long-standing federal programs and activities. Implementing provisions of GPRAMA could play an important role in clarifying desired outcomes, addressing program performance spanning multiple organizations, and facilitating future actions to reduce unnecessary duplication, overlap, and fragmentation. Sustained attention and oversight by Congress will also be critical. Our work highlights two key areas where congressional oversight could facilitate progress:
• enhancing program evaluations and performance information; and
• fostering state and local innovation in integrating services and consolidating administrative structures.
As the nation rises to meet its current fiscal challenges, GAO will continue to assist Congress and federal agencies in identifying actions needed to address these issues. Likewise, we will continue to monitor developments in the areas we have already identified. Chairman Rehberg, Ranking Member DeLauro, and Members of the Subcommittee, this completes my prepared statement. 
I would be happy to respond to any questions you may have at this time. For further information regarding this testimony, please contact me at (202) 512-7215 or sherrilla@gao.gov. Individuals making key contributions to this testimony include Dianne Blank, Caitlin Croake, Pamela Davidson, Patrick Dibattista, Alex Galuten, Andrew Nelson, Paul Schearf, and Kathleen Van Gelder.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-441T. Washington, D.C.: March 3, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
Multiple Employment and Training Programs: Providing Information on Colocating Services and Consolidating Administrative Structures Could Promote Efficiencies. GAO-11-92. Washington, D.C.: January 13, 2011.
Workforce Investment Act: Labor Has Made Progress in Addressing Areas of Concern, but More Focus Needed on Understanding What Works and What Doesn’t. GAO-09-396T. Washington, D.C.: February 26, 2009.
Workforce Development: Community Colleges and One-Stop Centers Collaborate to Meet 21st Century Workforce Needs. GAO-08-547. Washington, D.C.: May 15, 2008.
Workforce Investment Act: One-Stop System Infrastructure Continues to Evolve, but Labor Should Take Action to Require That All Employment Service Offices Are Part of the System. GAO-07-1096. Washington, D.C.: September 4, 2007.
Workforce Investment Act: Additional Actions Would Further Improve the Workforce System. GAO-07-1051T. Washington, D.C.: June 28, 2007.
Workforce Investment Act: Substantial Funds Are Used for Training, but Little is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005.
Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004.
Workforce Investment Act: Labor Actions Can Help States Improve Quality of Performance Outcome Data and Delivery of Youth Services. GAO-04-308. Washington, D.C.: February 23, 2004.
Workforce Investment Act: One-Stop Centers Implemented Strategies to Strengthen Services and Partnerships, but More Research and Information Sharing is Needed. GAO-03-725. Washington, D.C.: June 18, 2003.
Multiple Employment and Training Programs: Funding and Performance Measures for Major Programs. GAO-03-589. Washington, D.C.: April 18, 2003.
Workforce Investment Act: States’ Spending Is on Track, but Better Guidance Would Improve Financial Reporting. GAO-03-239. Washington, D.C.: November 22, 2002.
Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002.
Multiple Employment and Training Programs: Overlapping Programs Indicate Need for Closer Examination of Structure. GAO-01-71. Washington, D.C.: October 13, 2000.
Workforce Investment Act: Implementation Status and the Integration of TANF Services. GAO/T-HEHS-00-145. Washington, D.C.: June 29, 2000.
Multiple Employment Training Programs: Information Crosswalk on 163 Employment Training Programs. GAO/HEHS-95-85FS. Washington, D.C.: February 14, 1995.
Multiple Employment Training Programs: Major Overhaul Needed to Reduce Costs, Streamline the Bureaucracy, and Improve Results. GAO/T-HEHS-95-53. Washington, D.C.: January 10, 1995.
Multiple Employment Training Programs: Overlap Among Programs Raises Questions About Efficiency. GAO/HEHS-94-193. Washington, D.C.: July 11, 1994.
Multiple Employment Training Programs: Conflicting Requirements Underscore Need for Change. GAO/T-HEHS-94-120. Washington, D.C.: March 10, 1994.
Multiple Employment and Training Programs: Major Overhaul is Needed. GAO/T-HEHS-94-109. Washington, D.C.: March 3, 1994.
Multiple Employment Training Programs: Overlapping Programs Can Add Unnecessary Administrative Costs. GAO/HEHS-94-80. Washington, D.C.: January 28, 1994.
Multiple Employment Training Programs: Conflicting Requirements Hamper Delivery of Services. GAO/HEHS-94-78. Washington, D.C.: January 28, 1994.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the findings from our recent work on fragmentation, overlap, and potential duplication in federally funded employment and training programs and our prior work on the Workforce Investment Act of 1998 (WIA). We recently issued two reports addressing fragmentation, overlap, and potential duplication in federal programs--one that outlined opportunities to reduce potential duplication across a wide range of federal programs and another that focused more specifically on employment and training programs. This work and our larger body of work in the area will help government policymakers address the rapidly building fiscal pressures facing our nation's government--pressures that stem, in part, from our mounting debt and sustained high unemployment. Our work to examine fragmentation, overlap, and potential duplication in employment and training programs has a long history. As early as the 1990s we issued a series of reports that raised questions about the efficiency and effectiveness of the federally funded employment and training system, and we concluded that a structural overhaul and consolidation of these programs was needed. Partly in response to these concerns, Congress passed WIA. The purpose of WIA, in part, was to transform the fragmented employment and training system into a coherent one, establishing a one-stop system that serves the needs of job seekers and employers. Since WIA was enacted, we have issued numerous reports that have included recommendations regarding many aspects of WIA, such as performance measures and accountability, one-stop centers, and training, among other topics. GAO's work has continued to find fragmentation, overlap, and potential duplication in employment and training programs. The area is characterized by a large number of programs with similar goals, beneficiaries, and allowable activities that are administered by multiple federal agencies. 
Fragmentation of programs exists when programs serve the same broad area of national need but are administered across different federal agencies or offices. Program overlap exists when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. Overlap and fragmentation among government programs or activities can be harbingers of unnecessary duplication. Given the challenges associated with fragmentation, overlap, and potential duplication, careful, thoughtful actions will be needed to address these issues. This testimony discusses (1) what GAO has found regarding fragmentation, overlap, and duplication in federal employment and training programs, (2) the role that WIA activities can play in addressing these conditions, and (3) what additional information could help Congress minimize fragmentation, overlap, and duplication among these programs. In summary, for fiscal year 2009, GAO identified 47 federally funded employment and training programs administered across nine agencies. Almost all of these programs overlap with at least one other program in that they provide at least one similar service to a similar population, but differences may exist in eligibility, objectives, and service delivery. WIA's structure provides the opportunity to reduce overlap and duplication because it requires that several of these programs provide services through the one-stop system, but they need not be on-site. Increasing colocation at one-stop centers, as well as consolidating state workforce and welfare administrative agencies could increase efficiencies, and several states and localities have undertaken such initiatives. To facilitate further progress in increasing administrative efficiencies, we recommended that the Secretaries of Labor and Health and Human Services (HHS) work together to develop and disseminate information about such efforts. 
Sustained congressional oversight is pivotal in addressing issues of fragmentation, overlap, and potential duplication. Specifically, Congress could explore opportunities to enhance program evaluations and performance information and foster state and local innovation in integrating services and consolidating administrative structures.
Mr. Chairman and Members of the Subcommittee: I am pleased to be here today to share our observations on the principal methodological similarities and differences of three reports on bankruptcy debtors’ ability to pay their debts. These reports endeavor to address an important public policy issue—whether some proportion of debtors who file for personal bankruptcy have sufficient income, after expenses, to pay a “substantial” portion of their debts. The three reports were issued by the Credit Research Center (Credit Center), Ernst & Young, and Creighton University/American Bankruptcy Institute (ABI). Last year we reported on our analyses of the Credit Center and Ernst & Young reports. It is important to emphasize that our review of the ABI study is still underway. Consequently, it is too early for us to discuss the results of our analysis of the ABI report. Our objective in reviewing each of these reports has been the same—to assess the strengths and limitations, if any, of the report’s assumptions and methodology for determining debtors’ ability to pay and the amount of debt that debtors could potentially repay. We have used the same criteria to review each report. The reports have some characteristics in common, such as the use of debtor-prepared income, expense, and debt schedules; the assumption that the debtor’s income would remain stable over a 5-year repayment period; and the assumption that all debtors who entered a 5-year repayment plan would successfully complete the plans—an assumption that historical experience suggests is unlikely. However, the reports have some methodological differences, including different (1) groupings of the types of debts that could be repaid; (2) gross income thresholds used to identify those debtors whose repayment capacity was analyzed; (3) assumptions about debtors’ allowable living expenses; (4) treatment of student loans that debtors had categorized as unsecured priority debts; and (5) assumptions about administrative expenses. The remainder of my statement discusses in greater detail the similarities and differences in the findings and methodologies of the three reports. A summary of these similarities and differences is found in attachment I. 
Debtors who file for personal bankruptcy usually file under chapter 7 or chapter 13 of the bankruptcy code. Generally, debtors who file under chapter 7 of the bankruptcy code seek a discharge of their eligible dischargeable debts. Debtors who file under chapter 7 may voluntarily reaffirm—that is, voluntarily agree to repay—any of their eligible dischargeable debts. Debtors who file under chapter 13 submit a repayment plan, which must be confirmed by the bankruptcy court, for paying all or a portion of their debts over a period not to exceed 3 years unless for cause the court approved a period not to exceed 5 years. Personal bankruptcy filings have set new records in each of the past 3 years, although there is little agreement on the causes for such high bankruptcy filings in a period of relatively low unemployment, low inflation, and steady economic growth. Nor is there agreement on (1) the number of debtors who seek relief through the bankruptcy process who have the ability to pay at least some of their debts and (2) the amount of debt such debtors could repay. Several bills have been introduced in the 105th and 106th Congresses that would implement some form of “needs-based” bankruptcy. Each of these bills includes provisions for determining when a debtor could be required to file under chapter 13, rather than chapter 7. Currently, the debtor generally determines whether to file under chapter 7 or 13. Generally, these bills would establish a “needs-based” test, whose specific provisions vary among the bills. H.R. 3150, the bill used in the Ernst & Young and ABI analyses, would require a debtor whose gross monthly income met a specified income threshold to file under chapter 13 if the debtor’s net monthly income after allowable expenses was more than $50 and would be sufficient to pay 20 percent of the debtor’s unsecured nonpriority debt over a 5-year period. Debtors who did not meet these criteria would be permitted to file under chapter 7. 
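The "needs-based" test in H.R. 3150 described above reduces to a few arithmetic screens. The following sketch (purely illustrative; the function name, parameters, and the simplified, single-number treatment of allowable expenses are our own assumptions, not language from the bill, whose actual computations are more detailed) shows how the test could be applied to a hypothetical debtor:

```python
# Illustrative sketch of the H.R. 3150 "needs-based" test as summarized
# in this statement. All names and the flat expense figure are assumptions
# for illustration only.

def must_file_chapter_13(gross_monthly_income, income_threshold,
                         monthly_allowable_expenses,
                         unsecured_nonpriority_debt):
    """Return True if a hypothetical debtor would be required to file
    under chapter 13 rather than chapter 7."""
    # Screen 1: debtors below the gross monthly income threshold are
    # presumed eligible to file under chapter 7.
    if gross_monthly_income < income_threshold:
        return False
    # Screen 2: net monthly income after allowable expenses must be
    # more than $50.
    net_monthly_income = gross_monthly_income - monthly_allowable_expenses
    if net_monthly_income <= 50:
        return False
    # Screen 3: net income over a 5-year (60-month) plan must be
    # sufficient to pay at least 20 percent of unsecured nonpriority debt.
    return net_monthly_income * 60 >= 0.20 * unsecured_nonpriority_debt

# Example: $3,000 gross income against a $2,500 threshold, $2,700 in
# allowable expenses, and $30,000 of unsecured nonpriority debt leaves
# $300 per month; $300 x 60 = $18,000, which exceeds 20% of $30,000.
print(must_file_chapter_13(3000, 2500, 2700, 30000))  # True
```

As the example suggests, small changes in the expense allowance or income threshold can flip the outcome of the test, which is why, as discussed below, the reports' differing assumptions produced such different estimates.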
Under the bankruptcy code, a debtor’s debts may be grouped into three general categories for the purposes of determining creditor payment priority: (1) secured debts, for which the debtor has pledged collateral, such as home mortgage or automobile loans; (2) unsecured priority debt, such as child support, alimony, and certain taxes; and (3) unsecured nonpriority debt, such as credit card debts. In analyzing debtors’ ability to pay, the three reports have focused principally on the percentage of total unsecured nonpriority debt that debtors could potentially repay. The Credit Center, Ernst & Young, and ABI reports have each attempted to estimate (1) how many debtors who filed under chapter 7 may have had sufficient income, after expenses, to repay a “substantial” portion of their debts, and (2) what proportion of their debts could potentially be repaid. Each of the reports used to some degree data from the financial schedules that debtors file with their bankruptcy petitions. Although these schedules are the only source of the detailed data needed for an analysis of debtors’ repayment capacity, the data in the schedules are of unknown accuracy and reliability. There are no empirical studies of the accuracy and reliability of the data debtors report in their financial schedules, and the National Bankruptcy Review Commission’s report recommended that these schedules be randomly audited. Each of the reports also assumed that debtors’ income and expenses, as reported in their financial schedules, would remain unchanged over the 5-year repayment period. Historically, only about one-third of chapter 13 debtors have successfully completed their repayment plans, suggesting that for two-thirds of debtors something changed between the time the plans were confirmed by the bankruptcy court and the time the actual repayment plan was to be successfully completed. The three reports focus on the potential debt that debtors could repay should more debtors be required to file under chapter 13. 
However, should the number of debtors who file under chapter 13 increase, there would also be additional costs for bankruptcy judges and administrative support requirements that would be borne by the government. This is because bankruptcy judges would be involved in debtor screening to a greater extent than they are now, and chapter 13 cases require more judicial time than chapter 7 cases do. None of the reports estimated these additional costs, although the ABI report acknowledges that such additional costs could accompany means-testing of bankruptcy debtors. In addition, the Religious Liberty and Charitable Donation Protection Act of 1998 permits chapter 13 bankruptcy debtors to include certain charitable deductions of up to 15 percent of their annual gross income in their allowable living expenses. The implementation of this statute could affect the estimates in each of the three reports. The potential effect could be to reduce (1) the number of bankruptcy debtors who could be required under the “needs-based” tests to file under chapter 13 or (2) the amount of debt repaid to unsecured nonpriority creditors by those debtors who are required to file under chapter 13. The act was enacted after the Credit Center and Ernst & Young issued their reports. The ABI report noted the act could affect the results of debtor means-testing, but did not attempt to apply the act to its sample of debtors. The reports differed in the types of debts that they estimated debtors could repay, their sampling methods, the calendar period from which each report’s sample cases were selected, and the assumptions used to estimate debtors’ allowable living expenses and debt repayments. The ABI report classified student loans differently than the other two reports. We have not analyzed the impact these differences may have had on each report’s findings and conclusions. 
The Credit Center report estimated the percentage of chapter 7 debtors who could repay a percentage of their “nonhousing, nonpriority debt.” These debts included secured nonhousing debt and unsecured nonpriority debt. The Credit Center estimated that 30 percent of the chapter 7 debtors in its sample could repay at least 21 percent of their nonhousing, nonpriority debts, after deducting from their gross monthly income monthly mortgage payments and monthly living expenses. The Ernst & Young and ABI reports estimated the proportion of debtors who had sufficient income, after living expenses, to repay over a 5-year repayment period:
• all of their nonhousing secured debt, such as automobile loans (debtors’ payments on home mortgage debt were included in the debtors’ living expenses);
• all of their unsecured priority debts, such as back taxes, alimony, and child support (child support and alimony payments were assumed to continue for the full 5-year payment period unless otherwise noted in the debtors’ financial schedules); and
• at least 20 percent of their unsecured nonpriority debts.
The Ernst & Young and ABI reports estimated that 15 percent and 3.6 percent, respectively, of the chapter 7 debtors in their individual samples met all of these criteria. The reports also drew their samples differently; one report’s sample, for example, consisted of chapter 7 case filings from calendar year 1995 in 7 judgmentally selected districts. The Credit Center and ABI reports have one district—Northern Georgia—in common. It is possible that there are differences in each sample’s debtor characteristics that could affect each report’s estimate of debtor repayment capacity. The differences could result from the different time periods and the different sampling methods for selecting districts and filers within each district. Such differences, should they exist, could have affected each report’s estimate of the percentage of chapter 7 debtors who could potentially repay a substantial portion of their debts and how much they could repay. 
Both the Credit Center and Ernst & Young reports assumed that debtors would incur no additional debt during the 5-year repayment period. The ABI report assumed that debtors could potentially incur expenses for major repairs or replacement of automobiles during the course of the 5-year repayment plan, but that they would incur no other additional debt. The Credit Center report was completed before H.R. 3150 was introduced, and its repayment capacity analysis was not based on any specific proposed legislation. The Credit Center report analyzed the repayment capacity of all the chapter 7 debtors in its sample, regardless of their annual gross income. The Ernst & Young and ABI reports used the “needs-based” provisions of different versions of H.R. 3150 as the basis for their analysis of debtor repayment capacity. H.R. 3150 passed the House in June 1998. Under the provisions of H.R. 3150 as introduced and as it passed the House, debtors must pass three tests to be required to file under chapter 13:
• debtors must have monthly gross income that exceeds a set percentage of the national median income for households of comparable size (debtors below this threshold are presumed to be eligible to file under chapter 7);
• debtors must have income of more than $50 per month after allowable living expenses and payments on secured and unsecured priority debts; and
• debtors could repay at least 20 percent of their unsecured nonpriority debts over a 5-year period if they used this remaining income for such payments.
One report based its analysis on H.R. 3150 as introduced, and the other on the version passed by the House of Representatives. The principal effect of using the two different versions of H.R. 3150 was that each report used a different threshold of gross annual income to screen debtors for further repayment analysis. In the Ernst & Young analysis, debtors whose gross annual income was 75 percent or less of the national median income for a household of comparable size were deemed eligible for chapter 7. 
Debtors whose gross annual income was more than 75 percent of the national median household income were subject to further analysis of their repayment capacity. In the ABI report’s analysis, debtors whose gross annual income was at least 100 percent of the national median income for households of comparable size were subject to further repayment analysis. The three reports used different estimates of debtors’ allowable living expenses. The Credit Center report established its own criteria for debtors’ living expenses. Basically, the Credit Center’s analysis used the debtor’s living expenses as reported on the debtor’s schedule of estimated monthly living expenses. The Ernst & Young and ABI reports used the Internal Revenue Service’s (IRS) Financial Collection Standards, as specified in H.R. 3150. However, Ernst & Young and ABI interpreted them somewhat differently. The principal difference was for transportation expenses. Ernst & Young did not include an automobile ownership allowance for debtors who leased cars or whose cars were debt-free. ABI included an ownership allowance for leased cars and for debtors with debt-free cars. The ABI report noted that this difference in allowable transportation expenses accounted for “a substantial part” of the difference between the ABI and Ernst & Young estimates of the percentage of chapter 7 debtors who could potentially repay at least 20 percent of their unsecured nonpriority debt. ABI also deducted from the debtors’ total unsecured priority debt the value of any student loans and added the value of these loans to debtors’ total unsecured nonpriority debt. To the extent this was done, it had the effect of freeing debtor income to pay unsecured nonpriority debt. Finally, the ABI report assumed that administrative expenses, such as the trustee fee, would consume about 5.6 percent of debtors’ nonhousing payments to creditors under a 5-year repayment plan. 
The Credit Center and Ernst & Young reports assumed that none of the debtors’ payments would be used for administrative expenses, but that 100 percent of debtors’ payments would be used to pay creditors. Each of the three reports attempted to estimate the number of debtors who would be required to file under chapter 13 and the amount of debt that such debtors could potentially repay. However, the assumptions and data used in these reports lead to different estimates of debtors’ repayment capacity and require the reader to use caution in interpreting and comparing the results of each report. The actual number of chapter 7 debtors who could repay at least a portion of their nonhousing debt could be more or less than the estimates in these studies. Similarly, the amount of debt these debtors could potentially repay could also be more or less than the reports estimated. We agree that there are likely some debtors who file for bankruptcy under chapter 7 who have the financial ability to repay at least a portion of their debt, and that those who are able to repay their debts should do so. But we believe that more research is needed to verify and refine the estimates of debtors’ repayment capacity to better inform policymakers. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have.
Pursuant to a congressional request, GAO discussed the principal methodological similarities and differences of three reports on bankruptcy debtors' ability to pay their debts. The three reports were issued by the Credit Research Center, Ernst & Young, and Creighton University/American Bankruptcy Institute (ABI). GAO noted that: (1) the Credit Center report estimated that 30 percent of the chapter 7 debtors in its sample could pay at least 21 percent of their nonhousing, nonpriority debt, after deducting their mortgage debt payments and living expenses (exclusive of debt payments); (2) Ernst & Young and ABI estimated that 15 percent and 3.6 percent, respectively, of the debtors in their individual samples had sufficient income, after deducting allowable living expenses, to pay all of their nonhousing secured debts, all of their unsecured priority debts, and at least 20 percent of their unsecured nonpriority debts; (3) the reports have some characteristics in common, such as the use of debtor-prepared income, expense and debt schedules, the assumption that the debtor's income would remain stable over a 5-year repayment period, and the assumption that all debtors who entered a 5-year repayment plan would successfully complete the plans--an assumption that historical experience suggests is unlikely; (4) however, the reports have some methodological differences, including different: (a) groupings of the types of debts that could be repaid; (b) gross income thresholds used to identify those debtors whose repayment capacity was analyzed; (c) assumptions about debtors' allowable living expenses; (d) treatment of student loans that debtors had categorized as unsecured priority debts; and (e) assumptions about administrative expenses; and (5) these methodological differences contributed to the reports' different estimates of debtors' repayment capacity.
SBA’s Office of Disaster Assistance (ODA) responds to disasters and administers the Disaster Loan Program. A Presidential disaster declaration puts into motion long-term federal recovery programs, such as the Disaster Loan Program, but SBA is not a “first responder” after a disaster. Rather, local government emergency services assume that role with help from state and volunteer agencies. For catastrophic disasters, and if a governor requests it, federal resources can be mobilized through the U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA). SBA typically responds to a disaster within 3 days by sending ODA field staff to the affected area to begin providing public information about SBA’s services. Once a disaster is declared, SBA by law is authorized to make two types of disaster loans: (1) physical disaster loans and (2) economic injury disaster loans. Physical disaster loans are for the permanent rebuilding and replacement of uninsured or underinsured disaster-damaged property, including personal residences and businesses of any size. That is, SBA provides loans to cover repair costs that FEMA or other insurance has not already fully compensated or covered. Economic injury disaster loans provide small businesses, including agricultural cooperatives and private nonprofit organizations, with necessary working capital until normal operations can resume. The Act comprises 26 provisions with substantive requirements for SBA, some with specific deadlines and some requiring appropriations, and includes requirements that SBA must meet for disaster planning and response, disaster lending, and reporting. For instance, the Act includes provisions to improve SBA’s coordination with FEMA, require that the agency conduct biennial disaster simulations, create a comprehensive disaster response plan, and improve communication with the public when disaster assistance is made available. 
It includes requirements to improve ODA’s infrastructure, appoint an official to oversee the disaster planning and responsibilities of the agency, and establish reporting requirements for various reports to Congress. The Act also creates new programs, such as the Immediate Disaster Assistance Program that would provide small dollar loans immediately following a disaster and the Expedited Disaster Assistance Loan Program that would provide expedited disaster assistance to businesses. As of May 2010, SBA fully addressed requirements for 15 of 26 provisions of the Act; partially addressed 6; and took no action on 5 that are not applicable at this time (see fig. 1). For the 15 provisions SBA fully addressed, the agency’s actions included putting in place a secondary facility in Sacramento, California to process loans when the main facility in Fort Worth, Texas is unavailable, making improvements to DCMS to track and follow up with applicants, and expanding its disaster reserve staff from about 300 to more than 2,000 individuals. According to SBA and our review, 5 provisions require no action at this time because they are discretionary or additional appropriations are needed before SBA can satisfy the Act’s requirements. When we issued our report in July 2009, SBA had fully addressed 13 of 26 provisions. Since then, SBA has fully addressed two additional provisions. As we recommended and the Act requires, SBA issued an updated DRP. SBA also issued regulations on coordinating with FEMA to ensure that disaster assistance applications are submitted in a timely manner. In addition, SBA must revise the regulations annually and report on the revisions when submitting its annual report to Congress. The annual requirements associated with the provision on FEMA coordination will help SBA, FEMA, and Congress to determine whether the regulations are effective. SBA still has to take additional steps to completely address 6 provisions. 
For example, SBA officials told us that the agency has taken additional steps to address the marketing and outreach provision, including that they (1) began an ongoing dialogue with the SBDC state directors in the Gulf Coast about disseminating disaster planning and preparation information in the five most hurricane-prone states before the hurricane season, (2) detailed an SBA employee who works with the SBDCs to the Office of Entrepreneurial Development to help the agency develop a strategic approach for its disaster role, and (3) issued some public service announcements tailored to specific regions. In our 2009 report, we recommended that SBA should fulfill the region-specific marketing and outreach requirement, including making this information readily available to regional entities prior to the likely occurrence of a disaster. However, the steps recently taken by SBA have not been discussed in public documents or venues, such as in the DRP or on the SBA Web site, which would make the information more transparent and easily accessible to the public and Congress. As we reported in 2009, we consistently heard from regional entities, such as SBDCs and emergency management groups, about the need for more up-front information on SBA’s Disaster Loan Program and their expected roles and responsibilities in disaster response efforts. According to SBA officials, the agency has not yet completely addressed some provisions because to do so, the agency would have to make extensive changes to current programs or implement new programs––such as the Immediate and Expedited Disaster Assistance Programs––to satisfy requirements of the Act. These programs, which require participation of private lenders, would be designed to provide businesses with access to short-term loans while they are waiting for long-term assistance. As we reported in 2009, SBA plans to conduct pilots of these programs before fully implementing them. 
SBA officials recently told us they have established a cross-functional work group jointly chaired by officials from ODA and the Office of Capital Access to address these requirements and develop the pilots. ODA officials said they drafted regulations and received subsidy and administrative cost funding in the 2010 budget to allow them to pilot test about 600 loans under the Immediate Disaster Assistance Program (section 12084). Additionally, SBA officials told us that they performed initial outreach to lenders—such as those who have participated in their Gulf Opportunity Pilot Loan Program—to obtain their reaction to and interest in the programs. They believe such outreach will help SBA identify and address any issues that may arise and determine the viability of the loan programs. SBA officials told us that their goal is to have the pilot for the Immediate Disaster Assistance Program in place by September 2010. The Act establishes multiple new reporting requirements and while SBA has addressed most of these, the agency has not met some statutory deadlines. For example, as required by the Act and as we recommended, the agency issued its first annual report on disaster assistance in November 2009, but the report was due in November 2008. Specifically, the Act requires that SBA report annually on the total number of SBA disaster staff, major changes to the Disaster Loan Program (such as changes to technology or staff responsibilities), a description of the number and dollar amount of disaster loans made during the year, and SBA’s plans for preparing and responding to possible future disasters. In 2009, we reported that failure to produce annual reports on schedule can lead to a lack of transparency about the agency’s progress in reforming the program. The agency has had limited success in meeting nine additional provisions in the Act that have deadlines associated with them. 
The agency also has not developed a plan with expected time frames for addressing the remaining requirements. Not having an implementation plan in place for addressing the remaining requirements can lead to a lack of transparency about the agency’s Disaster Loan Program, capacity to reform the program and program improvements, as well as its ability to adequately prepare for and respond to disasters. In our 2009 report, we recommended that SBA develop an implementation plan and include milestone dates for completing implementation and any major program, resource, or other challenges the agency faces as it continues efforts to address requirements of the Act. Recently, SBA officials told us they would provide a plan or report that included milestone dates for addressing the Act’s requirements. SBA’s initial response following the 2008 Midwest floods and Hurricane Ike aligned with major components of its DRP, such as infrastructure, human capital, information technology, and communications. For example, according to SBA, following both disasters the agency used its organizational infrastructure and key staff in each of its core functions to provide disaster assistance. ODA also utilized available operational and technological support, and communications and outreach, to help ensure that the agency would be able to provide timely financial assistance to the disaster victims. Additionally, individuals affected by both disasters with whom we spoke considered the agency’s overall performance somewhat positive, but believed the disaster loan process could be improved. In May 2008, floods devastated 85 counties in Iowa (one of several states affected) and in September 2008, Hurricane Ike devastated 50 counties in Texas. SBA and SBDC officials, state and local representatives, private-entity officials, and business owners in Iowa and Texas told us that in the days immediately following the disasters, ODA staff reported to the affected areas and began providing needed disaster assistance. 
These individuals also said that SBA staff provided outreach and public information about the Disaster Loan Program; distributed application information; assigned knowledgeable customer service representatives to various Disaster and Business Recovery Centers; and helped applicants by answering questions, providing guidance, and offering one-on-one help––as outlined in SBA’s DRP. In addition, our review of SBA’s 2008 Disaster Loan Program Customer Satisfaction Survey also showed that respondents were somewhat satisfied with the assistance SBA provided during other recent disasters. However, both the individuals we interviewed and survey results indicated areas for improvement and opportunities to increase satisfaction. For example, individuals we interviewed and survey responses pointed to concerns about the amount of paperwork required to complete SBA’s disaster loan application and the timeliness of loan disbursements. Also, some business owners said they had to provide copies of 3 years of federal income tax returns, although they had signed an Internal Revenue Service (IRS) form 8821—Tax Information Authorization—which allows SBA to get tax return information directly from IRS. To address these concerns, the individuals we interviewed suggested several changes to the program, such as eliminating the requirement that business loan applicants provide copies of IRS tax records; providing partial disbursements earlier in the process; using bridge loans to help ensure disaster victims receive timely assistance; and involving SBA, SBDCs, and state and local officials in joint pre-planning and disaster preparedness efforts. Although SBA officials told us they have been improving the application process, they had not documented the improvement efforts. In addition, we found that while SBA conducts an annual customer satisfaction survey, the agency does not appear to incorporate this feedback mechanism into its formal efforts to continually improve the application process. 
Furthermore, SBA does not appear to have a formal process for addressing identified problem areas and using the information gained to improve the experience of future applicants. By establishing such a process to address identified problem areas, SBA could better demonstrate its commitment to improving the Disaster Loan Program. Because the agency has missed opportunities to further improve its Disaster Loan Program, and in particular improve the application process for future applicants, we recommended in our July 2009 report that SBA develop and implement a process to address identified problems in the disaster loan application process. In response to our recommendation, SBA cited ongoing efforts since 2005, such as the electronic loan application, and said the agency has plans to continue its improvement efforts and make them an ongoing priority. However, SBA has not provided information to us on how it would implement a formal process to address identified problem areas in the disaster loan application process. As you know, we have reported on a variety of issues related to the federal government’s response to the 2005 Gulf Coast hurricanes. As part of this committee’s efforts to assess the level and success of federal efforts to help Gulf Coast small businesses recover from the 2005 hurricanes, we are conducting work at your request that focuses on small business recovery efforts in four states impacted by Hurricanes Katrina and Rita: Alabama, Louisiana, Mississippi, and Texas. This summer, we will report to this committee on: (1) assistance small businesses in the Gulf Coast received from the SBA, the Department of Housing and Urban Development, and the Economic Development Administration; (2) federal contract funds received by small businesses; and (3) the small business economy in the Gulf Coast region. Madam Chair, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Committee may have. 
For further information on this testimony, please contact William B. Shear at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Kay Kuhlman, Assistant Director; Beth Faraguna, Alexandra Martin-Arseneau, Marc Molino, Linda Rego and Barbara Roesmann. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
After the Small Business Administration (SBA) was widely criticized for its performance following the 2005 Gulf Coast hurricanes, the agency took steps to reform its Disaster Loan Program. Congress also enacted the Small Business Disaster Response and Loan Improvements Act of 2008 (Act), which places new requirements on SBA to ensure it is prepared for catastrophic disasters. This testimony discusses (1) the extent to which SBA has addressed the Act's requirements, and (2) how SBA's response to major disasters in 2008 aligned with key components of its June 2007 Disaster Recovery Plan (DRP). In completing this statement, GAO reviewed and updated, as appropriate, the July 2009 report, Small Business Administration: Additional Steps Should Be Taken to Address Reforms to the Disaster Loan Program and Improve the Application Process for Future Disasters (GAO-09-755). In that report, GAO recommended that SBA should fulfill the Act's region-specific marketing and outreach requirements; complete its annual report to Congress; issue an updated DRP; develop an implementation plan for remaining requirements; and develop procedures to further improve the application process for the Disaster Loan Program. SBA has made some progress since GAO's July 2009 report in addressing provisions of the Act and continued attention to certain provisions will be important for sustained progress. As of May 2010, SBA met requirements for 15 of 26 provisions of the Act and partially addressed 6. Five provisions do not require any action at this time. Since July 2009 SBA has taken a number of actions. For example, SBA issued an updated DRP in November 2009. In addition, SBA issued regulations on coordinating with the Federal Emergency Management Agency on timely submission of disaster assistance applications. SBA also has taken steps to address the Act's requirements for region-specific marketing and outreach. 
For example, SBA has begun a dialogue with the Small Business Development Center state directors in the Gulf Coast about disseminating disaster planning information in the five most hurricane-prone states before the hurricane season. However, these steps have not been discussed in public documents or venues, such as in the DRP or on the SBA Web site, which would make the information more transparent and easily accessible to the public and Congress. SBA officials told GAO the agency has not yet completely addressed some provisions because the agency must make extensive changes to current programs or implement new programs. In particular, for two requirements that will involve private lenders, SBA plans to implement pilots before finalizing regulations. SBA officials recently said that they had formed a cross-functional work group and began reaching out to lenders about the planned pilots. SBA has not yet developed an implementation plan with milestone dates for addressing the remaining requirements, but recently said it would provide a plan or report that included milestone dates for addressing the Act's requirements. SBA's initial response after the 2008 Midwest floods and Hurricane Ike aligned with certain components of its initial DRP, such as using technology and outreach efforts to better ensure timely assistance. The individuals GAO interviewed and results from SBA's 2008 Disaster Loan Program Customer Satisfaction Survey provided somewhat positive feedback about SBA's performance following the disasters. However, interviewees and survey results indicated areas for improvement; in particular, both indicated that application paperwork was burdensome and that the application process needed improvement. The agency did not appear to have a formal process for identifying problems in the application process and making needed improvements. SBA officials told GAO that they have been taking steps to improve the application process. 
However, SBA has not provided information to GAO on how it would implement a formal process to address identified problem areas in the disaster loan application process.
In August 1990, Iraq invaded Kuwait, and the United Nations imposed sanctions against Iraq. Security Council resolution 661 of 1990 prohibited all nations from buying and selling Iraqi commodities, except for food and medicine. Security Council resolution 661 also prohibited all nations from exporting weapons or military equipment to Iraq and established a sanctions committee to monitor compliance and progress in implementing the sanctions. The members of the sanctions committee were members of the Security Council. Subsequent Security Council resolutions specifically prohibited nations from exporting to Iraq items that could be used to build chemical, biological, or nuclear weapons. In 1991, the Security Council offered to let Iraq sell oil under a U.N. program to meet its peoples’ basic needs. The Iraqi government rejected the offer, and over the next 5 years, the United Nations reported food shortages and a general deterioration in social services. In December 1996, the United Nations and Iraq agreed on the Oil for Food program, which permitted Iraq to sell up to $1 billion worth of oil every 90 days to pay for food, medicine, and humanitarian goods. Subsequent U.N. resolutions increased the amount of oil that could be sold and expanded the humanitarian goods that could be imported. In 1999, the Security Council removed all restrictions on the amount of oil Iraq could sell to purchase civilian goods. The United Nations and the Security Council monitored and screened contracts that the Iraqi government signed with commodity suppliers and oil purchasers, and Iraq’s oil revenue was placed in a U.N.-controlled escrow account. In May 2003, U.N. resolution 1483 requested the U.N. Secretary General to transfer the Oil for Food program to the CPA by November 2003. (Appendix I contains a detailed chronology of Oil for Food program and sanctions events.) 
The United Nations allocated 59 percent of the oil revenue for the 15 central and southern governorates, which were controlled by the central government; 13 percent for the 3 northern Kurdish governorates; 25 percent for a war reparations fund for victims of the Iraq invasion of Kuwait in 1990; and 3 percent for U.N. administrative costs, including the costs of weapons inspectors. From 1997 to 2003, the Oil for Food program was responsible for more than $67 billion of Iraq's oil revenue. With a large portion of this revenue, the United Nations provided food, medicine, and services to 24 million people and helped the Iraqi government supply goods to 24 economic sectors. Despite concerns that sanctions may have worsened the humanitarian situation, the Oil for Food program appears to have helped the Iraqi people. According to the United Nations, the average daily food intake increased from around 1,275 calories per person per day in 1996 to about 2,229 calories at the end of 2001. Malnutrition rates for children under 5 fell by more than half. In February 2002, the United Nations reported that the Oil for Food program had considerable success in several sectors such as agriculture, food, health, and nutrition by arresting the decline in living conditions and improving the nutritional status of the average Iraqi citizen. Since 1997, Iraq has imported almost 2.7 million metric tons of wheat annually. During the 1980s, Australia was Iraq’s primary wheat supplier with 38 percent of the market, and the United States was the second major supplier at 29 percent. By 1989, Iraq was the twelfth largest market for U.S. agricultural exports, including rice. Since 1997, however, Australia has dominated Iraq’s Oil for Food wheat trade with a 73 percent market share, and Vietnam has become a major supplier of rice to Iraq. The U.S. market share for wheat dropped to 6 percent during that time. U.S. wheat exports during the sanctions occurred only in 1997 and 1998. 
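The allocation formula above can be checked with back-of-the-envelope arithmetic against the roughly $67 billion the program handled. The figures produced below are approximations for illustration only, not audited program totals.

```python
# Applies the U.N. allocation percentages reported above to the roughly
# $67 billion in oil revenue the program handled from 1997 to 2003.
# Results are rough approximations, not audited program totals.
TOTAL_REVENUE_BILLIONS = 67.0

SHARES = {
    "central/southern governorates": 0.59,
    "northern Kurdish governorates": 0.13,
    "Kuwait war reparations fund": 0.25,
    "U.N. administrative costs": 0.03,
}

def allocate(total, shares):
    """Split a total among named shares; the shares must sum to 1.0."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    return {name: round(total * pct, 2) for name, pct in shares.items()}

amounts = allocate(TOTAL_REVENUE_BILLIONS, SHARES)
print(amounts)  # central/southern governorates comes to about $39.5 billion
```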
We estimate that, from 1997 through 2002, the former Iraqi regime acquired $10.1 billion in illegal revenues—$5.7 billion through oil smuggled out of Iraq and $4.4 billion through surcharges against oil sales and illicit commissions from commodity suppliers. This estimate is higher than the $6.6 billion in illegal revenues we reported in May 2002. We updated our estimate to include (1) oil revenue and contract amounts for 2002, (2) updated letters of credit from prior years, and (3) newer estimates of illicit commissions from commodity suppliers. Appendix II describes our methodology for determining illegal revenues gained by the former Iraqi regime. Oil was smuggled out through several routes, according to U.S. government officials and oil industry experts. Oil entered Syria by pipeline, crossed the borders of Jordan and Turkey by truck, and was smuggled through the Persian Gulf by ship. Jordan maintained trade protocols with Iraq that allowed it to purchase heavily discounted oil in exchange for up to $300 million in Jordanian goods. Syria received up to 200,000 barrels of Iraqi oil a day in violation of the sanctions. Oil smuggling also occurred through Turkey and Iran. In addition to revenues from oil smuggling, the Iraqi government levied surcharges against oil purchasers and commissions against commodity suppliers participating in the Oil for Food program. According to some Security Council members, the surcharge was up to 50 cents per barrel of oil and the commission was 5 to 15 percent of the commodity contract. In our 2002 report, we estimated that the Iraqi regime received a 5-percent illicit commission on commodity contracts. However, a September 2003 Department of Defense review found that at least 48 percent of 759 Oil for Food contracts that it reviewed were potentially overpriced by an average of 21 percent. 
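The scale of these illicit revenue streams follows from the per-unit figures cited above (a surcharge of up to 50 cents per barrel of oil; commissions of 5 to 15 percent of commodity contract value). The barrel count and contract value in the sketch below are hypothetical inputs chosen only to show the mechanics, not actual program data.

```python
# Rough sketch of how surcharge and commission revenue scale with the
# per-unit figures cited above. The inputs below are hypothetical.

SURCHARGE_PER_BARREL = 0.50          # dollars, per Security Council members
COMMISSION_RATES = (0.05, 0.15)      # 5 to 15 percent of contract value

def surcharge_revenue(barrels):
    """Surcharge collected on a given volume of oil sales."""
    return barrels * SURCHARGE_PER_BARREL

def commission_range(contract_value):
    """Low and high commission on a single commodity contract."""
    low, high = COMMISSION_RATES
    return (contract_value * low, contract_value * high)

# Hypothetical: 100 million barrels sold and a $10 million commodity contract.
print(surcharge_revenue(100_000_000))   # $50 million in surcharges
print(commission_range(10_000_000))     # roughly $0.5 million to $1.5 million
```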
Food commodity contracts were the most consistently overpriced, with potential overpricing identified in 87 percent of the contracts by an average of 22 percent. The review also found that the use of middlemen companies potentially increased contract prices by 20 percent or more. Defense officials found 5 contracts that included “after- sales service charges” of between 10 and 20 percent. In addition, interviews by U.S. investigators with high-ranking Iraqi regime officials, including the former oil and finance ministers, confirmed that the former regime received a 10-percent commission from commodity suppliers. According to the former oil minister, the regime instituted a fixed 10-percent commission in early 2001 to address a prior “compliance” problem with junior officials. These junior officials had been reporting lower commissions than what they had negotiated with suppliers and pocketing the difference. Both OIP, as an office within the U.N. Secretariat, and the Security Council’s sanctions committee were responsible for overseeing the Oil for Food Program. However, the Iraqi government negotiated contracts directly with purchasers of Iraqi oil and suppliers of commodities. While OIP was to examine each contract for price and value, it is unclear how it performed this function. The sanctions committee was responsible for monitoring oil smuggling, screening contracts for items that could have military uses, and approving oil and commodity contracts. The sanctions committee responded to illegal surcharges on oil purchases, but it is unclear what actions it took to respond to commissions on commodity contracts. 
Ongoing investigations of the Oil for Food program may wish to consider further examining how the structure of the program enabled the Iraqi government to obtain illegal revenues, the role of member states in monitoring and enforcing the sanctions, actions taken to reduce oil smuggling, and the responsibilities and procedures for assessing price reasonableness in commodity contracts. U.N. Security Council resolutions and procedures recognized the sovereignty of Iraq and gave the Iraqi government authority to negotiate contracts and decide on contractors. Security Council resolution 986 of 1995 authorized states to import petroleum products from Iraq, subject to the Iraqi government’s endorsement of transactions. Resolution 986 also stated that each export of goods would be at the request of the government of Iraq. Security Council procedures for implementing resolution 986 further stated that the Iraqi government or the United Nations Inter-Agency Humanitarian Program would contract directly with suppliers and conclude the appropriate contractual arrangements. Iraqi control over contract negotiations was an important factor in allowing Iraq to levy illegal surcharges and commissions. When the United Nations first proposed the Oil for Food program in 1991, it recognized this vulnerability. At that time, the Secretary General proposed that the United Nations, an independent agent, or the government of Iraq be given the responsibility to negotiate contracts with oil purchasers and commodity suppliers. The Secretary General concluded that it would be highly unusual or impractical for the United Nations or an independent agent to trade Iraq’s oil or purchase commodities. He recommended that Iraq negotiate the contracts and select the contractors. However, he stated that the United Nations and Security Council would have to ensure that Iraq’s contracting did not circumvent the sanctions and was not fraudulent. The Security Council further proposed that U.N. 
agents review contracts and compliance at Iraq’s oil ministry, but Iraq refused these conditions. OIP administered the Oil for Food program from December 1996 to November 2003. Under Security Council resolution 986 of 1995 and a memorandum of understanding between the United Nations and the Iraqi government, OIP was responsible for monitoring the sale of Iraq’s oil, monitoring Iraq’s purchase of commodities and the delivery of goods, and accounting for the program’s finances. The United Nations received 3 percent of Iraq’s oil export proceeds for its administrative and operational costs, which included the cost of U.N. weapons inspections. The sanctions committee’s procedures for implementing resolution 986 stated that independent U.N. inspection agents were responsible for monitoring the quality and quantity of the oil shipped. The agents were authorized to stop shipments if they found irregularities. OIP hired a private firm to monitor Iraqi oil sales at exit points. However, the monitoring measures contained weaknesses. According to U.N. reports and a statement from the monitoring firm, the major offshore terminal at Mina al-Basra did not have a meter to measure the oil pumped nor could onshore storage capacity be measured. Therefore, the U.N. monitors could not confirm the volume of oil loaded onto vessels. Also, in 2001, the oil tanker Essex took a large quantity of unauthorized oil from the platform when the monitors were off duty. In December 2001, the Security Council required OIP to improve the monitoring at the offshore terminal. As part of its strategy to repair Iraq’s oil infrastructure, the CPA plans to install reliable metering at Mina al-Basra and other terminals, but no contracts have been let. OIP also was responsible for monitoring Iraq’s purchase of commodities and the delivery of goods. Security Council resolution 986, paragraph 8a(ii) required Iraq to submit a plan, approved by the Secretary General, to ensure equitable distribution of Iraq’s commodity purchases. 
The initial distribution plans focused on food and medicines while subsequent plans were expansive and covered 24 economic sectors, including electricity, oil, and telecommunications. The sanctions committee’s procedures for implementing Security Council resolution 986 stated that experts in the Secretariat were to examine each proposed Iraqi commodity contract, in particular the details of price and value, and to determine whether the contract items were on the distribution plan. OIP officials told the Defense Contract Audit Agency they performed very limited, if any, pricing review. They stated that no U.N. resolution tasked them with assessing the price reasonableness of the contracts and no contracts were rejected solely on the basis of price. However, OIP officials stated that, in a number of instances, they reported to the sanctions committee that commodity prices appeared high, but the committee did not cite pricing as a reason to place holds on the contracts. For example, in October 2001, OIP experts reported to the sanctions committee that the prices in a proposed contract between Iraq and the Al- Wasel and Babel Trading Company appeared high. However, the sanctions committee reviewed the data and approved the contract. Subsequently, the Treasury Department identified this company as a front company for the former regime in April 2004. The United Nations also required all countries to freeze the assets of this company and transfer them to the Development Fund for Iraq in accordance with Security Council resolution 1483. The sanctions committee’s procedures for implementing resolution 986 stated that independent inspection agents will confirm the arrival of supplies in Iraq. OIP deployed about 78 U.N. contract monitors to verify shipments and authenticate the supplies for payment. OIP employees were able to visually inspect 7 to 10 percent of the approved deliveries. 
Security Council resolution 986 also requested the Secretary General to establish an escrow account for the Oil for Food Program and to appoint independent and certified public accountants to audit the account. The Secretary General established an escrow account at BNP Paribas for the deposit of Iraqi oil revenues and the issue of letters of credit to suppliers with approved contracts. The U.N. Board of Audit, a body of external public auditors, audited the account. The external audits focused on management issues related to the Oil for Food program and the financial condition of the Iraq account. U.N. auditors generally concluded that the Iraq account was fairly presented in accordance with U.N. financial standards. The reports stated that OIP was generally responsive to external audit recommendations. The external audits determined that oil prices were mostly in accordance with the fair market value of oil products to be shipped and checked to confirm that pricing was properly and consistently applied. They also determined that humanitarian and essential services supplies procured with oil funds generally met contract terms with some exceptions. U.N. external audit reports contained no findings of fraud during the program. The U.N. Office of Internal Oversight Services (OIOS) conducted internal audits of the Oil for Food program and reported the results to OIP’s executive director. OIOS officials stated that they have completed 55 audits and have 4 ongoing audits of the Oil for Food program. Overall, OIOS reported that OIP had made satisfactory progress in implementing most of its recommendations. We did not have access to individual OIOS audit reports except for an April 2003 report made publicly available in May 2004 that assessed the activities of the company contracted by the United Nations to authenticate goods coming into Iraq. 
It found that the contractor did not perform all required duties and did not adequately monitor goods coming into the northern areas of Iraq. We also reviewed 7 brief summaries of OIOS reports covering the Oil for Food program from July 1, 1996, through June 30, 2003. These summaries identified a variety of operational concerns involving procurement, inflated pricing and inventory controls, coordination, monitoring, and oversight. In one case, OIOS cited purchase prices for winter items for displaced persons in northern Iraq that were on average 61 percent higher than local vendor quotes obtained by OIOS. In another case, an OIOS review found that there was only limited coordination of program planning and insufficient review and independent assessment of project implementation activities. The sanctions committee was responsible for three key elements of the Oil for Food Program: (1) monitoring implementation of the sanctions, (2) screening contracts to prevent the purchase of items that could have military uses, and (3) approving Iraq’s oil and commodity contracts. U.N. Security Council resolution 661 of 1990 directed all states to prevent Iraq from exporting products, including petroleum, into their territories. Paragraph 6 of resolution 661 established a sanctions committee to report to the Security Council on states’ compliance with the sanctions and to recommend actions regarding effective implementation. As early as June 1996, the Maritime Interception Force, a naval force of coalition partners including the United States and Great Britain, informed the sanctions committee that oil was being smuggled out of Iraq through Iranian territorial waters. In December 1996, Iran acknowledged the smuggling and reported that it had taken action. In October 1997, the sanctions committee was again informed about smuggling through Iranian waters. According to multiple sources, oil smuggling also occurred through Jordan, Turkey, Syria, and the Gulf. 
Smuggling was a major source of illicit revenue for the former Iraqi regime through 2002. A primary function of the members of the sanctions committee was to review and approve contracts for items that could be used for military purposes. The United States conducted the most thorough review; about 60 U.S. government technical experts assessed each item in a contract to determine its potential military application. According to U.N. Secretariat data in 2002, the United States was responsible for about 90 percent of the holds placed on goods to be exported to Iraq. As of April 2002, about $5.1 billion worth of goods were being held for shipment to Iraq. According to OIP, no contracts were held solely on the basis of price. Under Security Council resolution 986 of 1995 and its implementing procedures, the sanctions committee was responsible for approving Iraq’s oil contracts, particularly to ensure that the contract price was fair, and for approving Iraq’s commodity contracts. The U.N.’s oil overseers reported in November 2000 that the oil prices proposed by Iraq appeared low and did not reflect the fair market value. According to a senior OIP official, the independent oil overseers also reported in December 2000 that purchasers of Iraqi oil had been asked to pay surcharges. In March 2001, the United States informed the sanctions committee about allegations that Iraqi government officials were receiving illegal surcharges on oil contracts and illicit commissions on commodity contracts. The sanctions committee attempted to address these allegations by implementing retroactive pricing for oil contracts in 2001. It is unclear what actions the sanctions committee took to respond to illicit commissions on commodity contracts. Due to increasing concern about the humanitarian situation in Iraq and pressure to expedite the review process, the Security Council passed resolution 1284 in December 1999 to direct the sanctions committee to accelerate the review process. 
Under fast-track procedures, the sanctions committee allowed OIP to approve contracts for food, medical supplies, and agricultural equipment (beginning in March 2000), water treatment and sanitation (August 2000), housing (February 2001), and electricity supplies (May 2001). Several investigations into the Oil for Food program are planned or under way. A U.N. inquiry officially began on April 21, 2004, with a Security Council resolution supporting the inquiry and the appointment of three high-level officials to oversee the investigation. This investigation will examine allegations of corruption and misconduct within the United Nations Oil for Food program and its overall management of the humanitarian program. In addition, Iraq’s Board of Supreme Audit contracted with the accounting firm Ernst and Young to conduct an investigation of the program. Several U.S. congressional committees have also begun inquiries into U.N. management of the Oil for Food program and U.S. oversight through its role on the sanctions committee. These investigations of the Oil for Food program provide an opportunity to better quantify the extent of corruption, determine the adequacy of internal controls, and identify ways to improve future humanitarian assistance programs conducted within an economic sanctions framework. Based on our work, we have identified several questions that should be addressed: How did the size and structure of the Oil for Food program enable the Iraqi government to obtain illegal revenues through illicit surcharges and commissions? What was the role of U.N. member states in monitoring and enforcing the sanctions? What were the criteria used to certify national purchasers of oil and suppliers of commodities? What actions, if any, were taken to reduce the smuggling of Iraqi oil? What precluded the sanctions committee from taking action? 
Who assessed the reasonableness of prices for commodity contracts negotiated between the Iraqi government and suppliers and what actions were taken? How were prices for commodities assessed for reasonableness under fast-track procedures? Much of the information on surcharges on oil sales and illicit commissions on commodity contracts is with the Iraqi ministries in Baghdad and national purchasers and suppliers. We did not have access to this data to verify the various allegations of corruption associated with these transactions. Subsequent investigations of the Oil for Food program should include a statistical sampling of these transactions to more accurately document the extent of corruption and the identities of companies and countries that engaged in illicit transactions. This information would provide a basis for restoring those assets to the Iraqi government. Subsequent evaluations and audits should also consider an analysis of the lessons learned from the Oil for Food program and how future humanitarian programs of this nature should be structured to ensure that funds are spent on intended beneficiaries and projects. For example, analysts may wish to review the codes of conduct developed for the CPA’s Oil for Food coordination center and suppliers. In addition, U.N. specialized agencies implemented the program in the northern governorates while the program in central and southern Iraq was run by the central government in Baghdad. A comparison of these two approaches could provide insight on the extent to which the operations were transparent and the program delivered goods and services to the Iraqi people. Evolving policy and implementation decisions on the food distribution system and the worsening security situation have affected the movement of food commodities within Iraq. As a result, warehouse stocks are low, and Iraq has less than a month’s supply of several food items, including staple grains, and no buffer stock. 
The food distribution system created a dependency on food subsidies that disrupted private food markets. The government will have to decide whether to continue, reform, or eliminate the current system. In addition, inadequate oversight and corruption in the Oil for Food program raise concerns about the Iraqi government's ability to manage the food distribution system and absorb donor reconstruction funds under existing structures. The CPA has taken steps, such as appointing inspectors general, to strengthen accountability measures in Iraq's ministries. The CPA's failed plans to privatize the food ration system and delayed negotiations with WFP on food procurement and distribution resulted in diminished stocks of food commodities and localized shortages in early 2004. Based on plans submitted to the CPA in summer 2003, which asserted that the system was expensive and depressed the agricultural sector, the CPA administrator discussed eliminating Iraq's food distribution system and providing recipients with cash payments instead. As a result, the Ministry of Trade began drawing down existing inventories of food. In December 2003, as the security environment worsened, the administrator decided not to reform the ration system. In January 2004, the CPA negotiated a memorandum of understanding (MOU) with WFP and the Ministry of Trade that committed WFP to procuring a 3-month buffer food stock by March 31, 2004, and to delivering food to hub warehouses inside Iraq through June 2004. The MOU was delayed by disagreements about emergency food procurement, contract terms, and the terms of WFP's involvement. No additional food was procured during the negotiations, and food stocks diminished and localized shortages occurred in early 2004. WFP completed its buffer stock procurement by March 31, 2004. The Ministry of Trade assumed responsibility for food procurement on April 1, 2004, and will implement the distribution system after June 30, 2004. A U.S. 
official stated in early March 2004 that coordination between WFP and the Ministry of Trade had been deteriorating. The Ministry had not provided WFP with complete and timely information on monthly food allocation plans, weekly stock reports, or information on cargo arrivals, as the MOU required. WFP staff reported that the Ministry’s data were subject to sudden, large, and unexplained stock adjustments, thereby making it difficult to plan deliveries. A State Department official noted in April 2004 that coordination between WFP and the Ministry was improving. However, according to early June 2004 discussions with other U.S. officials, these coordination problems are continuing. The security environment in Iraq has affected the movement of Oil for Food goods since the fall of 2003. A September 2003 U.N. report found that the evacuation of U.N. personnel from Baghdad, following the bombing of the U.N. office in August 2003, affected the timetable and procedures for the transfer of the Oil for Food program to the CPA and contributed to delays in prioritizing and renegotiating contracts. The August bombing of the U.N. office also resulted in the temporary suspension of the border inspection process and shipments of humanitarian supplies and equipment. A March 2004 CPA report noted that stability of the food supply would be affected if security conditions worsened. According to an Oil for Food coordination center official, the worsening security situation during April 2004 affected food supplies. As of early June, major food transport corridors from Jordan and the port of Umm Qasr are restricted due to security concerns, and border crossings from Jordan, Syria, and Turkey are congested. Also, fewer drivers are willing to work in this environment, thereby reducing the movement of food from the borders and ports to the food warehouses. 
This situation is exacerbated by congestion at the major port of Umm Qasr, which is operating at 50 percent of its capacity due to inadequate fuel and power supply, off-loading delays, dredging activity, inadequate storage capacity, and security concerns. Initial planning and management problems, combined with security and port congestion issues limiting the movement of food, have resulted in the drawing down of warehouse food stocks. The food supply situation was described as tenuous by several U.S. and WFP sources in early June. At that time, Iraq had less than a 1-month food supply for several items in the food basket, including grains. About 360,000 metric tons of the 1.6 million metric tons procured for the buffer stock had arrived as of June 10, but the full amount will not be delivered until September, according to a WFP official. Moreover, these commodities are not being reserved as a buffer stock, but are immediately used as operating stocks. U.S. officials are concerned that, as the Iraqi government assumes full responsibility for food distribution on July 1, 2004, it will find it difficult to manage the food distribution system given low food supplies. According to U.S. and WFP officials, the Ministry of Trade implemented the food distribution system during the Oil for Food program under more favorable conditions. For example, the Ministry was able to maintain at least a 6- month food inventory and operate in a more secure environment. The Oil for Food program facilitated the operation of the Public Distribution System run by Iraq’s Ministry of Trade. Under this system, each Iraqi is eligible to receive a monthly “food basket” that normally consists of a dozen items. After the CPA transfers responsibility for the food distribution system to the Iraqi provisional government in July 2004, the government will have to decide whether to continue, reform, or eliminate the current system. 
Documents from the Ministries of Finance and Planning indicate that the annual cost of maintaining the system is as high as $5 billion, or about 25 percent of total government expenditures. In 2005 and 2006, expenditures for food will be almost as much as all expenditures for capital projects. According to a September 2003 joint U.N. and World Bank needs assessment of Iraq, the food subsidy, given out as a monthly ration to the entire population, staved off mass starvation during the time of the sanctions, but disrupted the market for food grains produced locally. The agricultural sector had little incentive to produce crops in the absence of a promising market. However, the Iraqi government may find it politically difficult to scale back the food distribution system with an estimated 60 percent of the population relying on monthly rations as their primary source of nutrition. WFP is completing a vulnerability assessment that Iraq could use to make future decisions on food security programs and better target food items to those most in need. WFP’s preliminary assessment results found that 10 percent of the population was extremely poor and needed food aid in addition to the Public Distribution System. WFP is also developing an emergency operation plan to meet the needs of vulnerable populations. In addition, in April 2004, a USAID contractor submitted a strategy for a short-term plan to stabilize the agricultural sector by providing agricultural supplies, re-establishing domestic wheat markets, rehabilitating irrigation systems, and rehabilitating Ministry of Agriculture facilities. The strategy also includes a medium-term plan to create appropriate agricultural policies, provide capacity building for market-led agriculture, and strengthen the agricultural sector through national programs. 
In the absence of significant reforms, the history of inadequate oversight and corruption in the Oil for Food program raises questions about the Iraqi government’s ability to manage the import and distribution of food commodities and the billions in international assistance expected to flow into the country. The CPA and Iraqi ministries must address corruption to help ensure that the food distribution system is managed with transparent and accountable controls. Building these internal control and accountability measures into the operations of Iraqi ministries will also help safeguard the $18.4 billion in fiscal year 2004 U.S. reconstruction funds and $13.8 billion pledged by other countries. To address these concerns and oversee government operations, the CPA administrator appointed inspectors general for Iraq’s 26 national ministries. At the same time, the CPA announced the establishment of two independent agencies to work with the inspectors general—the Commission on Public Integrity and a Board of Supreme Audit. Finally, the United States will spend about $1.63 billion on governance-related activities in Iraq, which will include building an effective financial management system in Iraq’s ministries. The CPA’s coordination center continues to provide on-the-job training for ministry staff who will assume responsibility for food contracts after July 2004. Coalition personnel have provided Iraqi staff with guidance on working with suppliers in a fair and open manner and determining when changes to letters of credit are appropriate. In addition, according to center staff, coalition and Iraqi staff signed a code of conduct, which outlined proper job behavior. Among other provisions, the code of conduct prohibited kickbacks and secret commissions from suppliers. The center also developed a code of conduct for suppliers. In addition, the center has begun implementing the steps needed for the transition of full authority to the Iraqi ministries. 
These steps include transferring contract-related documents, contacting suppliers, and providing authority to amend contracts. In addition, the January 2004 MOU commits WFP to training ministry staff in procurement and transport functions through June 30, 2004. Ten ministry staff are being trained at WFP headquarters in Rome, Italy. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I will be happy to answer any questions you may have. For questions regarding this testimony, please call Joseph Christoff at (202) 512-8979. Other key contributors to this statement were Pamela Briggs, Mark Connelly, Lynn Cothern, Zina Merritt, Tetsuo Miyabara, Valerie Nowak, Stephanie Robinson, Jonathan Rose, Richard Seldin, Audrey Solis, Roger Stoltz, and Phillip Thomas. We used the following methodology to estimate the former Iraqi regime's illicit revenues from oil smuggling, surcharges on oil, and commissions from commodity contracts from 1997 through 2002: To estimate the amount of oil the Iraqi regime smuggled, we used Energy Information Administration (EIA) estimates of Iraqi oil production and subtracted oil sold under the Oil for Food program and domestic consumption. The remaining oil was smuggled through Turkey, the Persian Gulf, Jordan, and Syria (oil smuggling to Syria began in late 2000). We estimated the amount of oil to each destination based on information from and discussions with officials of EIA, Cambridge Energy Research Associates, the Middle East Economic Survey, and the private consulting firm Petroleum Finance. We used the price of oil sold to estimate the proceeds from smuggled oil. We discounted the price by 9 percent for the difference in quality. We discounted this price by 67 percent for smuggling to Jordan and by 33 percent for smuggling through Turkey, the Persian Gulf, and Syria. According to oil industry experts, this is representative of the prices paid for smuggled oil. 
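The smuggling-revenue arithmetic described above can be sketched in a few lines. Every input figure below is a hypothetical placeholder (the actual production, consumption, sales, and route-share numbers came from EIA and industry sources); only the structure of the calculation follows the methodology: subtract program sales and domestic consumption from production, apportion the remainder across smuggling routes, and apply the 9 percent quality discount followed by the route-specific price discount.

```python
# Sketch of the smuggling-revenue estimate. All input values are
# hypothetical placeholders; only the calculation structure follows
# the methodology described in the text.

# Hypothetical annual volumes, in millions of barrels.
production = 900.0    # EIA-style estimate of total Iraqi production
off_sales = 750.0     # oil sold under the Oil for Food program
domestic_use = 60.0   # domestic consumption

smuggled = production - off_sales - domestic_use  # remainder was smuggled

# Hypothetical shares of smuggled volume by route (shares sum to 1.0).
route_shares = {"Jordan": 0.35, "Turkey": 0.30, "Persian Gulf": 0.20, "Syria": 0.15}

# Route-specific discounts off the quality-adjusted program price:
# 67 percent for Jordan, 33 percent elsewhere.
route_discounts = {"Jordan": 0.67, "Turkey": 0.33, "Persian Gulf": 0.33, "Syria": 0.33}

program_price = 20.0                           # $/barrel, hypothetical
quality_adjusted = program_price * (1 - 0.09)  # 9% discount for quality difference

revenue = sum(
    smuggled * share * quality_adjusted * (1 - route_discounts[route])
    for route, share in route_shares.items()
)  # estimated proceeds, in $ millions

print(f"Smuggled volume: {smuggled:.0f} million barrels")
print(f"Estimated smuggling proceeds: ${revenue:,.1f} million")
```

Because the route shares and discounts are judgment calls, small changes to them move the result noticeably, which is one reason the estimate is not a precise accounting number.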
To estimate the amount Iraq earned from surcharges on oil, we multiplied the barrels of oil sold under the Oil for Food program from 1997 through 2002 by 25 cents per barrel. According to Security Council members, the surcharge varied, but Iraq tried to get as much as 50 cents per barrel. Industry experts also stated the surcharge varied. To estimate the commission from commodities, we multiplied Iraq’s letters of credit for commodity purchases by 5 percent for 1997 through 1998 and 10 percent for 1999 through 2002. According to Security Council members, the commission varied from 5 percent to 10 percent. This percentage was also confirmed in interviews conducted by U.S. officials with former Iraqi regime ministers of oil, finance, and trade and with Saddam Hussein’s presidential advisors. GAO did not obtain source documents and records from the former regime about its smuggling, surcharges, and commissions. Our estimate of illicit revenues is therefore not a precise accounting number. Areas of uncertainty in our estimate include: GAO’s estimate of the revenue from smuggled oil is less than the estimates of U.S. intelligence agencies. We used estimates of Iraqi oil production and domestic consumption for our calculations. U.S. intelligence agencies used other methods to estimate smuggling. GAO’s estimate of revenue from oil surcharges is based on a surcharge of 25 cents per barrel from 1997 through 2002. However, the average surcharge could be lower. U.N. Security Council members and oil industry sources do not know when the surcharge began or ended or the precise amount of the surcharge. One oil industry expert stated that the surcharge was imposed at the beginning of the program but that the amount varied. Security Council members and the U.S. Treasury Department reported that surcharges ranged from 10 cents to 50 cents per barrel. 
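As a companion sketch, the surcharge and commission arithmetic can be expressed the same way. The annual barrel counts and letter-of-credit values below are hypothetical placeholders; the rates are the ones described above (25 cents per barrel, and 5 percent of letters of credit for 1997 through 1998 versus 10 percent for 1999 through 2002).

```python
# Sketch of the surcharge and commission estimates. Annual volumes and
# letter-of-credit values are hypothetical; the rates follow the text.

SURCHARGE_PER_BARREL = 0.25  # dollars; reported range was 10 to 50 cents

# Hypothetical barrels of oil sold under the program, in millions per year.
barrels_sold = {1997: 400, 1998: 600, 1999: 700, 2000: 800, 2001: 750, 2002: 650}
surcharge_revenue = sum(barrels_sold.values()) * SURCHARGE_PER_BARREL  # $ millions

# Hypothetical letters of credit for commodity purchases, in $ millions.
letters_of_credit = {1997: 1500, 1998: 2500, 1999: 4000,
                     2000: 6000, 2001: 7000, 2002: 7500}

def commission_rate(year: int) -> float:
    """5 percent for 1997-1998, 10 percent for 1999-2002."""
    return 0.05 if year <= 1998 else 0.10

commission_revenue = sum(
    value * commission_rate(year) for year, value in letters_of_credit.items()
)  # $ millions

print(f"Estimated surcharges:  ${surcharge_revenue:,.0f} million")
print(f"Estimated commissions: ${commission_revenue:,.0f} million")
```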
As a test of reasonableness, GAO compared the price paid for oil under the Oil for Food program with a proxy oil price for the period 1997 through 2002. We found that for the entire period, the price of Iraqi oil was considerably below the proxy price. Oil purchasers would have had to pay below-market prices to have a margin to pay the surcharge. GAO's estimate of the commission on commodities could be understated. We calculated commissions based on the commodity contracts for the 15 governorates in central and southern Iraq (known as the "59-percent account" because these governorates received this percentage of Oil for Food revenues). We excluded contracts for the three northern governorates (known as the "13-percent account"). However, the former Iraqi regime negotiated the food and medical contracts for the northern governorates, and the Defense Contract Audit Agency found that some of these contracts were potentially overpriced. The Defense Contract Audit Agency also found extra fees of between 10 and 20 percent on some contracts.

The following chronology summarizes key events related to the sanctions against Iraq and the Oil for Food program:

- Iraqi forces invaded Kuwait.
- Resolution 660 condemned the invasion and demanded Iraq's immediate withdrawal from Kuwait.
- Imposed economic sanctions against the Republic of Iraq. The resolution called for member states to prevent all commodity imports from Iraq and exports to Iraq, with the exception of supplies intended strictly for medical purposes and, in humanitarian circumstances, foodstuffs.
- President Bush ordered the deployment of thousands of U.S. forces to Saudi Arabia.
- Public Law 101-513, § 586C, prohibited the import of products from Iraq into the United States and the export of U.S. products to Iraq.
- The Iraq War Powers Resolution authorized the president to use "all necessary means" to compel Iraq to withdraw military forces from Kuwait.
- Operation Desert Storm was launched, a coalition operation to force Iraq to withdraw from Kuwait.
- Iraq announced acceptance of all relevant U.N. Security Council resolutions.
- U.N. Security Council Resolution 687 (Cease-Fire Resolution) mandated that Iraq respect the sovereignty of Kuwait and declare and destroy all ballistic missiles with a range of more than 150 kilometers as well as all weapons of mass destruction and production facilities. The U.N. Special Commission (UNSCOM) was charged with monitoring Iraqi disarmament as mandated by U.N. resolutions and with assisting the International Atomic Energy Agency in nuclear monitoring efforts.
- Proposed the creation of an Oil for Food program and authorized an escrow account to be established by the Secretary General. Iraq rejected the terms of this resolution.
- Second attempt to create an Oil for Food program. Iraq rejected the terms of this resolution.
- Authorized transferring money produced by any Iraqi oil transaction on or after August 6, 1990, which had been deposited into the escrow account, to the states or accounts concerned as long as the oil exports took place or until sanctions were lifted.
- Allowed Iraq to sell $1 billion worth of oil every 90 days. Proceeds were to be used to procure foodstuffs, medicine, and material and supplies for essential civilian needs. Resolution 986 was supplemented by several U.N. resolutions over the next 7 years that extended the Oil for Food program for different periods of time and increased the amount of exported oil and imported humanitarian goods.
- Established the export and import monitoring system for Iraq.
- Signed a memorandum of understanding allowing Iraq's export of oil to pay for food, medicine, and essential civilian supplies.
- Based on information provided by the Multinational Interception Force (MIF), communicated concerns about alleged smuggling of Iraqi petroleum products through Iranian territorial waters in violation of resolution 661 to the Security Council sanctions committee.
- Committee members asked the United States for more factual information about the smuggling allegations, including the final destination and the nationality of the vessels involved.
- Provided a briefing on the Iraqi oil smuggling allegations to the sanctions committee.
- Acknowledged that some vessels carrying illegal goods and oil to and from Iraq had been using the Iranian flag and territorial waters without authorization and that Iranian authorities had confiscated forged documents and manifests. The representative agreed to provide the results of the investigations to the sanctions committee once they were available.
- Phase I of the Oil for Food program began.
- Extended the term of resolution 986 another 180 days (phase II). Authorized a special provision to allow Iraq to sell petroleum in a more favorable time frame.
- Brought the issue of Iraq's smuggling of petroleum products through Iranian territorial waters to the attention of the U.N. Security Council sanctions committee.
- The coordinator of the Multinational Interception Force (MIF) reported to the U.N. Security Council sanctions committee that since February 1997 there had been a dramatic increase in the number of ships smuggling petroleum from Iraq inside Iranian territorial waters.
- Extended the Oil for Food program another 180 days (phase III).
- Raised Iraq's oil export ceiling to about $5.3 billion per 6-month phase (phase IV). Permitted Iraq to export additional oil in the 90 days from March 5, 1998, to compensate for the delayed resumption of oil production and reduced oil prices. Authorized Iraq to buy $300 million worth of oil spare parts to reach the export ceiling of about $5.3 billion.
- Public Law 105-235, a joint resolution finding Iraq in unacceptable and material breach of its international obligations.
- Oct. 31, 1998, U.S. legislation (Iraq Liberation Act): Public Law 105-338, § 4, authorized the president to provide assistance to Iraqi democratic opposition organizations.
- Iraq announced it would terminate all forms of interaction with UNSCOM and halt all UNSCOM activity inside Iraq.
- Renewed the Oil for Food program for 6 months beyond November 26 at the higher levels established by resolution 1153. The resolution included additional oil spare parts (phase V).
- Following Iraq's recurrent blocking of U.N. weapons inspectors, President Clinton ordered 4 days of air strikes against military and security targets in Iraq that contributed to Iraq's ability to produce, store, and maintain weapons of mass destruction and potential delivery systems.
- President Clinton reported on the status of efforts to obtain Iraq's compliance with U.N. Security Council resolutions. He discussed the MIF report of oil smuggling out of Iraq and smuggling of other prohibited items into Iraq.
- Renewed the Oil for Food program another 6 months (phase VI). Permitted Iraq to export an additional $3.04 billion of oil to make up for revenue deficits in phases IV and V.
- Extended phase VI of the Oil for Food program for 2 weeks until December 4, 1999.
- Extended phase VI of the Oil for Food program for 1 week until December 11, 1999.
- Renewed the Oil for Food program another 6 months (phase VII).
- Abolished Iraq's export ceiling to purchase civilian goods. Eased restrictions on the flow of civilian goods to Iraq and streamlined the approval process for some oil industry spare parts. Also established the United Nations Monitoring, Verification and Inspection Commission (UNMOVIC).
- Increased the oil spare parts allocation from $300 million to $600 million under phases VI and VII.
- Renewed the Oil for Food program another 180 days until December 5, 2000 (phase VIII).
- Extended the Oil for Food program another 180 days (phase IX).
- Ambassador Cunningham acknowledged Iraq's illegal re-export of humanitarian supplies, oil smuggling, establishment of front companies, and payment of kickbacks to manipulate and gain from Oil for Food contracts. He also acknowledged that the United States had put holds on hundreds of Oil for Food contracts that posed dual-use concerns.
- Ambassador Cunningham addressed questions regarding allegations of surcharges on oil and smuggling. He acknowledged that oil industry representatives and other Security Council members had provided the United States anecdotal information about Iraqi surcharges on oil sales and that companies claimed they were asked to pay commissions on contracts.
- Extended the terms of resolution 1330 (phase IX) another 30 days.
- Renewed the Oil for Food program an additional 150 days until November 30, 2001 (phase X). The resolution stipulated that a new Goods Review List would be adopted and that relevant procedures would be subject to refinement.
- Renewed the Oil for Food program another 180 days (phase XI).
- UNMOVIC reviewed export contracts to ensure that they contained no items on a designated list of dual-use items known as the Goods Review List. The resolution also extended the program another 180 days (phase XII).
- The MIF reported that there had been a significant reduction in illegal oil exports from Iraq by sea over the past year but noted that oil smuggling was continuing.
- Extended phase XII of the Oil for Food program another 9 days.
- Renewed the Oil for Food program another 180 days until June 3, 2003 (phase XIII). Approved changes to the list of goods subject to review by the sanctions committee.
- The sanctions committee chairman reported on a number of alleged sanctions violations noted in letters from several countries and in the media from February to November 2002. The alleged incidents involved Syria, India, Liberia, Jordan, Belarus, Switzerland, Lebanon, Ukraine, and the United Arab Emirates.
- Operation Iraqi Freedom was launched, a coalition operation led by the United States that initiated hostilities in Iraq.
- Adjusted the Oil for Food program and gave the Secretary General authority for 45 days to facilitate the delivery and receipt of goods contracted by the Government of Iraq for the humanitarian needs of its people.
- Public Law 108-11, § 1503, authorized the President to suspend the application of any provision of the Iraq Sanctions Act of 1990.
- Extended the provisions of resolution 1472 until June 3, 2003.
- End of major combat operations and beginning of post-war rebuilding efforts.
- Lifted civilian sanctions on Iraq and provided for the end of the Oil for Food program within 6 months, transferring responsibility for the administration of any remaining program activities to the Coalition Provisional Authority (CPA).
- Transferred administration of the Oil for Food program to the CPA.
- Responded to allegations of fraud by U.N. officials who were involved in the administration of the Oil for Food program. Proposed that a special investigation be conducted by an independent panel.
- Supported the appointment of the independent high-level inquiry and called upon the CPA, Iraq, and member states to cooperate fully with the inquiry.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Oil for Food program was established by the United Nations and Iraq in 1996 to address concerns about the humanitarian situation after international sanctions were imposed in 1990. The program allowed the Iraqi government to use the proceeds of its oil sales to pay for food, medicine, and infrastructure maintenance. The program appears to have helped the Iraqi people. From 1996 through 2001, the average daily food intake increased from 1,300 to 2,300 calories. From 1997 through 2002, Iraq sold more than $67 billion of oil through the program and issued $38 billion in letters of credit to purchase commodities. However, over the years numerous allegations have surfaced concerning potential fraud and program mismanagement. GAO (1) reports on its estimates of the illegal revenue acquired by the former Iraqi regime in violation of U.N. sanctions; (2) provides observations on program administration; and (3) describes the current and future challenges in achieving food security. GAO estimates that from 1997 through 2002, the former Iraqi regime acquired $10.1 billion in illegal revenues, including $5.7 billion from oil smuggled out of Iraq and $4.4 billion through surcharges on oil sales and illicit commissions from suppliers exporting goods to Iraq through the Oil for Food program. This estimate includes oil revenue and contract amounts for 2002, updated letters of credit from prior years, and newer estimates of illicit commissions from commodity suppliers. The U.N. Office of the Iraq Program (OIP) and the Security Council's Iraq sanctions committee were both responsible for overseeing the Oil for Food program. However, the Security Council allowed the Iraqi government, as a sovereign entity, to negotiate contracts directly with purchasers of Iraqi oil and suppliers of commodities. This structure was an important factor in allowing Iraq to levy illegal surcharges and commissions. 
OIP was responsible for examining Iraqi contracts for price and value, but it is unclear how it performed this function. The sanctions committee was responsible for monitoring oil smuggling, screening contracts for items that could have military uses, and approving oil and commodity contracts. The sanctions committee took action to stop illegal oil surcharges, but it is unclear what actions it took on contract commissions. U.N. external audit reports contained no findings of program fraud. Summaries of internal audit reports pointed to some concerns regarding procurement, coordination, monitoring, and oversight and concluded that OIP had generally responded to audit recommendations. Ongoing investigations of the Oil for Food program may wish to further examine how the structure of the program enabled the Iraqi government to obtain illegal revenues, the role of member states in monitoring and enforcing the sanctions, actions taken to reduce oil smuggling, and the responsibilities and procedures for assessing price reasonableness in commodity contracts. Evolving policy and implementation decisions on the food distribution system and the worsening security situation have affected the movement of food commodities within Iraq. As a result, as of June 2004, food warehouse stocks are low and Iraq has less than a month's supply of essential food items, according to U.S. and World Food Program officials. In addition to these current food security challenges, the new government will have to balance the need to reform a costly food subsidy program with the need to maintain food stability and protect the poorest populations. Also, inadequate oversight and corruption in the Oil for Food program raise concerns about the Iraqi government's ability to manage the food distribution system and absorb $32 billion in expected donor funds for reconstruction. 
The coalition authority has taken steps, such as appointing inspectors general, to build internal controls and accountability measures in Iraq's ministries.
Under the Railroad Retirement Act of 1974, RRB makes independent determinations of railroad workers’ claimed T&P disability using the same general criteria that SSA uses to administer its Disability Insurance (DI) program—that is, the worker must have a medically determinable physical or mental impairment that (1) has lasted (or is expected to last) at least 1 year or is expected to result in death and (2) prevents them from engaging in substantial gainful activity, defined as work activity that involves significant physical or mental activities performed for pay or profit. Railroad workers determined to be eligible for benefits under the T&P program are not expected to be able to return to the workforce. The eligibility criteria for the T&P disability program differ from those of RRB’s occupational disability program. Workers determined to be eligible for benefits under the occupational disability program may be able to return to the workforce, but generally may not return to their original occupation. T&P disability benefits are payable to employees with at least 10 years (120 months) of creditable railroad service or to employees with 5 years (60 months) of creditable railroad service after 1995. SSA staff review about one-third of the cases that RRB has determined to be eligible for T&P benefits for which Social Security benefits may potentially be paid. In fiscal year 2012, RRB made 1,254 initial determinations under T&P standards. Of these initial determinations, 977 were approved for benefits. Claims representatives—staff located in RRB’s 53 field offices—assemble applications and collect individuals’ employment and medical information needed to support the claim. Once assembled, claims representatives send the files to RRB headquarters for processing and program eligibility determination.
Claims examiners—staff located in RRB headquarters—review the case file documentation and periodically order additional medical examinations to determine whether a railroad worker is eligible for T&P benefits. RRB uses the same definition of disability and evaluates T&P claims using the same criteria SSA uses for the DI program. For example, RRB determines whether the claimant’s impairment is medically disabling. If the claims examiner determines that a claimant has an impairment that meets or equals SSA’s Listing of Impairments (which describes medical conditions that SSA has determined are severe enough to keep the claimant from performing any type of work), the examiner will find that the claimant is disabled. If the claimant’s impairment is not found medically disabling, RRB then determines whether the claimant is able to do his or her past work, or potentially any other work. Together, RRB and SSA coordinate the financing of T&P disability benefits, which totaled almost $276 million in fiscal year 2012. Doing so involves computing the amount of Social Security payroll taxes that would have been collected by certain Social Security Trust Funds if railroad employment had been covered directly by Social Security, as well as the amount of additional benefits which Social Security would have paid to railroad retirement beneficiaries during the same fiscal year. When benefits exceed payroll taxes, the difference, including interest and administrative expenses, is transferred from the Social Security Trust Funds to the RRB’s Social Security Equivalent Benefit Account. If taxes exceed benefit reimbursements, a transfer is made in favor of the Social Security Trust Funds. However, since 1959, such transfers have favored RRB. In fiscal year 2012, Social Security trust funds financed about 79 percent of total T&P disability benefits (see fig. 1).
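The financial interchange just described amounts to a simple netting rule: compare the benefits SSA would have paid against the payroll taxes it would have collected, and transfer the difference. A minimal sketch, with hypothetical dollar figures rather than actual interchange amounts:

```python
def financial_interchange(benefits_paid, payroll_taxes, interest=0.0, admin_expenses=0.0):
    """Direction and size of the annual transfer between the Social Security
    Trust Funds and RRB's Social Security Equivalent Benefit Account.
    benefits_paid: what SSA would have paid railroad beneficiaries;
    payroll_taxes: what SSA would have collected had railroad work been
    covered directly by Social Security."""
    net = benefits_paid - payroll_taxes
    if net > 0:
        # Benefits exceed taxes: the Trust Funds transfer the difference,
        # plus interest and administrative expenses, to RRB.
        return ("to_RRB", net + interest + admin_expenses)
    # Taxes exceed benefit reimbursements: the transfer favors the Trust Funds.
    return ("to_SSA", -net)

# Hypothetical figures in millions of dollars, for illustration only.
direction, amount = financial_interchange(
    benefits_paid=276.0, payroll_taxes=58.0, interest=3.0, admin_expenses=1.0)
```

As the text notes, since 1959 the netting has always come out in RRB's favor, so in practice the first branch applies.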
Once the T&P disability benefit has been awarded, RRB uses continuing disability reviews (CDRs) to determine whether beneficiaries remain eligible for benefits. These reviews can include a determination of whether an individual’s medical condition has improved to the point where he or she is no longer considered disabled or is capable of performing work (whether or not that work involves railroad employment), or whether the individual continues to earn income below allowable program limits. In fiscal year 2012, RRB completed 1,212 CDR activities. The T&P disability program is linked to RRB’s occupational disability program, in that claimants use the same application and medical evidence to apply for benefits under both programs. Figure 2 shows RRB’s T&P disability claims determination process, including how this process relates to the occupational disability claims process. In response to the LIRR fraud incident, RRB implemented a five-point plan to increase its oversight of LIRR employees who file for occupational disability benefits or who are currently receiving occupational disability benefits. Under the five-point plan, RRB:
1. orders its own medical exams for all LIRR claimants to supplement medical evidence provided,
2. conducts continuing disability reviews for all LIRR occupational disability annuitants age 54.5 and younger (as of October 21, 2008),
3. exercises greater oversight of its Westbury field office in Long Island through biweekly phone calls and quarterly visits,
4. collects and plans to analyze data for LIRR claims to detect any unusual patterns, such as impairments and treating physicians that appear more frequently, and
5. collects data on the extent to which LIRR management employees are applying for benefits under the program.
In March 2013, RRB reported that a review of 519 occupational disability applications for LIRR employees being handled under the plan resulted in 496 of the applications being granted—an approval rate of about 96 percent. According to the RRB OIG, the approval rate of LIRR occupational disability applicants remains essentially unchanged from when the LIRR fraud incident was made public and is indicative of systemic problems within the program. The procedures RRB uses to verify a claimant’s work and earnings and the severity and duration of physical or mental impairments are inadequate to ensure that only eligible claimants qualify for T&P benefits. Standards for Internal Control in the Federal Government states that agencies should ensure that all transactions and other significant events are clearly documented. This would include determinations that claimants are entitled to benefits under the RRA. Such documentation could facilitate tracing these actions from initiation through completion of the final claim determination. Therefore, current and complete information about a claimant’s work and earnings history and alleged impairment is critical to establishing not only whether claimants are eligible for benefits, but also the correct benefit amount to be paid. To verify reported earnings, RRB relies on SSA’s Master Earnings File (MEF). However, the most recent earnings information contained within the MEF is for the last complete calendar year, and as a result, the data that RRB uses to determine eligibility may lag behind actual earnings by up to 12 months. Although RRB requires that claims examiners perform a detailed query of reported earnings in the MEF at the time of their initial determination, our review of case files showed that some determinations were based on data that were as much as 10 months old.
Three of the 10 case files we reviewed also did not include sufficient information to allow us to determine what additional steps, if any, were taken to verify that earnings in the year the claims were filed did not exceed program limits. In one of the case files, the claims examiner appeared to rely solely on the claimant’s statement that there were no current earnings, and in two others, the claimant provided no information on current year earnings. In discussing this issue, RRB officials noted that claims examiners routinely perform an electronic query of the MEF before a claim is approved for payment but may neglect to subsequently include a printout of the query in the case file. Regardless of this explanation, the absence of this documentation made it difficult to confirm that such queries were performed. As a result, without complete documentation of all evidence used to arrive at the initial determination, RRB lacks the ability to provide reasonable assurance that these determinations are being made in accordance with RRB policies and comply with relevant regulations. Although more current information on work and earnings is available, RRB has not explored these sources of information for T&P claims. According to RRB, its annual match of current RRB beneficiaries against the MEF, which helps target cases for CDRs, could help flag earnings that may go undetected prior to an initial claim being awarded. However, RRB has not reviewed this information to determine if the time lag for reported earnings has resulted in the award of ineligible claims or potential overpayments. In addition, the Department of Health and Human Services’ National Directory of New Hires (NDNH)—established in part to help states enforce child support orders against noncustodial parents—contains quarterly state wage information, which is also more recent than the annual wage information included in the MEF.
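The earnings-lag problem lends itself to a simple automated check: flag any determination whose newest posted earnings are older than an allowed window. A sketch under stated assumptions: the annual file is treated as complete through December 31 of the latest posted year, and the 12-month window and function names are illustrative, not RRB's actual procedures.

```python
from datetime import date

def flag_stale_earnings(determination_date, latest_earnings_year, max_lag_months=12):
    """Return True if the newest posted earnings could lag actual earnings
    by more than max_lag_months at the time of the determination.
    An annual earnings file is assumed complete only through December 31
    of the last full calendar year, so the lag is measured from that date."""
    last_posted = date(latest_earnings_year, 12, 31)
    lag_months = ((determination_date.year - last_posted.year) * 12
                  + determination_date.month - last_posted.month)
    return lag_months > max_lag_months

# A claim decided in October 2013 using earnings posted through 2012 is
# within the 12-month window; one relying only on 2011 earnings is not.
```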
The NDNH also includes data from state directories of new hires, state records of unemployment insurance benefits paid, and federal agency payroll data, all of which can be used to help establish a more recent picture of a claimant’s work and earnings. Access to the NDNH is limited by statute. Although RRB does not have specific legal authority to access it, RRB has considered obtaining access to the NDNH for the purpose of fraud prevention in its unemployment and sickness benefit programs, but abandoned the effort, citing cost as a key reason for not pursuing it further. In May 2011, in the wake of the LIRR fraud incident, the agency revisited the issue and studied the feasibility of using the NDNH to monitor the earnings level of individuals who are already receiving disability benefits. RRB concluded that doing so would not provide significant benefits due to costs associated with accessing the database and redesigning its processes; the need for legislation to grant RRB access; and potentially higher workloads, but it did not quantify the potential financial benefits. Furthermore, the study did not address whether it would be cost-effective to use the NDNH to obtain more current information about the earnings of people who apply for benefits. SSA, which has legal authority to access the NDNH, currently uses the NDNH to periodically monitor the earnings of those receiving Supplemental Security Income benefits, and to investigate current or recent alleged work activity that is not yet posted to the MEF for DI applicants and beneficiaries. Other tools for verifying claimants’ earnings may be available from the private sector, such as The Work Number—a commercial, publicly available data source. RRB procedures provide claims staff flexibility to decide when to verify medical evidence.
Field office claims representatives are responsible for assembling claim files and ensuring they are complete before forwarding them to headquarters for evaluation; however, RRB does not require them to verify that medical evidence is obtained from a reliable source. Specifically, RRB’s field operations manual advises claims representatives to “make no judgment as to the acceptability of a medical source” and that, in the unlikely situation that the only medical-related evidence is from a source considered unacceptable, a headquarters claims examiner is to direct any further development of medical evidence. The manual also notes that most claimants have impairments that are either continuing or worsening, and directs claims representatives to avoid unnecessary claims development (RRB Field Operations Manual, 1305.35.2). In such situations, the claims representative forwards the application to headquarters with an annotation asking that medical examinations be ordered by the appropriate claims examiner. Although claims representatives are a potentially valuable source of first-hand information about individuals filing T&P claims, and often represent the agency’s sole direct contact with claimants, our interviews and review of claims files showed that claims representatives’ observations are used infrequently. When compiling case files, claims representatives can include comments in designated parts of the application form, the customer contact log, or in the form used to transmit the file to headquarters. For example, a claimant who reports having a condition that prevents them from walking without assistance could be observed walking without a cane, or a claimant may present evidence of being unable to sleep without mentioning associated behaviors such as memory loss or difficulty concentrating that are clearly evident to the claims reviewer.
However, of the 10 cases we reviewed, 1 included remarks from a claims representative documenting observations about the claimant’s physical symptoms and another included observations regarding possible self-employment that the claimant had denied. In a third case, the transmittal notes appear to have been added after the claim had already been transmitted. RRB’s online customer contact log, used to track all interactions with claimants, can also be used to record observations, including whether observed behavior contradicts medical evidence. Although claims representatives and district managers said that field staff may record their observations about the claimant in customer contact logs, claims examiners told us that they review these remarks only for the purpose of determining whether there is further evidence to be forwarded. According to staff in the Westbury district office, a formal process for flagging suspicious medical evidence may have allowed the agency to flag potentially fraudulent claims from LIRR workers for additional scrutiny by claims examiners. RRB policies and procedures do not require that all initial determinations are reviewed by an independent person to ensure that there is sufficient evidence to support the determination. Standards for Internal Control in the Federal Government states that agencies should ensure that key duties and responsibilities are divided or segregated among different people to reduce the risk of error, waste, or fraud. However, RRB’s policies and procedures allow for discretion at the field office level regarding how complete the case file must be before it is forwarded to headquarters for a determination, and these files are subject to different levels of supervisory review. 
Some district managers stated that they make a point of reviewing virtually all claims developed in their offices before they are mailed to headquarters, while others review only a required sample of 10 percent of all claims, including those for other RRB benefit programs, chiefly due to workload volume or competing duties. In our review of case files, we observed that required information, such as the year that a claimant last attended school or when the application was signed, was missing from 4 of the 10 applications in the physical files we reviewed. In another case, the medical evidence was over 12 months old and new evidence was not developed, as required. Incomplete case file information raises questions about whether all the relevant information was properly considered in these cases. At the determination level, RRB policy allows for some claims to be approved without independent supervisory review. RRB policies generally allow examiners to use their judgment to decide which cases do not require independent review because an individual’s ailment meets or exceeds SSA’s Listing of Impairments; in such cases, the claims examiner can self-authorize the claim. Consequently, in recent years, about one-quarter to one-third of all T&P initial claims were approved by the same claims examiner who reviewed the application (see fig. 3). Such claims may be problematic if there is an error in judgment on the part of the claims examiner. RRB’s T&P program oversight process does not evaluate the accuracy of disability determinations or provide managers with regular feedback about the effectiveness of the determination process. According to RRB’s strategic plan and agency officials, RRB’s key program objectives are to make accurate and timely determinations and payments, and to pay accurate benefits to eligible claimants.
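The self-authorization pattern discussed above can be monitored with a simple segregation-of-duties metric: the share of approved claims on which no second party signed off. A sketch over a hypothetical claim-record layout (the field names are assumptions, not RRB's actual data model):

```python
def self_authorization_rate(claims):
    """Share of approved claims where the examiner who reviewed the claim
    also authorized it, i.e. received no independent second review.
    Each claim is a dict with 'examiner', 'authorizer', and 'approved'
    keys -- a hypothetical layout for illustration."""
    approved = [c for c in claims if c.get("approved")]
    if not approved:
        return 0.0
    self_auth = sum(1 for c in approved if c["examiner"] == c["authorizer"])
    return self_auth / len(approved)

claims = [
    {"examiner": "A", "authorizer": "A", "approved": True},   # self-authorized
    {"examiner": "B", "authorizer": "C", "approved": True},   # second review
    {"examiner": "A", "authorizer": "A", "approved": True},   # self-authorized
    {"examiner": "D", "authorizer": "D", "approved": False},  # denial, excluded
]
```

A rate in the range the report describes (roughly 0.25 to 0.33 of initial approvals) would indicate the share of claims exposed to a single examiner's error in judgment.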
RRB’s Program Evaluation and Management Services periodically conducts reviews of selected aspects of the T&P program; however, rather than providing routine feedback about the quality and accuracy of the determination process, these evaluations are narrowly focused and, according to RRB officials, generally examine compliance with procedures and guidance. RRB’s oversight is primarily focused on checking the accuracy of payment amounts, periodically reviewing policy compliance, and assessing the continued eligibility of already approved beneficiaries using CDRs and death record matching. While RRB conducts annual reviews of benefit accuracy, these reviews only measure the percentage of dollars paid correctly and do not evaluate whether the medical evidence supported the claimed disability or whether the process RRB used to establish eligibility led to an accurate determination. Occasionally, the division conducts full reviews of selected determinations, including the medical evidence; however, such reviews have been conducted on an ad hoc basis for internal purposes, have taken place after payments were initiated, and have focused on certain types of determinations, such as self-authorized claims. Standards for Internal Control in the Federal Government states that agency management should assess and continually monitor program performance to provide reasonable assurance that the agency is achieving its objectives. According to RRB officials, the CDR program is the agency’s response to major program integrity issues identified in the T&P program—though the number of work and medical CDRs completed has declined in recent years. For example, in fiscal year 2009, RRB conducted 610 work and medical CDRs; however, that number had declined to 235 in fiscal year 2012 (see fig. 4). According to RRB officials, the decline was, in part, a result of a corresponding decline in the number of staff reviewers.
In contrast to RRB, SSA monitors quality in its similar DI program by reviewing samples of disability determinations for accuracy prior to initiating benefit payments. SSA’s Office of Quality Review, which is in a separate division from initial claims examiners, conducts two types of quality assurance reviews prior to initiating disability payments: (1) ongoing quality assurance (QA) reviews and (2) preeffectuation (PER) reviews, which are intended to detect and correct improper disability determinations prior to benefits being paid. For its QA review, SSA pulls a random sample of 70 approvals and 70 denials per calendar quarter per state. For its PER review, SSA pulls a sample of cases predicted to be most likely to contain errors, which represents 50 percent of all disability approvals. For both reviews, QA staff evaluate the sampled cases to ensure that the medical evidence supports the claimed disability and that the evidence and the determination conform to SSA operating policies and procedures. QA staff then communicate any errors and determination reversals to the initial examiners and use the information collected, including how and where errors occurred, to provide general feedback on program performance. The lack of routine up-front reviews of determination accuracy and the quality of the determination process leaves RRB at risk of paying benefits to ineligible individuals. One RRB official attributed RRB’s lack of such oversight to the agency’s belief that delaying benefits to conduct accuracy reviews before sending the payment would be detrimental to customer service. OIG officials stated that RRB places greater focus on paying benefits than on ensuring benefits are warranted, and noted that if RRB strengthened its quality assurance framework prior to disability approval and payment, fewer improper claims would be awarded.
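SSA's two-track review scheme described above can be sketched in a few lines. The fixed sample size of 70 per category and the error-prone half of approvals come from the text; the error-scoring function and record layout are assumptions for illustration, and the accuracy figure shown is the kind of statistic tracked against SSA's 97 percent target, not an actual SSA result.

```python
import random

def qa_sample(approvals, denials, n=70, seed=None):
    """Ongoing QA review: fixed-size random samples of approved and denied
    determinations (drawn per calendar quarter, per state in SSA's scheme)."""
    rng = random.Random(seed)
    return (rng.sample(approvals, min(n, len(approvals))),
            rng.sample(denials, min(n, len(denials))))

def per_sample(approvals, error_score):
    """Preeffectuation review: the half of approvals predicted most likely
    to contain errors, ranked by a (hypothetical) scoring function, and
    reviewed before any benefits are paid."""
    ranked = sorted(approvals, key=error_score, reverse=True)
    return ranked[: len(ranked) // 2]

def accuracy_rate(reviewed, errors_found):
    """Share of reviewed determinations found error-free."""
    return (reviewed - errors_found) / reviewed
```

The key design point is that both samples are pulled, and errors corrected, before payments begin, which is precisely the up-front step the report finds missing at RRB.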
RRB does not publicly report its approval rate for disability claims in its annual performance and accountability report; however, as noted above, RRB approved benefits in 78 percent of T&P cases decided in fiscal year 2012 (977 of 1,254). RRB also lacks agency-wide performance goals that emphasize the importance of determination accuracy. Specifically, RRB has established performance goals that track the timeliness of disability determinations and payments, but has not established goals that track whether the right people are being awarded benefits over time. Federal law requires that agencies establish outcome-related goals and objectives that relate to key agency priorities. For the T&P program, RRB’s strategic goals focus on the accuracy of payment calculations, the timeliness of the disability determination process, and the timeliness of payments. However, RRB’s performance goals do not measure or track the accuracy of disability determinations—in other words, whether benefits are being correctly awarded or denied. In its similar DI program, SSA has a performance goal that tracks the accuracy rate for initial disability determinations over time, in addition to goals that track timeliness. SSA’s accuracy rate goal measures the percentage of determinations that contained errors as identified during their regular quality assurance process reviews, and SSA sets its fiscal year 2013 target accuracy rate at 97 percent. Without similarly tracking and reporting on the accuracy of T&P disability determinations in addition to measuring payment accuracy and timeliness, RRB does not know whether it is paying benefits only to eligible individuals and cannot observe trends over time. RRB has not engaged in a comprehensive effort to continuously identify and prevent potential fraud program-wide even after the high-profile LIRR incident exposed fraud as a key program risk. 
Fraud that has occurred in the occupational disability program may suggest a broader risk of fraud in RRB’s disability programs because medical documentation in a claimant’s case file may be used to justify either occupational or T&P benefits. According to OIG officials, doctors often document occupational disabilities in such a way that a claimant would also qualify for the T&P program. In addition, RRB officials stated that, while randomly assigning claims for examination at headquarters is intended to prevent collusion between examiners and claims representatives in the field offices, it also limits the ability of examiners to recognize patterns of potential fraud, which RRB officials noted was made apparent by the LIRR incident. Since that incident, RRB has increased its scrutiny of claims from LIRR workers—for example, by ordering more consultative medical exams. However, its other actions to improve fraud awareness and prevention have been limited and narrowly focused. RRB hired an analyst to conduct ongoing reviews of agency data to identify patterns that suggest potential fraud, but the analyst’s work has thus far been focused on the occupational disability program. In 2011, RRB also conducted an analysis of 89 cases of proven fraud in its occupational and T&P disability programs to identify common characteristics that could aid in identifying at-risk cases earlier in the process, but RRB did not draw any conclusions about new ways to identify potential fraud and, as a result, did not make any system-wide changes to the determination process. RRB officials stated that this work was not intended to lead to changes in the process, but to identify other areas for examination. 
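The pattern analysis described above, looking for impairments or treating physicians that appear unusually often, amounts to a frequency count over claim records. A minimal sketch with a hypothetical record layout and threshold; real screening would need a baseline for what counts as "unusual":

```python
from collections import Counter

def unusual_frequencies(claims, field, threshold):
    """Return values of `field` (e.g., treating physician or impairment
    code) that appear in more than `threshold` claims -- the kind of
    pattern the text says RRB collects LIRR data to detect. The dict
    layout and threshold are illustrative assumptions."""
    counts = Counter(c[field] for c in claims)
    return {value: n for value, n in counts.items() if n > threshold}

claims = [{"physician": "Dr. X"}, {"physician": "Dr. X"},
          {"physician": "Dr. X"}, {"physician": "Dr. Y"}]
flagged = unusual_frequencies(claims, "physician", threshold=2)
```

Randomly distributing claims among examiners, as RRB does, makes such patterns invisible to any one examiner; a centralized count like this is one way to recover them.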
In addition, while OIG officials said they have encouraged RRB staff to refer suspicious claims to the OIG’s office before approving disability benefits—instead of chasing the benefit in a subsequent fraud investigation—RRB has not referred any suspicious claims to the OIG in the year and a half since that guidance was provided. RRB’s efforts to identify and prevent potential fraud have been limited and are focused primarily on claims submitted by LIRR workers, which may leave RRB’s T&P program vulnerable to risks of fraud. Standards for Internal Control in the Federal Government states that agencies should identify program risks from both internal and external sources, analyze possible effects, and take action to manage the risks. According to RRB officials, since the LIRR incident, the agency has become highly alert to potential abuse and is in the process of evaluating and implementing safety measures. However, OIG officials stated that RRB’s process for reviewing medical evidence and making disability determinations still does not allow RRB to effectively identify potential medical fraud because the agency does not have sufficient medical expertise on staff and the process does not include reviews of initial determinations by independent doctors. According to RRB officials, the 10 cases we reviewed included 8 that had been reviewed by SSA’s medical experts and 2 that had been sent to the RRB’s medical contractors. While 8 of the case files we reviewed contained copies of SSA’s own disability determinations that were reviewed by SSA’s doctors, the same case files did not contain evidence that SSA doctors had reviewed RRB’s T&P disability determinations. Moreover, RRB management has not fostered an environment of fraud awareness throughout the agency. While RRB has initiated fraud awareness training, agency participation has been incomplete and updates and refreshers have been sporadic. 
The training program has included instructor-led sessions for headquarters staff and recorded modules for field service personnel. According to RRB, 59 of about 566 headquarters staff completed fraud awareness training in 2011, including all members of the Disability Benefits Division, and all 53 field offices reported viewing the recorded program. However, claims representatives in all four of the district offices that we contacted said they had not received any training directly related to fraud awareness. RRB officials stated that they relied on manager certification that the training was completed in 2011 and thought that staff we interviewed may have forgotten about the training since it was more than 2 years ago. After learning of our findings, RRB officials issued a directive to all network managers to confirm by March 2014 that all field staff had completed fraud awareness training. According to RRB, 29 claims examiners and analysts at headquarters also participated in a two-part class that revisited fraud topics in 2013; however, this follow-up course was not offered to field offices. In addition, agency officials stated that RRB’s fraud awareness training has been ad hoc and that no annual refresher courses are required of, or have been offered to staff. In addition to the training courses, RRB officials stated that they distributed a twice-yearly newsletter intended to heighten fraud awareness. The newsletters provide examples of disability fraud in the news and links to the OIG website, but do not include messages from management or information regarding other training resources. Despite RRB’s efforts, claims representatives in two of the four district offices we contacted said that it was not their job to be on the lookout for potential fraud. 
For example, one claims representative said that even if something suspicious appears on an application and a claimant has signed the application, the claims representative has no responsibility to draw attention to the suspicious information since it is the responsibility of headquarters’ staff to evaluate claimants’ answers. In addition, a district manager from the same office stated that even when faced with obvious patterns of potentially fraudulent activity in the past, claims representatives had no mechanism by which to flag the issue and generally have not been encouraged to do so. Without agency-wide commitment to be alert to potential fraud, including having the tools and training to identify suspicious claims, RRB may not have sufficient information or context to make accurate disability determinations and improper payments may result. The RRB’s total and permanent disability program provides an important safety net for individuals who are unable to work due to a disability. The program provided $276 million in benefits to 12,970 beneficiaries in fiscal year 2012 alone. While our review shows that RRB has taken some steps to address potential fraud within the program, its existing policies and processes impede its ability to prevent improper payments or to detect and prevent fraudulent claims system-wide. In particular, RRB’s continued reliance on outdated earnings information to identify beneficiaries who may not be eligible for benefits, or on insufficient medical evidence to make accurate initial determinations means that the agency cannot ensure it is able to detect and prevent improper payments including some that can potentially be very large. As a result, RRB has placed itself in a “pay and chase” mode that stretches limited staff and budgetary resources. Absent more timely sources of earnings data and high-quality medical claim information to inform the determination process, this problem is likely to persist. 
In addition, the agency is further at risk due to its policies that allow claims examiners to unilaterally approve selected claims without independent supervisory review. We recognize that ensuring the integrity of the T&P disability process presents a challenge for RRB. However, the lack of a robust quality assurance and continuous improvement framework has hindered RRB’s ability to identify potential program integrity risks and aspects of the process that need to be improved. Absent comprehensive agency-wide performance goals and metrics to track and report on the accuracy of T&P determinations, RRB is also limited in its ability to monitor the extent to which the agency is making correct determinations and reduce its exposure to making improper payments. Finally, without clear policies and procedures for detecting, preventing, and addressing potentially fraudulent claims, RRB is unable to ensure the integrity of its process system-wide and that known program risks have been addressed. The weaknesses we identified in RRB’s existing determination processes and policies require sustained management attention and a more proactive stance by the agency. Without such a commitment to fraud awareness and prevention, fraudulent claims may go undetected, and the agency risks undermining public confidence in its ability to administer the important programs under its jurisdiction. To enhance RRB’s ability to prevent improper payments and deter fraud in the T&P disability program, we recommend that the Railroad Retirement Board Members direct RRB staff to:
1. explore options to obtain more timely earnings data to ensure that claimants are working within allowable program limits prior to being awarded benefits;
2. revise the agency’s policy concerning the supervisory review and approval of determinations to ensure that all T&P cases are reviewed by a second party;
3. strengthen oversight of the T&P determination process by establishing a regular quality assurance review of initial disability determinations to assess the quality of medical evidence, determination accuracy, and process areas in need of improvement;
4. develop performance goals to track the accuracy of disability determinations; and
5. develop procedures to identify and address cases of potential fraud before claims are approved, requiring annual training on these procedures for all agency personnel, and regularly communicating management’s commitment to these procedures and to the principle that fraud awareness, identification, and prevention is the responsibility of all staff.
We obtained written comments on a draft of this report from the Railroad Retirement Board. RRB agreed with all five of the recommendations we made to strengthen its management controls over the T&P disability determination process, and noted that it has already taken steps to implement the report’s recommendations directed at improving the agency’s ability to detect and deter fraud. RRB’s formal comments are reproduced in appendix I. RRB also provided additional technical comments, which have been incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Railroad Retirement Board, relevant congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, David Lehrer (Assistant Director), Arthur T.
Merriam Jr. (Analyst-in-Charge), Carl Barden, Sue Bernstein, Jeremy Cox, Patrick Dibattista, Justin Dunleavy, Holly Dye, Alexander Galuten, Michael Kniss, Theresa Lo, Sheila McCoy, Jean McSween, Lorin Obler, Regina Santucci, and Walter Vance made key contributions to this report. Supplemental Security Income: SSA Has Taken Steps to Prevent and Detect Overpayments, but Additional Actions Could Be Taken to Improve Oversight. GAO-13-109. Washington, D.C.: December 14, 2012. Disability Insurance: SSA Can Improve Efforts to Detect, Prevent, and Recover Overpayments. GAO-11-724. Washington, D.C.: July 27, 2011. Use of the Railroad Retirement Board Occupational Disability Program across the Rail Industry. GAO-10-351R. Washington, D.C.: February 4, 2010. Railroad Retirement Board: Review of Commuter Railroad Occupational Disability Claims Reveals Potential Program Vulnerabilities. GAO-09-821R. Washington, D.C.: September 9, 2009. Railroad Retirement Board Disability Determinations. GAO/HRD-84-11. Washington, D.C.: July 20, 1984.
In recent years, the U.S. Department of Justice has investigated and prosecuted railroad workers who were suspected of falsely claiming over $1 billion in disability benefits, raising concerns about RRB's disability claims process. GAO was asked to evaluate the integrity of RRB's disability program. This report examines (1) whether RRB's policies and procedures for processing claims were adequate to ensure that only eligible claimants receive T&P disability benefits; and (2) the extent to which RRB's management strategy ensures that approved claims are accurate and addresses program risks. To answer these questions, GAO reviewed T&P determination policies and procedures, interviewed RRB officials in headquarters and four district offices—selected for geographic dispersion—reviewed relevant federal laws and regulations, and reviewed a nongeneralizable random sample of 10 T&P cases that were approved in fiscal year 2012 to illustrate RRB's claims process. The Railroad Retirement Board's (RRB) policies and procedures for processing total and permanent (T&P) disability benefit claims do not adequately ensure that claimants meet program eligibility requirements. To find a railroad worker eligible for T&P benefits, RRB makes an independent determination of disability using the same general criteria that the Social Security Administration (SSA) uses to administer its Disability Insurance (DI) program—that is, a worker must have a medically determinable physical or mental impairment that (1) has lasted (or is expected to last) at least 1 year or is expected to result in death and (2) prevents them from engaging in substantial gainful activity, defined as work activity that involves significant physical or mental activities performed for pay or profit. 
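The two-part disability test described above can be sketched as a simple screen. This is a hypothetical illustration only: the duration rule is simplified, and the substantial-gainful-activity earnings limit shown is an assumed placeholder, not RRB's or SSA's actual figure.

```python
# Hypothetical sketch of the two-prong T&P eligibility screen described above.
# SGA_MONTHLY_LIMIT is an assumed placeholder, not an official program value.
SGA_MONTHLY_LIMIT = 1000.00

def meets_duration_prong(impairment_months: int, expected_to_result_in_death: bool) -> bool:
    """Prong 1: impairment has lasted (or is expected to last) at least
    1 year, or is expected to result in death."""
    return impairment_months >= 12 or expected_to_result_in_death

def meets_earnings_prong(monthly_earnings: float) -> bool:
    """Prong 2: earnings fall below the substantial-gainful-activity level."""
    return monthly_earnings < SGA_MONTHLY_LIMIT

def potentially_eligible(months: int, terminal: bool, earnings: float) -> bool:
    # Both prongs must be satisfied before a claim could move forward.
    return meets_duration_prong(months, terminal) and meets_earnings_prong(earnings)

print(potentially_eligible(months=14, terminal=False, earnings=400.0))   # True
print(potentially_eligible(months=14, terminal=False, earnings=2500.0))  # False
```

In practice, of course, the determination rests on medical evidence and examiner judgment rather than a mechanical check; the sketch only captures the structure of the two criteria.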
RRB's policy states that, to establish eligibility for financial benefits, examiners should assess medical records for evidence that a claimant is too severely disabled to maintain gainful employment, and establish that a claimant's earnings fall below a certain threshold. However, the procedure for establishing if claimants meet the income threshold relies on SSA earnings data that can be up to 1 year old. Sources of more timely earnings information, such as the Department of Health and Human Services' National Directory of New Hires and The Work Number, exist and include both non-railroad and self-employment earnings, but RRB has not sufficiently explored the possibility of using them to help establish eligibility for T&P disability benefits. In addition, RRB lacks a policy to require independent supervisory review for all claims determinations. As a result, the procedures that claims examiners use to review a claim also allow them sole discretion to decide whether to approve it. Between 2008 and 2012, RRB data show that about one-quarter to one-third of T&P claims were considered and approved without independent supervisory review. According to generally accepted standards for internal controls in the federal government, essential tasks—such as establishing and determining that benefits should be awarded—should be performed by separate individuals to reduce the risk of fraud. RRB's strategy for post-eligibility quality assurance review is inadequate to ensure that disability determinations for approved claims are accurate and does not address program risks due to potential fraud. While RRB checks the accuracy of payment amounts, and periodically reviews compliance with its policies, it does not evaluate the accuracy of disability determinations made or regularly monitor the effectiveness of the determination process. Similarly, performance goals for the disability program focus on measures of timeliness and do not track the accuracy of determinations made.
The agency also has not engaged in a comprehensive effort to continuously identify fraud within the program, even after a high-profile incident exposed fraud as a key program risk. RRB has conducted some analyses to identify patterns in claims data that may suggest potential fraud, but the work has not led to new practices in the T&P program. Finally, while RRB officials stated that the agency has developed and provided some fraud awareness training, staff in all four of the district offices that GAO interviewed did not recall receiving this training, and some stated that it was not their responsibility to be alert for potential fraud, further limiting RRB's ability to ensure it is paying benefits only to eligible claimants. GAO recommends that RRB explore options for obtaining more timely earnings information; revise its policy concerning the supervisory review of disability claims; establish a regular quality assurance review of T&P disability decisions; develop a performance goal to track decision accuracy; and develop and implement fraud awareness policies, procedures, and annual training. RRB agreed with these recommendations.
Fiscal year 2011 marked the eighth year of implementation of the Improper Payments Information Act of 2002 (IPIA), as well as the first year of implementation for the Improper Payments Elimination and Recovery Act of 2010 (IPERA). IPIA requires executive branch agencies to annually review all programs and activities to identify those that are susceptible to significant improper payments, estimate the annual amount of improper payments for such programs and activities, and report these estimates along with actions taken to reduce improper payments for programs with estimates that exceed $10 million. IPERA, enacted July 22, 2010, amended IPIA by expanding on the previous requirements for identifying, estimating, and reporting on programs and activities susceptible to significant improper payments and expanding requirements for recovering overpayments across a broad range of federal programs. IPERA included a new, broader requirement for agencies to conduct recovery audits, where cost effective, for each program and activity with at least $1 million in annual program outlays. This IPERA provision significantly lowers the threshold for required recovery audits from $500 million to $1 million and expands the scope for recovery audits to all programs and activities. Another new IPERA provision calls for federal agencies’ inspectors general to annually determine whether their respective agencies are in compliance with key IPERA requirements and to report on their determinations. Under Office of Management and Budget (OMB) implementing guidance, these reports are required to be completed within 120 days of the publication of the federal agencies’ annual PAR or AFR, with the fiscal year 2011 reports for most agencies due on March 15, 2012. OMB continues to play a key role in the oversight of the governmentwide improper payments problem.
OMB has established guidance for federal agencies on reporting, reducing, and recovering improper payments and has established various work groups responsible for developing recommendations aimed at improving federal financial management activities related to reducing improper payments. See OMB, Circular No. A-136 Revised, Financial Reporting Requirements (Oct. 27, 2011); OMB Memorandum M-11-16, Issuance of Revised Parts I and II to Appendix C of OMB Circular A-123 (Apr. 14, 2011); OMB Memorandum M-11-04, Increasing Efforts to Recapture Improper Payments by Intensifying and Expanding Payment Recapture Audits (Nov. 16, 2010); and OMB Memorandum M-10-13, Issuance of Part III to OMB Circular A-123, Appendix C (Mar. 22, 2010). Federal agencies reported improper payment estimates totaling $115.3 billion in fiscal year 2011, a decrease of $5.3 billion from the revised prior year reported estimate of $120.6 billion. Based on the agencies’ estimates, OMB estimated that fiscal year 2011 improper payments comprised about 4.7 percent of the $2.5 trillion in total spending during that year for the agencies’ related programs (i.e., a 4.7 percent error rate). The decrease in the fiscal year 2011 estimate is attributed primarily to decreases in program outlays for the Department of Labor’s Unemployment Insurance program, and decreases in reported error rates for fiscal year 2011 (compared to fiscal year 2010) for the Department of the Treasury’s (Treasury) Earned Income Tax Credit program and the Department of Health and Human Services’ (HHS) Medicare Advantage program. The $115.3 billion in estimated federal improper payments reported for fiscal year 2011 was attributable to 79 programs spread among 17 agencies. Ten of these 79 programs account for most of the $115.3 billion of reported improper payments. Specifically, as shown in table 1, these 10 programs accounted for about $107 billion or 93 percent of the total estimated improper payments agencies reported for fiscal year 2011.
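As a rough arithmetic check, the headline figures above follow directly from one another. This is a minimal sketch using the rounded dollar amounts as reported; the $2.5 trillion outlay figure is rounded in the source, so the computed rate comes out slightly below OMB's reported 4.7 percent.

```python
# Reported governmentwide improper payment figures (dollars in billions).
fy2011_estimate = 115.3
fy2010_estimate = 120.6      # revised prior-year estimate
outlays = 2_500.0            # rounded FY2011 outlays for the related programs
top10_estimate = 107.0       # the 10 largest programs' combined estimate

decrease = fy2010_estimate - fy2011_estimate
error_rate = fy2011_estimate / outlays * 100
top10_share = top10_estimate / fy2011_estimate * 100

print(f"Year-over-year decrease: ${decrease:.1f} billion")
print(f"Approximate error rate: {error_rate:.1f}%")   # ~4.6% with rounded outlays; OMB reported 4.7%
print(f"Top 10 programs' share: {top10_share:.0f}%")  # ~93%
```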
The 10 programs with the highest error rates had rates ranging from 11.0 percent to 28.4 percent. Specifically, as shown in table 2, those 10 programs accounted for $45 billion, or 39 percent of the total estimated improper payments for fiscal year 2011. Since the implementation of IPIA in 2004, federal agencies have continued to identify new programs or activities as risk-susceptible and report estimated improper payment amounts. The fiscal year 2011 governmentwide estimate of $115.3 billion included improper payment estimates for nine additional programs that did not report an estimate in fiscal year 2010, with the HHS Medicare Prescription Drug Benefit (Part D) program having the highest estimate of the newly included programs. We view these agencies’ efforts as a positive step towards increasing the transparency of the magnitude of improper payments. However, three additional programs providing estimates in fiscal year 2011 were not included in the governmentwide totals because their estimation methodologies were still under development. The three excluded programs were the Department of Education’s Direct Loan program, the Department of Defense’s (DOD) Defense Finance and Accounting Service Commercial Pay, and DOD’s Army Corps of Engineers Commercial Pay. A number of federal agencies have reported progress in reducing improper payment error rates in some programs and activities. For example, we identified 40 federal agency programs, or about 50 percent of the total programs reporting improper payment estimates in fiscal year 2011, that reported a reduction in the error rate of estimated improper payments in fiscal year 2011 when compared to fiscal year 2010 error rates. We caution, however, that these rates have not been independently verified or audited.
The following are examples of agencies that reported reductions in program error rates and estimated improper payment amounts (along with corrective actions to reduce improper payments) in their fiscal year 2011 PARs, AFRs, or annual reports. Treasury reported that the fiscal year 2011 Earned Income Tax Credit (EITC) program’s estimated improper payment amount decreased from the fiscal year 2010 amount of $16.9 billion to $15.2 billion, which represented a decrease in the error rate from 26.3 percent to 23.5 percent. Treasury reported that corrective actions taken to reduce improper payments primarily focused on completing examinations on tax returns that claimed the EITC before issuing the EITC portion of the refund, identifying math or other statistical irregularities in taxpayer returns, and comparing income information provided by the taxpayer with matching information from employers to identify discrepancies. HHS reported that the fiscal year 2011 estimated improper payment amount for the Medicare Advantage (Part C) program decreased from the fiscal year 2010 reported amount of $13.6 billion to $12.4 billion, which represented a decrease in the error rate from 14.1 percent to 11.0 percent. HHS reported that it reduced payment errors by continuing to routinely implement controls in its payment system to ensure accurate and timely payments, and implementing three key initiatives, including contract level audits, physician outreach, and Medicare Advantage organization guidance and training. In addition, agencies have further developed the use of recovery audits to recapture improper payments. In 2010, the President set goals, as part of the Accountable Government Initiative, for federal agencies to reduce overall improper payments by $50 billion, and recapture at least $2 billion in improper contract payments and overpayments to healthcare providers, by the end of fiscal year 2012. 
For fiscal year 2011, OMB reported that governmentwide agencies recaptured $1.25 billion in overpayments to contractors and vendors. Over half of this amount, $797 million, can be attributed to the Medicare recovery audit contractor program, which identifies improper Medicare payments—both overpayments and underpayments—in all 50 states. Cumulatively, OMB reported $1.9 billion recaptured from overpayments to contractors, vendors, and healthcare providers for fiscal years 2010 and 2011 towards the President’s goal of recapturing at least $2 billion by the end of fiscal year 2012. Despite reported progress in reducing estimated improper payment amounts and error rates for some programs and activities during fiscal year 2011, the federal government continues to face challenges in determining the full extent of improper payments. Specifically, some agencies have not yet reported estimates for all risk-susceptible programs and some agencies’ estimating methodologies need to be refined. We have also found that internal control weaknesses exist, heightening the risk of improper payments occurring. Until federal agencies are able to implement effective processes to completely and accurately identify the full extent of improper payments and implement appropriate corrective actions to effectively reduce improper payments, the federal government will not have reasonable assurance that the use of taxpayer funds is adequately safeguarded. We are currently working on engagements related to improper payment reporting at both DOD and HHS. Furthermore, as I will discuss later in this statement, additional analysis is needed to assess the root causes of improper payments, a key factor in identifying and implementing effective corrective actions. We found that not all agencies have developed improper payment estimates for all of the programs and activities they identified as susceptible to significant improper payments.
Specifically, three federal entities did not report fiscal year 2011 estimated improper payment amounts for four risk-susceptible programs. In one example, HHS’s fiscal year 2011 reporting cited various statutory barriers that hindered it from reporting improper payment estimated amounts. HHS cited statutory limitations for its state-administered Temporary Assistance for Needy Families (TANF) program, which prohibited it from requiring states to participate in developing an improper payment estimate for the TANF program. Despite these limitations, HHS officials stated that they will continue to work with states and explore options to allow for future estimates for the program. For fiscal year 2011, the TANF program reported outlays of about $17 billion. For another program, HHS cited the Children’s Health Insurance Program Reauthorization Act of 2009 as prohibiting HHS from calculating or publishing any national or state-specific payment error rates for the Children’s Health Insurance Program (CHIP) until 6 months after the new payment error rate measurement rule became effective on September 10, 2010. According to its fiscal year 2011 agency financial report, HHS plans to report estimated improper payment amounts for CHIP in fiscal year 2012. For fiscal year 2011, the CHIP program reported federal outlays of about $9 billion. As previously discussed, OMB excluded estimated improper payment amounts for two DOD programs from the governmentwide total because those programs were still developing their estimating methodologies—Defense Finance and Accounting Service (DFAS) Commercial Pay, with fiscal year 2011 outlays of $368.5 billion, and U.S. Army Corps of Engineers Commercial Pay, with fiscal year 2011 outlays of $30.5 billion. In DOD’s fiscal year 2011 agency financial report, DOD reported that improper payment estimates for these programs were based on improper payments detected through various pre-payment and post-payment review processes rather than using methodologies similar to those used for DOD’s other programs, including statistically valid random sampling or reviewing 100 percent of payments. In its fiscal year 2011 agency financial report, DOD stated that it plans to begin statistical sampling of the Commercial Pay program in fiscal year 2012. Both GAO and the DOD Inspector General (IG) have previously reported on weaknesses in DOD’s payment controls, including weaknesses in its process for assessing the risk of improper payments and reporting estimated amounts. DOD’s payment controls are hindered by problems related to inadequate payment processing, poor financial systems, and inadequate supporting documentation. Nonetheless, the DOD Comptroller testified in May 2011 that DOD assessed its commercial payment program as low risk because DOD management had concluded that it had a highly effective pre-payment examination process. That process includes a software tool that tests for potential improper payments before disbursement. However, the DOD IG has reported that the tool had a false positive rate of more than 95 percent and that its use was not standardized across payment systems. Additionally, the DOD IG reported that DOD’s risk of making improper payments was high and identified deficiencies in DOD’s estimate of high-dollar overpayments that caused it to underreport the amount of improper payments made. Until DOD fully and effectively implements a statistically valid estimating process for its commercial payments and addresses the known control deficiencies in its commercial payment processes, the governmentwide improper payment estimates are not complete. For fiscal year 2011, two agency auditors reported on compliance issues with IPIA and IPERA as part of their 2011 financial statement audits. Specifically, the Department of Agriculture (USDA) auditors identified noncompliance with the requirements of IPERA regarding the design of program internal controls related to improper payments. In the other noncompliance issue, while HHS estimated an annual amount of improper payments for some of its risk-susceptible programs, a key requirement of IPIA, it did not report an improper payment estimate for its TANF and CHIP programs for fiscal year 2011. Fiscal year 2011 marked the eighth consecutive year that auditors for HHS reported noncompliance issues with IPIA. A number of actions are under way across the federal government to help advance improper payment reduction goals. These initiatives, as well as additional actions in the future, will be needed to advance the federal government’s efforts to reduce improper payments. Identifying and analyzing the root causes of improper payments is key to developing effective corrective actions and implementing the controls needed to advance the federal government’s efforts to reduce and prevent improper payments. In this regard, implementing strong preventive controls can serve as the front-line defense against improper payments. Proactively preventing improper payments increases public confidence in the administration of benefit programs and avoids the difficulties associated with the “pay and chase” aspects of recovering overpayments. For example, addressing program design issues that are a factor in causing improper payments may be an effective preventive strategy to be considered. Effective monitoring and reporting can also help detect emerging issues. In addition, agencies can also enhance detective controls to identify and recover overpayments.
For instance, enhancing incentives for grantees, such as state and local governments, could help increase attention to preventing, identifying, and recovering improper payments. Agencies cited a number of causes for the estimated $115.3 billion in reported improper payments, including insufficient documentation; incorrect computations; changes in program requirements; and, in some cases, fraud. Beginning in fiscal year 2011, according to OMB’s guidance, agencies were required to classify the root causes of estimated improper payments into three general categories for reporting purposes: (1) documentation and administrative errors, (2) authentication and medical necessity errors, and (3) verification errors. Information on the root causes of the current improper payment estimates is necessary for agencies to target effective corrective actions and implement preventive measures. While agencies generally reported some description of the causes of improper payments for their respective programs in their fiscal year 2011 reports, many agencies did not use the three categories to classify the types of errors and quantify how many errors can be attributed to each category. Of the 79 programs with improper payment estimates in fiscal year 2011, we found that agencies reported root cause information using the required categories for 42 programs in their fiscal year 2011 PARs and AFRs. Together, these programs represented about $46 billion, or 40 percent of the total reported $115 billion in improper payment estimates for fiscal year 2011. Of the $46 billion, the estimated improper payment amounts were spread across the three categories, with documentation and administrative errors being cited most often. We did not calculate the dollar amounts in each category due to the imprecise narratives included in some of the agencies’ reporting of identified causes, which would have required more detailed information and/or detailed examination of the underlying data.
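The category tallying described above can be sketched as follows. The program names and dollar amounts are invented for illustration; only the three category labels come from OMB's guidance as described in the text.

```python
from collections import defaultdict

# Illustrative (invented) program estimates, in billions of dollars, each
# tagged with one of OMB's three required root-cause categories.
estimates = [
    ("Program A", "documentation and administrative errors", 12.0),
    ("Program B", "authentication and medical necessity errors", 7.5),
    ("Program C", "verification errors", 3.0),
    ("Program D", "documentation and administrative errors", 4.5),
]

# Sum the estimated amounts attributable to each category.
totals = defaultdict(float)
for _name, category, amount in estimates:
    totals[category] += amount

# Report categories from largest to smallest share.
for category, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category}: ${amount:.1f} billion")
```

A tally like this is only possible when agencies actually report amounts against the required categories, which is the gap the passage above describes.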
Nonetheless, additional analysis regarding the root causes is needed in order to identify and implement effective corrective and preventive actions in the various programs. Many agencies and programs are in the process of implementing preventive controls to avoid improper payments, including overpayments and underpayments. Preventive controls may involve a variety of activities such as upfront validation of eligibility, predictive analytic tests, training programs, and timely resolution of audit findings. Further, addressing program design issues that are a factor in causing improper payments may be an effective preventive strategy to be considered.
Upfront eligibility validation through data sharing. Data sharing allows entities that make payments—to contractors, vendors, participants in benefit programs, and others—to compare information from different sources to help ensure that payments are appropriate. When effectively implemented, data sharing can be particularly useful in confirming initial or continuing eligibility of participants in benefit programs and in identifying improper payments that have already been made. Analyses and reporting on the extent to which agencies are participating in data sharing activities, and additional data sharing efforts that agencies are currently pursuing or would like to pursue, is another important element needed to advance the federal government’s efforts to reduce improper payments. For example, the Department of Labor (Labor) reported that its Unemployment Insurance Program utilizes HHS’s National Directory of New Hires Database to improve the ability to detect overpayments due to individuals who claim benefits after returning to work—the largest single cause of overpayments reported in the program. In June 2011, Labor established the mandatory use of the database for state benefit payment control no later than December 2011. Labor also issued a program letter that included recommended operating procedures for cross-matching activity for the National and State Directories of New Hires. In another case, to address the issue of inaccuracy of self-reported financial income on applications for student aid, the Department of Education (Education), in conjunction with the Internal Revenue Service (IRS), implemented a 6-month pilot version of an IRS data retrieval tool in January 2010 for its Pell Grant Program. The tool allows student aid applicants and, as needed, parents of applicants, to transfer certain tax return information from the IRS directly to Education’s online application. Education reported that nearly 3.5 million students used the data exchange tool, representing approximately 21 percent of the applications submitted for the 2011-2012 academic year.
Predictive analytic technologies. The analytic technologies used by HHS’s Centers for Medicare and Medicaid Services (CMS) are examples of preventive techniques that may be useful for other programs to consider. The Small Business Jobs Act of 2010 requires CMS to use predictive modeling and other analytic techniques—known as predictive analytic technologies—both to identify and to prevent improper payments under the Medicare fee-for-service program. These predictive analytic technologies will be used to analyze and identify Medicare provider networks, billing patterns, and beneficiary utilization patterns and detect those that represent a high risk of fraudulent activity. Through such analysis, unusual or suspicious patterns or abnormalities can be identified and used to prioritize additional review of suspicious transactions before payment is made. The legislation required that contractors selected begin using these technologies on July 1, 2011, in the 10 states identified by CMS as having the highest risk of fraud, waste, or abuse in Medicare fee-for-service payments. Rather than focusing on the 10 states, CMS contractors began using these technologies to screen all fee-for-service claims nationwide prior to payment as of June 30, 2011, through CMS’s new Fraud Prevention System.
Training programs for providers, staff, and beneficiaries. Training can be a key element in any effort to prevent improper payments from occurring. This can include both training staff on how to prevent and detect improper payments and training providers or beneficiaries on program requirements. For example, the Medicaid Integrity Institute, an initiative of CMS’s Medicaid Integrity Group (MIG), trains state-level staff and facilitates networking by sponsoring free workshops for states. In addition, the MIG sponsors education programs for providers and beneficiaries, such as for pharmacy providers, to promote best prescribing practices and appropriate prescribing guidelines based on Food and Drug Administration labeling, potentially reducing improper payments. GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: Nov. 1999). Assessing the extent to which inconsistent program requirements, such as eligibility criteria and requirements for provider enrollment, contribute to improper payments would lend insight to developing effective strategies for enhancing compliance and may identify opportunities for streamlining or changing eligibility or other program requirements. Although strong preventive controls remain the frontline defense against improper payments, agencies also need effective detection techniques to quickly identify and recover those overpayments that do occur. Detection activities play a significant role not only in identifying improper payments, but also in providing data on why these payments were made and, in turn, highlighting areas that need strengthened prevention controls. The following are examples of key detection techniques to be considered.
Data mining.
Data mining is a computer-based control activity that analyzes diverse data for relationships that have not previously been discovered. The central repository of data commonly used to perform data mining is called a data warehouse. Data warehouses store tables of historical and current information that are logically grouped. As a tool in managing improper payments, applying data mining to a data warehouse allows an organization to efficiently query the system to identify potential improper payments, such as multiple payments for an individual invoice to an individual recipient on a certain date, or to the same address. For example, in the Medicare and Medicaid program, data on claims are stored in geographically disbursed systems and databases and are not readily available to CMS’s program integrity analysts. CMS has been working for most of the past decade to consolidate program integrity data and analytical tools for detecting fraud, waste, and abuse. The agency’s efforts led to the initiation of the Integrated Data Repository (IDR) program, which is intended to provide CMS and its program integrity contractors with a centralized source that contains Medicaid and Medicare data from the many disparate and dispersed legacy systems and databases. CMS subsequently developed the One Program Integrity (One PI) program, a web-based portal and set of analytical tools by which these data can be accessed and analyzed to help identify cases of fraud, waste, and abuse based on patterns of paid claims. Recovery auditing. While internal control should be maintained to help prevent improper payments, recovery auditing is used to identify and recover overpayments. The Tax Relief and Health Care Act of 2006 required CMS to implement a national Medicare recovery audit contractor (RAC) program by January 1, 2010. In fiscal year 2011, HHS reported that the Medicare Fee-for-Service recovery audit program identified $961 million in overpayments and recovered $797 million nationwide. 
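The kind of duplicate-payment query described in the data mining discussion above can be sketched in a few lines. The payment records here are invented for illustration; this is not CMS data or any agency's actual system.

```python
from collections import Counter

# Illustrative (invented) payment records: (invoice_id, recipient, date, amount).
payments = [
    ("INV-100", "Acme Corp", "2011-03-01", 5000.00),
    ("INV-100", "Acme Corp", "2011-03-01", 5000.00),  # duplicate disbursement
    ("INV-101", "Beta LLC",  "2011-03-02", 1200.00),
    ("INV-102", "Acme Corp", "2011-03-05",  750.00),
]

# Group payments by (invoice, recipient, date); any key that appears more
# than once is a candidate improper payment flagged for review.
counts = Counter((inv, rcpt, date) for inv, rcpt, date, _amt in payments)
suspects = [key for key, n in counts.items() if n > 1]

for inv, rcpt, date in suspects:
    print(f"Possible duplicate payment: {inv} to {rcpt} on {date}")
```

A production data warehouse would run the equivalent grouping query over millions of rows, but the logic, grouping on fields that should uniquely identify a disbursement and flagging repeats, is the same.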
Further, the Medicaid RAC program was established by the Patient Protection and Affordable Care Act. Each state must contract with a RAC, which is tasked with identifying and recovering Medicaid overpayments and identifying underpayments. The final regulations indicated that state Medicaid RACs were to be implemented by January 1, 2012. Similar to the Medicare RACs, Medicaid RACs will be paid on a contingency fee basis—a percentage of any recovered overpayments plus incentive payments for the detection of underpayments. It is important to note that some agencies have reported statutory or regulatory barriers that affect their ability to pursue recovery auditing. For example, in fiscal year 2011, the Office of Personnel Management (OPM) reported that it faces regulatory barriers that restrict its ability to recover overpayments for its Retirement Program. OPM reported that based on current law and Treasury's regulations, financial institutions are barred from providing OPM with the information necessary to recover various overpayments. Only the Social Security Administration, Railroad Retirement Board, and the Department of Veterans Affairs may receive the information necessary to identify the withdrawer to attempt to recover the overpayments because those agencies are the only ones named in the law to receive that type of information from financial institutions. According to OPM, Treasury has drafted language to address the issue and is working to publish a notice of proposed rulemaking to amend its regulation.
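The contingency-fee arrangement described above can be illustrated with a short sketch. The 12.5 percent contingency rate and 10 percent underpayment incentive used here are hypothetical placeholders, not actual contract terms; actual rates are set in each state's RAC contract.

```python
# Hypothetical illustration of RAC contingency-fee compensation.
# Rates are placeholder values, not actual contract terms.
def rac_compensation(recovered_overpayments, identified_underpayments,
                     contingency_rate=0.125, underpayment_rate=0.10):
    """Fee = a percentage of recovered overpayments plus an incentive
    payment tied to identified underpayments."""
    return (recovered_overpayments * contingency_rate
            + identified_underpayments * underpayment_rate)

# Under these placeholder rates, a RAC that recovers $1,000,000 in
# overpayments and identifies $200,000 in underpayments would earn
# $125,000 plus $20,000, or $145,000.
fee = rac_compensation(1_000_000, 200_000)
```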
In another instance, USDA reported that Section 281 of the Department of Agriculture Reorganization Act of 1994 precluded the use of recovery auditing techniques because it provides that 90 days after the decision of a state, county, or an area committee is final, no action may be taken to recover the amounts found to have been erroneously disbursed as a result of the decision unless the participant had reason to believe that the decision was erroneous. This statute is commonly referred to as the Finality Rule.

Federal-state incentives. Another area for further exploration is the broader use of incentives for states to implement effective preventive and detective controls. Agencies have applied limited incentives and penalties for encouraging improved state administration to reduce improper payments. Incentives and penalties can be helpful to create management reform and to ensure adherence to performance standards.

Chairman Platts and Ranking Member Towns, this completes my prepared statement. I would be happy to respond to any questions that you or other members of the subcommittee may have at this time.

For more information regarding this testimony, please contact Beryl H. Davis, Director, Financial Management and Assurance, at (202) 512-2623 or by e-mail at DavisBH@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony included Jack Warner, Assistant Director; W. Tyler Benson; Francine Delvecchio; Crystal Lazcano; Kerry Porter; and Carrie Wehrly.

For our report on the U.S. government's consolidated financial statements for fiscal year 2011, see U.S. Department of the Treasury, 2011 Financial Report of the United States Government (Washington, D.C.: December 23, 2011), pp. 211-231.

Medicaid Program Integrity: Expanded Federal Role Presents Challenges to and Opportunities for Assisting States. GAO-12-288T.
Washington, D.C.: December 7, 2011.

DOD Financial Management: Weaknesses in Controls over the Use of Public Funds and Related Improper Payments. GAO-11-950T. Washington, D.C.: September 22, 2011.

Improper Payments: Reported Medicare Estimates and Key Remediation Strategies. GAO-11-842T. Washington, D.C.: July 28, 2011.

Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Ensure More Widespread Use. GAO-11-475. Washington, D.C.: July 12, 2011.

Improper Payments: Recent Efforts to Address Improper Payments and Remaining Challenges. GAO-11-575T. Washington, D.C.: April 15, 2011.

Improper Payments: Status of Fiscal Year 2010 Federal Improper Payments Reporting. GAO-11-443R. Washington, D.C.: March 25, 2011.

Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.

High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.

Improper Payments: Significant Improvements Needed in DOD's Efforts to Address Improper Payment and Recovery Auditing Requirements. GAO-09-442. Washington, D.C.: July 29, 2009.

Improper Payments: Progress Made but Challenges Remain in Estimating and Reducing Improper Payments. GAO-09-628T. Washington, D.C.: April 22, 2009.

Improper Payments: Status of Agencies' Efforts to Address Improper Payment and Recovery Auditing Requirements. GAO-08-438T. Washington, D.C.: January 31, 2008.

Improper Payments: Federal Executive Branch Agencies' Fiscal Year 2007 Improper Payment Estimate Reporting. GAO-08-377R. Washington, D.C.: January 23, 2008.

Improper Payments: Weaknesses in USAID's and NASA's Implementation of the Improper Payments Information Act and Recovery Auditing. GAO-08-77. Washington, D.C.: November 9, 2007.

Improper Payments: Federal and State Coordination Needed to Report National Improper Payment Estimates on Federal Programs. GAO-06-347. Washington, D.C.: April 14, 2006.

This is a work of the U.S.
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the past decade, GAO has issued numerous reports and testimonies highlighting improper payment issues across the federal government as well as at specific agencies. Fiscal year 2011 marked the eighth year of implementation of the Improper Payments Information Act of 2002 (IPIA), as well as the first year of implementation for the Improper Payments Elimination and Recovery Act of 2010 (IPERA). IPIA requires executive branch agencies to annually identify programs and activities susceptible to significant improper payments, estimate the amount of improper payments for such programs and activities, and report these estimates along with actions taken to reduce them. IPERA, enacted July 22, 2010, amended IPIA and expanded requirements for recovering overpayments across a broad range of federal programs. This testimony addresses (1) federal agencies' reported progress in estimating and reducing improper payments, (2) remaining challenges in meeting current requirements to estimate and report improper payments, and (3) actions that can be taken to move forward with improper payment reduction strategies. This testimony is primarily based on prior GAO reports, including GAO's fiscal year 2011 audit of the Financial Report of the United States Government. The testimony also includes improper payment information recently presented in federal entities' fiscal year 2011 financial reports. Federal agencies reported an estimated $115.3 billion in improper payments in fiscal year 2011, a decrease of $5.3 billion from the prior year reported estimate of $120.6 billion. The $115.3 billion estimate was attributable to 79 programs spread among 17 agencies. Ten programs accounted for about $107 billion or 93 percent of the total estimated improper payments agencies reported for fiscal year 2011.
The reported decrease in fiscal year 2011 was primarily related to three programs—decreases in program outlays for the Department of Labor's Unemployment Insurance program, and decreases in reported error rates for the Earned Income Tax Credit program and the Medicare Advantage program. Further, the Office of Management and Budget reported that agencies recaptured $1.25 billion in improper payments to contractors, vendors, and healthcare providers in fiscal year 2011. Over half of this amount, $797 million, can be attributed to the Medicare Recovery Audit Contractor program, which identifies Medicare overpayments and underpayments. The federal government continues to face challenges in determining the full extent of improper payments. Some agencies have not yet reported estimates for all risk-susceptible programs, such as the Department of Health and Human Services' Temporary Assistance for Needy Families program. Internal control weaknesses continue to exist, heightening the risk of improper payments. Some agencies' estimating methodologies need to be refined. For example, two Department of Defense commercial payment programs were not included in the total governmentwide error rate because the estimation methodologies for these programs were still under development. A number of actions are under way across government to help advance improper payment reduction goals. These actions and future initiatives will be needed to enhance federal government efforts to reduce improper payments. For example, additional information and analysis on the root causes of improper payment estimates would help agencies target effective corrective actions and implement preventive measures. Although agencies were required to report the root causes of improper payments in three categories beginning in fiscal year 2011, of the 79 programs with improper payment estimates for fiscal year 2011, only 42 programs reported the root cause information using the required categories.
In addition, because the three categories are general, additional analysis is critical to understanding the root causes. Implementing strong preventive controls can help defend against improper payments, increasing public confidence and avoiding the difficult “pay and chase” aspects of recovering improper payments. Preventive controls involve activities such as upfront validation of eligibility using electronic data matching, predictive analytic tests, and training programs. Further, addressing program design issues, such as complex eligibility requirements, may also warrant further consideration. Effective detection techniques to quickly identify and recover improper payments are also important to a successful reduction strategy. Detection activities include data mining and recovery audits. Another area for further exploration is the broader use of incentives to encourage and support states in efforts to implement effective preventive and detective controls.
FDA conducts a variety of activities pursuant to its mission to protect the public health. To carry out these functions, FDA is organized into product centers, which regulate products including human and veterinary drugs, vaccines and other biological products, medical devices, most food, and tobacco; a research center, which provides scientific technology, training, and technical expertise; and offices that carry out various functions of the agency. FDA's response to the contaminated heparin crisis involved a number of FDA centers and offices. FDA's activities related to its mission and relevant to the heparin crisis include the following: Overseeing drug and device firms. FDA conducts oversight activities such as inspections and investigations of foreign and domestic manufacturing firms, including their suppliers, to determine compliance with good manufacturing practices (GMP), or sampling of imported products. FDA also takes regulatory actions against firms, when appropriate, by issuing warning letters, detaining imports, or recommending seizure of products. Collaborating with USP. FDA collaborates with USP to help ensure the safety and quality of drug products. Under the Federal Food, Drug, and Cosmetic Act, prescription and over-the-counter drugs sold in the United States generally must comply with quality standards published in the USP-National Formulary. USP sets standards for drug quality, purity, and strength, as well as the tests or methods used to assess quality, purity, and strength. Products that do not meet USP standards using the specified methods are considered adulterated by law. Collaborating with foreign regulatory agencies. FDA has confidentiality commitments to facilitate information sharing with regulatory agencies in 19 countries, including Australia, Canada, France, Germany, and Japan.
FDA does not have a confidentiality commitment with China; however, FDA negotiated two memorandums of agreement with China in 2007 aimed at improving the safety of Chinese drug products and medical devices, and food exported to the United States. In recent years, FDA has opened offices abroad, including in India, Europe, Latin America, and China. FDA opened its office in China in November 2008, with posts in Beijing, Shanghai, and Guangzhou. An FDA official said the primary mission of these offices is to help gather more information on the safety and quality of products that are being exported to the United States so that FDA can make better-informed decisions about which products to permit to enter the United States. Monitoring adverse events. FDA monitors drug and device safety through its postmarketing surveillance program. FDA’s Adverse Event Reporting System (AERS) is a database that supports the agency’s postmarketing safety surveillance program for all approved drug and therapeutic biologic products. FDA uses AERS to record adverse event reports and to monitor for new adverse events and medication errors associated with drug products marketed in the United States. FDA uses its Manufacturer and User Facility Device Experience (MAUDE) database to record and monitor reports of adverse events related to medical devices. Communicating with the public. FDA communicates information to the public through a variety of means, including press releases, media briefings, public health advisories, and news interviews. FDA also disseminates information on the agency’s Web site, including regulatory information, scientific research, and educational materials. Responding to emergencies. To respond to emergencies or crises, FDA uses a plan to assist the agency in organizing a coordinated response to events involving FDA-regulated products as well as other identified public health emergencies. 
At the time of the heparin crisis, FDA had its emergency response plan (ERP) in place, which was issued in February 2005. Working with external entities. When necessary, FDA enters into working relationships with external entities, such as scientists from universities or drug firms, to assist the agency with matters such as the review of research and product applications. For example, scientists serving on advisory committees review and make recommendations on drug applications, and scientists from universities provide expertise in specific scientific disciplines and enhance the science base of the agency through FDA's Science Advisor Program. FDA has guidance in place for working with external entities in certain situations, including a guide called The Leveraging Handbook. This handbook references statutes and regulations that apply to the behavior of individual FDA employees. It also contains guidance applicable to FDA as an agency to prevent public perception concerns and demonstrate that the agency is worthy of public trust in carrying out its activities. In addition, other laws, regulations, and policies may apply to FDA's work with external entities, depending on the nature of the arrangements. FDA, like other federal agencies, generally may not accept voluntary services, which may give rise to claims for payment for which funds are not available. However, with a written agreement that services are provided with no expectation of payment, FDA may accept uncompensated services from external entities. Heparin is a medically necessary drug that acts as an anticoagulant; that is, it prevents the formation of blood clots in the veins, arteries, and lungs (see app. I for technical information on heparin and research related to contaminated heparin). The heparin supply chain starts with a raw source material, primarily derived from the intestines of pigs, that is processed into crude heparin. China is the primary source of crude heparin for U.S.
manufacturers because of its abundant pig supply. Thousands of small pig farms in Chinese villages extract and process pig intestines in small workshops called casing facilities. Consolidators collect different batches of heparin, typically called heparin lots, from various workshops and combine them into single heparin lots. The consolidators sell the crude heparin lots to manufacturers, who further refine the crude heparin into heparin API, the active ingredient used in heparin drug products and devices. More than half of the finished heparin products in the United States and globally are made from Chinese-sourced materials. There are seven pharmaceutical companies that manufacture and distribute heparin products in the United States. At the time of the crisis, Baxter and APP Pharmaceuticals (APP) were the two largest manufacturers of heparin in the United States, with each company accounting for about half of the total U.S. heparin supply. Both companies received the majority of their crude heparin from Chinese sources. Several FDA centers and offices were involved in the response to the contaminated heparin crisis. Some of these centers and offices and their relevant functions are described below (see app. II for a complete list of FDA centers, offices, and divisions that were involved in the heparin crisis): Office of the Commissioner—leads FDA and implements FDA’s mission. Office of Crisis Management (OCM)—develops crisis management policies, leads and coordinates the agency’s development and updating of emergency preparedness and response plans, including FDA’s ERP, and coordinates the agency’s emergency response. Office of International Programs—works with agencies and governments to advance public health worldwide. Office of Regulatory Affairs (ORA)—leads inspections of regulated domestic and imported products and domestic and foreign manufacturing facilities, and develops enforcement policies. 
Center for Drug Evaluation and Research (CDER)—regulates over-the-counter and prescription drugs, including biological therapeutics and generic drugs sold in the United States. Center for Devices and Radiological Health (CDRH)—regulates medical and radiological devices sold in the United States. FDA took several actions during the first half of 2008 to protect the public health in response to the heparin crisis. During that time and afterwards, FDA increased oversight of heparin firms, but sometimes faced limitations in oversight and collaborating with others. FDA also worked with heparin manufacturers to recall contaminated heparin products while ensuring an adequate supply for U.S. consumers. In addition, FDA collaborated with its international regulatory partners to exchange information. Because of limitations related to conducting inspections and investigations of heparin firms in China, FDA could not determine the original source of the heparin contamination. To respond to the heparin crisis, FDA took various actions from January through May 2008 related to its responsibility to protect the public health by ensuring the safety and security of the nation's drug and medical device supplies. On January 7, 2008, after FDA learned about the severe allergic reactions taking place, the agency initiated an investigation at the dialysis facility where the first observed allergic reactions took place and shared information with CDC. At the same time, FDA contacted a medical device manufacturer since it was initially thought the allergic reactions were in response to a medical device. After FDA learned that the problem possibly was associated with Baxter heparin, on January 9, 2008, the agency began investigations and inspections of heparin drug and device firms.
FDA received notification of the first recall of nine lots of Baxter heparin products, which took place on January 17, 2008, and began work with this drug firm to learn more about the problem with its heparin. By January 23, FDA learned that Baxter received its heparin API from Scientific Protein Laboratories’ (SPL) establishments in Wisconsin and China. In early February 2008, the agency worked to postpone an expanded recall of Baxter’s heparin products so it could consult with APP to ensure that APP could supply the U.S. heparin market and mitigate a potential heparin shortage. The second recall, which included all lots of Baxter’s single and multidose vial heparin products, took place on February 29, 2008. FDA also facilitated recalls of heparin-containing medical devices with heparin device firms. As the crisis progressed, FDA took additional actions in February and March 2008. By late February, FDA could distinguish contaminated heparin from uncontaminated heparin using preliminary testing methods and continued working to develop these methods. During that month, FDA also formed an internal task force to coordinate the agency’s response to the heparin crisis and reached out to external scientists to assist the agency in identifying the unknown contaminant and to develop tests to detect this contaminant. On March 5, 2008, FDA identified the type of contaminant in suspect heparin lots and by March 6, it shared newly developed testing methods that could differentiate contaminated heparin from uncontaminated heparin. Some other countries also found contamination in their heparin supplies. Later that month, on March 17, FDA identified oversulfated chondroitin sulfate (OSCS) as a contaminant in the heparin associated with adverse events in the United States. Additionally, because the majority of finished heparin products in the United States and globally are made with ingredients from China, FDA worked to ensure the safety of heparin imports. 
Throughout the crisis, FDA also provided information about the crisis to a variety of audiences, including the press, physicians, and medical facilities. By April 2008, the agency determined that the number of adverse events involving heparin had returned to precrisis levels. FDA held an international heparin conference on April 17 and 18, 2008, to exchange information with its foreign regulatory counterparts. FDA's task force continued to meet until May 27, 2008, when it was determined that the crisis was over. Figure 1 shows the timeline of key events in the heparin crisis. In response to the heparin crisis, FDA increased its oversight activities of heparin firms by increasing its inspections, investigations, and monitoring efforts.

Inspections. During and after the crisis, FDA conducted an increased number of domestic and foreign heparin-related inspections of drug and device firms compared with the number of inspections prior to the crisis (see fig. 2). In particular, FDA increased its frequency of inspections of Chinese firms associated with OSCS contamination in the United States. In the 20-month period prior to the crisis, FDA did not conduct any inspections of Chinese heparin firms. In contrast, inspections of 11 Chinese firms accounted for 14 of the 21 heparin-related foreign inspections conducted by FDA during and after the crisis. Of the Chinese firms that FDA inspected, only 2 had been inspected prior to the contaminated heparin crisis. FDA officials said that there were and continue to be significant legal and practical challenges to conducting inspections of crude heparin manufacturers and the casing facilities that supply them, such as the limits on FDA's ability to require foreign establishments to allow the agency to inspect their facilities, the large number of and incompleteness of FDA's information on the casing facilities, and the expenses associated with conducting foreign inspections.
For these reasons, according to FDA officials, FDA focused on firms' responsibilities to ensure that they could trace their crude heparin back to qualified suppliers that produce an uncontaminated product. Furthermore, according to officials, during inspections FDA inspectors requested that firms conduct their own investigations of any heparin products for which they received complaints or that did not meet specifications.

Investigations. In addition to inspections, FDA conducted investigations at U.S. health care facilities and device firms, domestic drug firms, and a foreign drug firm. FDA data show that the agency conducted at least 37 domestic investigations and 1 foreign investigation related to heparin between January 2008 and June 2009, with individual investigations sometimes consisting of FDA visits to multiple facilities, such as a drug firm and a health care provider. The reasons for these investigations included, for example, obtaining heparin samples, collecting information on firms' crude and heparin API suppliers, following up on patient adverse event reports and the status of product recalls, and witnessing the destruction of contaminated heparin.

Monitoring imports. Beginning in February 2008, FDA began monitoring heparin products offered for import by physically examining and detaining products to help ensure that additional contaminated heparin did not reach U.S. consumers. The agency initially issued an import bulletin in late February 2008 instructing FDA staff to assess the admissibility of heparin products offered for import, and then replaced it with a plan in mid-March 2008 to physically sample and test these products for OSCS contamination. This testing plan, which provided more detailed instructions than the import bulletin, required that FDA test all imported heparin API, and other imported heparin products on a case-by-case basis, for contamination upon arrival at the U.S. border unless U.S. firms had already committed to testing their imported heparin products using FDA's newly developed testing methods. According to FDA data, by the end of June 2010, FDA had collected 141 heparin samples. Three of these samples were contaminated with OSCS, including 1 detected after the crisis period ended in May 2008. During and after the crisis, FDA also added a total of seven heparin-related establishments associated with OSCS contamination to an existing import alert for drug manufacturers found to be in violation of GMPs, which enabled the agency to detain heparin imports from these establishments without physically examining them. FDA officials said that these heparin establishments appeared to stop shipping heparin to the United States after being added to this import alert. In some instances, FDA took further action as a result of its inspections and import testing. Between April 2008 and April 2009, the agency issued three warning letters and two untitled letters related to the heparin crisis to drug firms. The agency also added the seven heparin establishments to the import alert described previously as a direct result of various factors, including deficiencies observed during inspections, detection of contaminated heparin during import testing, and FDA's determination that establishments were not adequately safeguarding their heparin supply chains. Additionally, FDA initiated a seizure of heparin products from one firm after the agency determined that the firm's efforts to voluntarily recall contaminated heparin products identified during an inspection were inadequate. However, FDA officials believed that they had limited authority to take action when they encountered refusals, either by the firm or by the Chinese government, to permit a full inspection of some Chinese firms.
In two instances, Chinese crude heparin consolidators refused to provide FDA full access during limited inspections—in particular, one consolidator refused to let FDA inspectors walk through its laboratory and refused FDA access to its records. FDA classified both limited inspections as “no action indicated” (NAI) and did not attempt to reinspect the facilities, document any objectionable conditions, or place the firms on import alert. FDA officials provided us with various reasons why FDA classified these limited inspections as NAI and did not pursue these firms further despite encountering refusals. FDA officials told us that the agency focused its efforts on the API manufacturers that these firms supplied. Officials also told us that at least one of these firms was not shipping crude heparin directly to the United States; however, FDA’s import data show that both firms shipped crude heparin directly to the United States in 2006, which, according to retrospective testing conducted in 2008 by SPL, Baxter’s API manufacturer, is when OSCS contamination of SPL’s heparin supply was first detected. Additionally, officials told us that no GMP violations were observed during these limited inspections, but acknowledged in congressional testimony that inspectors were not able to observe the laboratory of one of the firms. Overall, FDA officials told us that in both instances the agency did not have sufficient evidence to put the two consolidators on import alert and that, with some exceptions, a firm’s refusal to allow for a complete inspection is not itself one of the bases for product detention at the U.S. border. Additionally, FDA learned that China’s State Food and Drug Administration had sealed some firms’ heparin and had instructed the firms not to open these seals. 
This prevented at least one firm from conclusively determining which of its crude suppliers were associated with OSCS contamination, which FDA learned of during a preapproval inspection of this particular firm. According to FDA officials, FDA was concerned that this firm was unable to complete its investigation of suppliers and requested a reinspection of the firm. From the reinspection, which took place approximately 1 year later, the agency determined that the firm had implemented testing methods to detect OSCS contamination, communicated its expectations and requirements to its suppliers, and increased the frequency of its supplier audits. FDA also learned during the reinspection that the firm had completed its testing, which resulted in the permanent disqualification of two of its suppliers. FDA officials said that they are continuing to take steps to improve the quality of drugs manufactured outside of the United States. In addition to creating and staffing FDA posts overseas, FDA officials told us that the agency has established a cadre of FDA’s U.S.-based investigators to conduct foreign drug inspections throughout the world as needed. FDA is also increasing the size of its cadre of the highest-certified drug inspectors to assist with foreign inspections, and increasing the number of translators it brings on foreign inspections, especially to China. FDA officials told us that the agency continues to emphasize the responsibility of industry to ensure the safety and security of its supply chain, including placing emphasis on supply chain traceability during foreign drug inspections. In addition, according to officials, FDA also continues to revise its inspection and surveillance programs to focus on higher-risk facilities and products. 
For example, officials told us that in fiscal year 2010 the agency developed and used a risk-based model and other information to focus its annual surveillance sampling program—a long-standing FDA program to sample drug components offered for import, which changes focus annually—on APIs potentially susceptible to economically motivated adulteration. Beginning in January 2008 when the first recalls of contaminated products occurred, FDA worked with manufacturers to ensure an adequate supply of uncontaminated heparin for the U.S. market. Weeks after Baxter initiated a recall of specific heparin lots associated with adverse reactions in patients, the company told FDA it wanted to recall almost all of its heparin products because the number of adverse reactions associated with its heparin continued to increase. FDA officials said they recognized that a large-scale recall could pose risks to U.S. patients if the remaining supply was not adequate to meet facilities’ and providers’ needs for heparin. Consequently, FDA engaged in discussions with APP, the other main U.S. heparin manufacturer, to determine the amount of heparin it had available and to determine if and when it could increase its heparin production to supply almost the entire U.S. market. FDA and APP officials told us that APP’s ability to increase production was initially limited and that FDA and APP worked together to increase APP’s production capacity; for example, in July 2008, APP obtained permission from FDA to apply for an additional manufacturing facility—which FDA approved in October 2008—using a process that, according to APP officials, decreased FDA’s approval time by months and allowed APP to begin releasing heparin manufactured at the alternate site and subsequently list it as an approved facility with the agency. During this time, FDA worked with Baxter to manage the risks of the contaminated heparin that remained on the U.S. 
market and postpone the expanded recall of almost all Baxter heparin products until the agency was sure that APP could increase its heparin production to meet the needs of U.S. patients, thus avoiding a shortage of a medically necessary drug. According to FDA officials, FDA and Baxter worked together to develop a risk management plan, and FDA issued a public health advisory to inform the public of serious adverse events and recommend measures—such as using the lowest necessary dose, administering the heparin as slowly as acceptable, and monitoring patients closely for adverse events—to help minimize these risks in instances where Baxter heparin was the only product available. FDA continued monitoring for the possibility of a heparin shortage even after APP told FDA it could increase production. FDA continued to be concerned about the adequacy of the U.S. heparin supply in the summer of 2008 due to a shortage of raw materials in China and issues APP faced with its supply chain. The agency also continued to work with manufacturers on product recalls. Overall, FDA worked with 15 other drug and device firms to recall at least 11 drug products and 72 medical device products as a result of the heparin crisis. FDA reached out to its international regulatory partners during the crisis to exchange information about contaminated heparin, but was ultimately unable to identify the original source of contamination. In early February 2008, prior to FDA’s public announcement about the adverse events seen in the United States, FDA told its partners—which included regulatory agencies in 17 countries, the European Commission, and the European pharmaceutical regulatory agency—about these adverse events and asked them to share information on any similar events related to heparin. By March 2008, FDA was aware of at least 10 countries, including the United States, that had found OSCS contamination in their heparin supply. 
However, only 1 other country, Germany, also observed an increase in heparin-associated adverse events. Through its communications with other countries, FDA learned that some Chinese manufacturers associated with contamination in these countries also supplied heparin to the U.S. market. Notably, one of these manufacturers was the primary supplier for APP, the U.S. firm that supplied almost the entire U.S. heparin market after Baxter recalled its products. FDA responded to this information by investigating the manufacturer and concluded that the heparin distributed by APP in the United States was not contaminated. FDA also collaborated with the Chinese government during the crisis, though FDA was ultimately unable to determine the original source of contamination. According to FDA officials, FDA’s preliminary investigation concluded that contamination did not take place in the United States. As a result, FDA requested jurisdiction from the Chinese government in order to conduct a criminal investigation in China to determine the source of contamination. However, Chinese officials would not grant this request and denied that contamination took place in China. Through retrospective testing of retained heparin samples conducted by firms in 2008, FDA learned that OSCS-contaminated crude heparin had been introduced into the global heparin supply as early as May 2006. FDA investigators believe that OSCS was increasingly added to heparin by Chinese establishments that manufacture crude heparin so that the establishments could cut costs. Although unable to work with the Chinese government on a formal criminal investigation, FDA has continued to collaborate with its international partners to avoid similar crises in the future. 
For example, FDA organized an international conference in April 2008 during which regulators and academics from 10 additional countries around the world, including China, along with the standard-setting entities for pharmaceuticals in the United States and Europe, shared information on their experiences with contaminated heparin during the crisis and discussed potential steps to prevent future contamination incidents. The agency also participates in the API Pilot Program with the regulatory bodies of Europe and Australia. According to FDA officials, drug regulatory agencies in this program—which began after the heparin crisis—share and obtain information about API inspections they conduct around the world to better leverage their inspection resources. Officials said that FDA’s establishment of overseas offices will also help facilitate collaboration between FDA and foreign regulatory agencies. FDA coordinated internal and external resources to respond to the contaminated heparin crisis, but did not adequately address risks related to working with certain external entities with ties to heparin firms. Not adequately addressing these risks could have affected the public’s confidence in FDA’s response efforts and in its other activities related to the regulation of heparin products; it also left FDA open to claims for payment for services that these external entities provided to FDA on a voluntary basis. In responding to the heparin crisis, FDA coordinated response efforts in accordance with its ERP and developed a new Emergency Operations Plan (EOP) to guide its response to future crises. According to FDA officials, OCM initially coordinated the agency’s response efforts, which included many of FDA’s offices and centers. FDA officials said the total number of centers, offices, and divisions within the agency that were involved in responding to the contaminated heparin crisis was over 40 (see app. 
II for a complete list of FDA centers, offices, and divisions that were involved in the heparin crisis). On February 8, 2008, CDC reported that the problem was with the heparin drug product and not with medical devices, as was originally thought. Once this link was made, FDA officials determined that CDER would be best equipped to lead scientific efforts to identify the contaminant. According to FDA officials, there was no formal transition of leadership from OCM to CDER, but once the situation was discovered to be largely a drug issue, CDER increased its involvement and took over the role of lead coordinator from OCM. Once CDER assumed this responsibility, FDA no longer had an agency-level entity responsible for coordinating response efforts, and CDER coordinated the multiple centers and offices within the agency that continued to be involved in the crisis. CDER officials created a task force to coordinate the agency’s response efforts across multiple centers, offices, and divisions. CDER’s Heparin Task Force was initially composed of mostly CDER officials but expanded to involve some other FDA offices. The task force initially met daily and then weekly from February 25, 2008, through May 27, 2008. An FDA official said that information from the task force’s meetings was disseminated to relevant staff throughout FDA through CDER’s e-mail distribution list, which included over 200 FDA officials. OCM continued to be involved with CDER’s task force by participating in task force meetings, but it did not have a role in the ongoing coordination of the agency’s efforts to respond to the heparin crisis. After the crisis, FDA conducted some lessons-learned meetings to focus on difficulties that occurred during the agency’s response. 
Documentation from these meetings shows that agency officials believed that FDA staff showed remarkable dedication during the crisis and that the agency was successful in removing contaminated products from the marketplace and preventing the introduction of further contaminated products. However, these documents also show that there were some areas in which the agency’s response could have been improved. Specifically, these documents indicate that the lack of details in the ERP and the absence of coordination at the agency level for the duration of the crisis may have led to some process delays and difficulty with internal and external communication. For example, CDER officials stated in a lessons-learned document that the agency’s response to future crises could benefit from guidance that clearly delineates who should lead the agency’s efforts during a crisis. According to this document, CDER officials said that it was not clear whether OCM or CDER should lead the agency’s efforts because the ERP was not specific about who should coordinate the agency’s response during a crisis. Additionally, when leadership transitioned to CDER, center officials had to spend time determining leadership roles within the center. In another lessons-learned document, CDRH officials said that external communication was sometimes complicated by CDER being the lead office. Specifically, issues related to heparin-containing medical devices were not always included in CDER-led task force discussions and were consequently often not addressed in CDER’s communications with the public, other countries, or industry. FDA officials told us that the agency has been working since October 2008 on the development of the new EOP, which is intended to address some of the difficulties encountered during previous crises, including lack of specific details on agency coordination. According to FDA officials, the new EOP was finalized in September 2010 and replaces the agency’s existing ERP. 
FDA officials also told us that the new EOP is based on guidance from the National Response Framework and will incorporate principles of emergency operation—including the National Incident Management System and the Incident Command Structure—that are designed to help agencies better coordinate efforts in the event of an emergency. According to these officials, the EOP will be more detailed in terms of coordination within the agency and clearer about roles and responsibilities of centers and offices in any emergency, large or small, that the agency may face. For example, the new EOP is to contain a section devoted to coordination at the agency level within FDA’s headquarters. This section will offer guidance and a specific coordination structure that agency officials can use during an incident to help ensure that response resources and capabilities from multiple centers and offices within the agency are well organized. The EOP is also to establish two new coordinating entities—the Agency Incident Coordinator (AIC) and the Agency Executive Group (AEG)—to facilitate agency-level coordination of an incident. According to FDA officials, the role of the AIC will be to manage an incident at the agency level and to serve as a communication bridge between the Commissioner’s Office and staff in the agency’s centers and offices responding to an emergency. The AEG will be a group of senior-level executives at FDA who will provide strategic policy direction and guidance for major emergency response activities. The AEG is expected to approve important policy decisions in consultation with the AIC and the Commissioner of FDA. FDA worked with several external scientists during the heparin crisis, but did not address certain risks that engaging two of these scientists, and additional external entities engaged by one of these scientists, posed to the agency. 
In February 2008, FDA officials contacted five external scientists, including one who was employed by the agency as a special consultant, for assistance with the heparin crisis, and FDA worked with these scientists for varying time periods. Agency officials told us that they sought the advice of these external scientists because the agency lacked the necessary instrumentation and expertise to identify the specific contaminant and develop new testing methods to detect it. According to FDA officials, these external scientists were engaged to provide the agency with technical and factual scientific advice related to the identity of the unknown contaminant and tests to identify this contaminant, and all policy judgments and decisions related to this advice were made by CDER officials. FDA communicated with the external scientists frequently during the height of the crisis, and officials told us that some of these scientists were brought together for at least two in-person meetings to share and discuss their individual findings. All five scientists worked directly with FDA, but they did not all have the same working arrangements with the agency. One of the scientists was a participant in FDA’s Science Advisor Program and was considered an FDA employee. Two of the scientists were employees of a university with which FDA contracted for testing of heparin samples; the university was selected in part because of its proximity to FDA’s Division of Pharmaceutical Analysis and the availability of advanced instrumentation and staff expertise necessary for testing. The two remaining scientists, whom FDA contacted in late February, were not employees of FDA or FDA contractors. The agency characterized these scientists as volunteers and told us that they had been informally identified by CDER staff as experts in heparin analysis. FDA officials said that these two scientists provided services on an uncompensated basis in response to the oral requests of CDER staff. 
With FDA’s knowledge, one of these two scientists obtained assistance in his work for FDA from external entities, including a drug development firm and an Italian research institute, also on an uncompensated basis. The two scientists characterized by FDA as volunteers had professional and financial ties to heparin firms. Both served as paid consultants to two of the primary firms associated with contaminated heparin. In addition, one of the scientists was a cofounder and member of the board of directors, as well as an equity interest holder, in a third firm, which, at the time of the crisis, had a pending application for a heparin product before FDA. The agency allowed this scientist to obtain assistance from this firm in conducting analytical work to identify the contaminant in heparin, despite the firm’s pending application for a heparin product. This drug manufacturer dedicated approximately 30 staff members from its analytical and biology groups for periods ranging from a few weeks to 3 months to assist in the effort to identify the contaminant in heparin. FDA’s internal guidance, The Leveraging Handbook, addresses risks that may be presented in collaborative arrangements with external entities. The handbook cautions FDA employees to weigh certain legal and ethical considerations when entering into partnerships and references rules applicable to the behavior of individual employees, but also identifies other principles, which it characterizes as “institutional ethics.” These prudential considerations are designed to prevent public perception problems and to demonstrate that the agency has established procedures showing that it is worthy of public trust. Among other things, the guidance cautions staff to consider the ethical implications of accepting gifts for the agency from external entities, stating that the agency should be judicious in accepting gifts to avoid the appearance that its programs or operations may be compromised. 
Specifically, staff are to balance the importance of a potential gift to the agency against the potential appearance problems that may be caused by acceptance of the gift. Steps to be considered in the balancing test include determining whether accepting the gift would reflect unfavorably on the agency’s ability to carry out its responsibilities in a fair and objective manner and whether the acceptance of a gift would compromise the integrity of, or the appearance of the integrity of, a program or official. Staff are also asked to determine the value to the agency of accepting the gift and the extent to which it will enable the agency to accomplish its mission. Further, The Leveraging Handbook instructs staff to consider the nature and sensitivity of matters pending before the agency that would affect the interests of the gift donor and to weigh the agency’s interest in accepting the gift against any actual or apparent conflict of interest. Finally, the guidance provides for consideration of whether the gift would be from a prohibited source if the gift were made to an individual employee and calls for gifts from prohibited sources to be subject to higher scrutiny. FDA officials were aware of the scientists’ ties to heparin manufacturers, but did not take adequate steps to consider whether these relationships exposed the agency to the risks described in its guidance or to address these risks before engaging the scientists. FDA officials told us that they believed that there was insufficient time to address these ties in the midst of the heparin crisis and that the CDER staff who identified these scientists were confident that they could independently assess the input from these scientists through robust, detailed, and transparent discussions; they said that this would address any appearance problems related to the scientists’ input. 
FDA officials also emphasized that the agency made all policy judgments and said that they disclosed the work of these scientists to the public through peer-reviewed journal articles in late April, after the specific contaminant in heparin was identified. However, FDA officials told us that they did not take steps before accepting voluntary services of these scientists to assess whether their ties to firms associated with contaminated heparin would compromise the integrity of FDA’s activities, or the appearance of integrity, so as to undermine the public perception of FDA’s management of the heparin crisis. Nor is there evidence that they considered whether the agency’s acceptance of voluntary services from a scientist with an interest in a firm with an application pending before FDA, along with employees of that firm, would compromise, or appear to compromise, the agency’s activities, including its activities related to the approval of heparin products. Moreover, FDA did not fully disclose the existence or extent of these scientists’ interests while they were providing assistance or afterwards. CDER staff did not consult with the Office of Chief Counsel or agency ethics officials about their working arrangements with these two scientists or seek advice as to whether the arrangements were consistent with the agency’s ethics standards. FDA’s acceptance of voluntary services in connection with the heparin crisis also exposed the agency to the risk of claims for payment for the services provided. Federal agencies are generally prohibited from accepting voluntary services because of the risk of claims associated with them. The statutory provision barring the acceptance of these services is best understood in the context of the preceding statutory provision, which prohibits agencies from incurring obligations in excess of their appropriations or before such appropriations are made. 
The fundamental purpose of the voluntary services prohibition is to preserve the integrity of the appropriations process by preventing agencies from effectively incurring obligations in excess of or in advance of appropriations by accepting voluntary services with the expectation that Congress will recognize a “moral obligation” to pay for the services rendered. Consistent with this underlying purpose, voluntary services have been defined as those that are not rendered under a prior contract or advance agreement that they will be gratuitous and are, therefore, likely to form the basis of future claims against the government. However, the acceptance of services that are offered as gratuitous—that is, with no expectation of payment—with a record made of that fact, does not violate the voluntary services prohibition. Such services do not give rise to any obligation or financial liability and therefore do not expose an agency to the risk of claims for payment. FDA officials told us that the agency was authorized to accept voluntary services during the heparin crisis under an emergency exception and therefore was not required to obtain a written agreement that the services were offered with no expectation of payment. The statute provides an exception for emergencies involving the safety of human life or the protection of property, which the statute defines as circumstances involving an imminent threat to the safety of human life or the protection of property. FDA officials explained that the sharp increase in reports of severe allergic reactions to heparin in late January 2008 signaled a public health emergency requiring the agency to quickly identify and assemble the scientific expertise of those who could help identify the source of the crisis in order to protect patients and ensure the safety of a medically necessary drug. 
By late February 2008, FDA had developed a screening method to distinguish contaminated heparin from uncontaminated heparin, but had not identified the precise contaminant or developed specific methods of testing for it, and obtained the voluntary services of additional scientists for this purpose. While the existence of an emergency would provide a legal basis for agencies to accept voluntary services, it would not protect them from subsequent claims for payment. To the contrary, the acceptance of services under the emergency exception would give rise to obligations—that is, financial liabilities—for which claims for payment could be made. As noted above, however, agencies accepting services in an emergency or otherwise may guard against claims for compensation by establishing that the services are gratuitous and, as such, do not give rise to any obligation or financial liability on the part of the government. This is accomplished by obtaining a written agreement from those providing services that they will receive no compensation and waive any future claims against the government for their services. FDA did not take steps to establish that the services provided by two of the external scientists, as well as the services obtained by one of those scientists from two other entities, imposed no obligation or financial liability and, in this respect, exposed the agency to the risk that claims for compensation would be made for which funds were not available. Regardless of whether the circumstances that existed when FDA contacted these scientists constituted an emergency, they did not preclude the agency from addressing this risk. To the extent that time was of the essence, a letter from those providing services to the agency would have been sufficient; there is no detailed or prescribed form for the provision of gratuitous services. 
In addition, the provision of services was not unexpected—the agency requested and discussed the services provided by the selected scientists as part of the ongoing process of resolving the heparin crisis. By late February 2008, the agency had overseen a recall of heparin products and determined how to distinguish contaminated heparin from uncontaminated heparin using a preliminary screening method. FDA requested the services of the two scientists to help it identify the specific contaminant and develop appropriate testing methodologies for its detection, and these scientists provided analyses and opinions to FDA over a period of several weeks. FDA officials told us that determining the precise identity of the contaminant and developing appropriate testing methodologies were necessary to resolve the crisis and that the services provided and arranged for by the two scientists were critical for doing so. However, those facts do not explain why FDA did not take appropriate steps to protect the agency from the financial exposure arising from services that it had both requested and accepted. Voluntary services may be accepted where otherwise authorized by law, and FDA also cited the agency’s authority to accept gifts as the basis for its acceptance of voluntary services without a written agreement in connection with the heparin crisis. A gift is generally understood to be a gratuitous conveyance without any consideration, the essential elements of which are acceptance, delivery, and the intent to make a gift. By definition, a gift does not give rise to any obligation or liability and poses no risk of subsequent claims for compensation. We do not address the scope of the provision cited by FDA, but note that it does not expressly authorize gifts of services and contemplates that gifts be made by means of some instrument. 
As discussed above, however, there is no evidence to establish that the external scientists intended to provide their services on a gratuitous basis—that is, to donate their services and the services of others to the agency—and thereby protect the agency from such claims. FDA increased its monitoring of adverse events, including deaths, associated with heparin and conducted analyses of these events. FDA was unable to link any of the adverse events to contaminated heparin because it was unable to establish a causal relationship due to data limitations and confounding factors involving the individual patients. FDA increased its monitoring of adverse event reports by working with heparin drug and device manufacturers to expedite submission of these reports to FDA. According to FDA officials, FDA contacted Baxter in February 2008 to request early submission of its adverse event reports associated with heparin and requested reports from two other heparin manufacturers, APP and Hospira, later in March 2008. FDA officials said that these reports would otherwise have been due later in the year. A few weeks later, in April 2008, FDA sent a letter to almost 100 manufacturers and distributors of medical devices that contained or were coated with heparin. In this letter, FDA required these firms to submit all reports of heparin-related adverse events within 5 work days of the firm becoming aware of these events, in accordance with federal regulations. This requirement remained in effect for 120 days from the date of the letter from FDA. FDA also monitored trends in the number of reports of adverse events associated with heparin drug products and heparin-containing medical devices that FDA received before, during, and after the crisis. 
FDA dedicated staff to manage the increased number of heparin-specific reports that the agency received during the crisis and to conduct searches of its AERS and MAUDE databases to retrieve additional related reports that had already been submitted to FDA prior to the crisis. FDA officials said that retrieving and entering information from AERS and MAUDE reports was extremely time- and resource-intensive in that information had to be entered manually into spreadsheets and duplicate reports had to be removed before the data could be analyzed. FDA officials said that there was a certain baseline number of adverse event reports associated with heparin in 2007 prior to the heparin crisis and that the number of reports of adverse events associated with both heparin drug products and heparin-containing medical devices that FDA received decreased after the heparin crisis, returning to levels typically seen prior to the crisis. For example, FDA received reports of 176 adverse events associated with heparin drug products that took place in February 2008, compared with 13 events that took place in February 2007 and 7 events that took place in February 2009. Figure 3 shows a breakdown of AERS reports of adverse events that resulted in death and reports that did not have a fatal outcome (nondeaths) from January 2007 through June 2009. Regarding trends in adverse events in heparin-containing medical devices, during the crisis in March 2008, FDA searched the MAUDE database for the period from January 2005 through December 31, 2007. This search covered all medical device products known to contain heparin, using report-text terms for symptoms or signs consistent with what was known about the contaminant, such as acute respiratory failure and nausea; FDA identified 23 reports for that 3-year period. 
Using the same search term criteria, FDA identified 91 MAUDE reports from January 1, 2008, through August 31, 2008, and 16 reports from September 1, 2008, through September 1, 2009, indicating that the number of reports associated with heparin-containing medical devices had decreased since the crisis. FDA conducted analyses of adverse events, including deaths, associated with heparin drug products and heparin-containing medical devices. To analyze adverse events associated with heparin drugs, FDA reviewed a total of 701 AERS reports associated with heparin that the agency received from January 1, 2008, through March 31, 2008. Of the 701 reports, 675 were identified by searching AERS for allergic-type adverse events associated with heparin, such as a drop in blood pressure or acute respiratory failure, for both death and nondeath events. After excluding 101 allergic-type cases, FDA’s analysis of allergic-type adverse events associated with heparin included a total of 526 nondeath AERS reports and 48 death reports. FDA reported descriptive characteristics about this group of reports—for example, the average age of the patients; the manufacturer, if known, of the heparin drug product administered to the patients; and the clinical setting where the heparin was administered. FDA also analyzed a total of 94 AERS reports of deaths associated with heparin, which included 68 allergic-type adverse events and an additional 26 death reports that were not identified as allergic-type adverse events. FDA conducted further analyses of these reports using specific assessment criteria to determine whether they were caused by heparin, and concluded that three of the deaths were “probable or likely” to have been linked with heparin. However, FDA did not know whether or not the heparin these patients received was contaminated because the lot numbers of the heparin that these patients received were not reported in the AERS reports. 
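The AERS report counts described above reconcile arithmetically. The following is an illustrative Python sketch using only the figures stated in this report; it is not FDA’s actual analysis code, and the variable names are ours.

```python
# Illustrative check of the AERS report counts stated in the text.
# All figures come from the report; variable names are hypothetical.

total_reports = 701        # AERS reports associated with heparin, Jan 1-Mar 31, 2008
allergic_type = 675        # reports identified as allergic-type events
excluded_cases = 101       # allergic-type cases excluded from the analysis
nondeath_analyzed = 526    # nondeath reports included in the analysis
death_analyzed = 48        # death reports included in the analysis

# The analyzed allergic-type reports account for all non-excluded ones:
# 675 - 101 = 574 = 526 + 48.
assert allergic_type - excluded_cases == nondeath_analyzed + death_analyzed

# Separately, the 94 death reports = 68 allergic-type + 26 non-allergic-type.
death_reports, allergic_deaths, nonallergic_deaths = 94, 68, 26
assert allergic_deaths + nonallergic_deaths == death_reports

# The 26 non-allergic-type deaths are exactly the gap between the
# 701 total reports and the 675 allergic-type reports.
assert total_reports - allergic_type == nonallergic_deaths
```

If the assertions pass silently, the counts reported in the analysis are internally consistent.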
To analyze adverse events associated with heparin-containing medical devices, FDA reviewed a total of 143 MAUDE reports that the agency received from January 1, 2008, through August 31, 2008. FDA reviewed all of the MAUDE reports associated with heparin-containing medical devices that FDA received with an event date occurring during that time period. Of the 143 reports, 128 were nondeath adverse events associated with heparin-containing medical devices, and the remaining 15 MAUDE reports had a death outcome. Three of these deaths were associated with medical devices known to contain contaminated heparin. FDA determined that these MAUDE reports of deaths were unlikely to have been caused by exposure to contaminated heparin, based on assessment criteria similar to those FDA used in its analysis of the AERS death reports. (See app. III for FDA’s death assessment criteria and details of its AERS and MAUDE analyses.) FDA’s analyses of adverse events associated with both heparin and heparin-containing medical devices were constrained by data limitations. For example, FDA officials told us that the agency does not necessarily receive a report for every adverse event that occurs. For drug-related adverse event reports submitted to AERS, manufacturers are required to submit adverse event reports to FDA; health providers and consumers are not required to do so, but may submit such reports on a voluntary basis. For device-related adverse event reports submitted to MAUDE, importers, manufacturers, and user facilities (such as hospitals and nursing homes) are required to report certain device-related adverse events to FDA; others, including health professionals and consumers, may submit such reports on a voluntary basis. In addition, many submitted reports do not include sufficient information to allow FDA to determine if a given report was associated with a contaminated product. 
FDA officials told us that they followed up on some of the reports of deaths included in the agency’s AERS and MAUDE analyses by contacting the facility or individual that had submitted the report in an attempt to obtain additional information. Further, in our review of the 94 AERS death reports that FDA had analyzed, we found that only 13 reports included information on heparin lot numbers and 28 of the 46 voluntary reports did not list the heparin manufacturer. Consequently, it was not possible for FDA to determine the heparin contamination status in the majority of these deaths. Moreover, even with complete information, it was difficult for FDA to link patient deaths to contaminated heparin because it was unable to establish a causal relationship due to confounding factors affecting individual patients. For example, the FDA official who conducted FDA’s analyses on adverse events associated with heparin-containing medical devices told us that it was hard to separate problems caused by the heparin contained within the medical device from symptoms or events related to the natural course of the underlying disease or condition, concurrently administered medications, or concurrent procedures. In addition, according to FDA officials, many of the patients who died were very sick and had complicated conditions that could themselves have caused the reported events, making it difficult to conclusively link their deaths to contaminated heparin. FDA took various actions in response to the contaminated heparin crisis to help protect the public health. To help minimize the impact on U.S. consumers of heparin, the agency increased its oversight activities and monitoring of adverse events, worked with heparin manufacturers, and collaborated with its international partners. 
The agency increased its activities related to oversight of heparin firms by increasing the number of inspections and investigations and monitoring heparin imports, and worked with drug and device manufacturers to recall contaminated products while ensuring that an adequate supply of heparin was available. With the help of external entities, FDA identified the unknown contaminant and developed tests to screen heparin products. Agency officials also reached out to international regulatory partners during the crisis to exchange information about contaminated heparin and to help prevent future crises. Within a few months of the agency’s increased efforts and cooperation with other entities, adverse events returned to precrisis levels. While FDA took steps to protect the U.S. public from contaminated heparin, it did not take steps to consider and address risks associated with the way in which it engaged two external scientists and additional external entities engaged by one of these scientists. Although FDA has issued standards on collaboration with external entities in other contexts and governmentwide standards govern the acceptance of services free of charge, FDA did not take steps to ensure that these standards were considered and applied in connection with the heparin crisis. We believe that these standards can be applied in all situations in which the agency collaborates with external entities, including those situations in which time pressures exist. In accepting voluntary services from individuals with ties to heparin firms, including one that was affiliated with a company with a heparin drug product application before FDA for approval, agency officials ran the risk of undermining public confidence in the integrity of FDA’s operations and of subjecting the agency to future claims for payment. 
FDA is charged with protecting the health of the public from problems related to products that it regulates, and the agency works with external entities when necessary to ensure that it meets this goal. Because adulteration of FDA-regulated products could likely happen again, it is critical that the agency have clear and useful controls in place that it can apply in circumstances similar to those presented by the heparin crisis to help ensure that officials take appropriate steps to consider and address risks posed when engaging external entities. The Department of Health and Human Services (HHS) received a draft of this report and provided comments, which are reprinted in appendix IV. HHS also provided technical comments, which we incorporated as appropriate. In its comments, HHS described the challenges FDA faced when it first learned of severe allergic reactions suffered by dialysis patients during treatment. HHS described how FDA worked to protect the public from contaminated heparin while still ensuring that patients had access to a medically necessary drug. HHS said that FDA needed to identify and enlist the help of leading heparin experts to identify the contaminant in heparin. We agree that FDA faced numerous challenges in responding to the heparin crisis, including the need to obtain expert assistance. However, we also note the potential risks FDA faced in working with external scientists on a voluntary basis in the absence of appropriate controls—the risks of undermining public confidence in its efforts and of future claims for payment. Therefore, in our draft report, we recommended that FDA develop adequate controls to help avoid exposure to these risks when working with external entities in future situations similar to the heparin crisis. 
Specifically, we recommended that FDA develop a process for considering risks, including consulting with appropriate offices within the agency; develop a process for documenting the steps taken to address risks; and disseminate guidance on these processes for its employees. FDA addressed the draft recommendation by issuing guidance on October 15, 2010, for FDA staff to follow when working with external scientific and other experts in emergency situations when the services are provided on a gratuitous basis. The guidance includes a policy that is responsive to our recommendation, providing broadly for due consideration of risks that may be presented in collaborative arrangements with external entities, including conflicts of interest, as well as for documentation of decisions about addressing such risks. The guidance also includes specific procedures for the provision of gratuitous services, screening for conflicts of interest, and public disclosure. In its comments, HHS also noted that FDA has learned from the heparin crisis to improve its processes for responding to emergencies. Specifically, FDA finalized its new Emergency Operations Plan to respond to future crises. HHS described various actions FDA took to protect the public health during the crisis and steps the agency has taken to safeguard the nation’s heparin supply, including an increased number of inspections of heparin manufacturing and testing facilities related to the U.S. heparin supply. We had previously described these actions in the report. HHS also mentioned legislation currently under consideration by Congress that it believes will, if enacted, provide FDA with helpful tools to further secure the nation’s drug supply chain, and ensure that the agency can hold industry accountable for the security and integrity of its supply chains and the quality control systems it uses to produce drugs for the American people. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Commissioner of FDA and appropriate congressional committees. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This appendix provides a brief review of the scientific research related to heparin contamination, focusing on peer-reviewed research articles published from January 2008 through January 2010. Heparin is an anticoagulant drug; that is, it prevents the formation of blood clots in the veins, arteries, and lungs. It is used before certain types of surgery, including coronary artery bypass graft surgery; in kidney patients before they undergo dialysis; and to prevent or treat other serious conditions, such as deep vein thrombosis and pulmonary emboli. Heparin is also used in medical devices—for example, blood oxygenators or catheters contain or are coated with heparin, and some diagnostic testing products, such as some capillary tubes, are manufactured using heparin. Heparin is a natural product derived from animal tissue. Specifically, most heparin used in the United States is derived from the intestines of pigs. Pig intestines are processed into crude heparin, which is further refined into heparin active pharmaceutical ingredient (API), the active ingredient used in heparin drug products and devices. More than half of the finished heparin products in the United States and globally are made from Chinese-sourced materials. The chemical makeup of heparin is complex. 
Because heparin is a drug derived from animal tissue, it is not a single chemical, but a mixture of many similar chemical chains of different sizes. Two types of heparin are used in clinical practice: unfractionated heparin (UFH) and low molecular weight heparin (LMWH). The two forms of heparin differ in their molecular size and the route of administration: UFH is comprised of larger molecules than LMWH and is usually administered intravenously, while LMWH is usually administered subcutaneously (that is, injected under the skin). UFH is used often in the United States, whereas in Europe the predominant heparin is LMWH. Researchers and officials we interviewed said that the number of adverse events related to contaminated heparin may have varied by country because of these differences in the type of heparin administered and methods of administration, as well as because of differences in countries’ adverse event reporting systems. In particular, one researcher explained that in the United States, physicians tend to administer a bolus dose of heparin, which is a faster method of administration but places patients at greater risk for a fatal drop in blood pressure. Food and Drug Administration (FDA) officials and their collaborators agreed that oversulfated chondroitin sulfate (OSCS) was a contaminant in the heparin that caused adverse events during the heparin crisis. FDA researchers and their collaborators showed that batches of heparin that had been associated with adverse events contained a contaminant. They identified that substance as OSCS. Chemically, OSCS is similar to heparin, but OSCS is probably not a naturally occurring chemical. The researchers confirmed their identification by matching the contaminant to synthetic OSCS created by chemical modification of chondroitin sulfate, an inexpensive natural product used for the self-treatment of arthritis. Other research articles have provided additional evidence that OSCS was present in contaminated heparin. 
For example, Clark et al. performed analyses on some contaminated heparin batches and concluded that the properties of the contaminant were consistent with those of OSCS. Viskov et al. showed that the chemical properties of OSCS isolated from a batch of contaminated heparin were similar to those of synthetic OSCS. Finally, Zhang et al. examined samples of heparin from as far back as 1941 and identified the presence of OSCS in a sample from the U.S. market that was produced in 2008. LMWH was also affected by OSCS contamination. Zhang et al. evaluated the sensitivity of OSCS to five different processes similar to ones used in preparing LMWH, and found that these processes varied in the extent to which they affected OSCS. The source of the OSCS contamination is still unknown, and researchers have proposed various hypotheses about its origin. For example, Fareed et al. suggested that the contamination of heparin with OSCS was not accidental, but was based on a rational design and prior knowledge of the chemical’s molecular and anticoagulant profiles. Pan et al. conducted an analysis that detected additional under- and oversulfated contaminants in contaminated heparin and proposed that the OSCS present in the contaminated heparin batches could have come from an oversulfated form of a byproduct of the heparin production process, rather than from animal cartilage. Another study considered this hypothesis but concluded, based on analysis of oversulfated byproducts provided by Baxter (a major heparin manufacturer), that production byproducts were likely not the source of the OSCS found in contaminated heparin. CDC researchers found a link between adverse events and contaminated heparin. These researchers collected data related to the period November 2007 through January 2008 from 21 dialysis facilities that reported adverse events and 23 facilities that reported no adverse events. 
With these data, the researchers conducted a case-control study to test whether facility-level risk factors—such as the size of the facility, the type of heparin used at the facility, and the type of dialysis equipment used at the facility—were related to adverse events. They found a significant association between the number of adverse events reported by facilities and their use of Baxter heparin. They reported that the type of adverse reactions experienced by patients who received contaminated heparin varied, but often included low blood pressure and nausea. The researchers could not estimate the percentage of patients who experienced adverse reactions after receiving contaminated heparin because the total number of patients in the United States who received heparin during this period is unknown. In other articles, researchers have proposed possible biological mechanisms by which OSCS could have caused the observed adverse events. Researchers have also suggested that exposure to OSCS could have effects beyond the acute allergic reactions reported during the heparin crisis. For example, one article showed that patients who received dialysis at a university in the United States in 2008 had more of a specific type of antiheparin antibody in their blood than patients who received dialysis in 2006 and 2007, indicating that OSCS may cause an immune response not seen with uncontaminated heparin. Similarly, other researchers have presented data showing that the incidence of heparin-induced thrombocytopenia, a type of immune reaction to heparin, increased in Germany during the contaminated heparin crisis. The standard for heparin testing now includes two tests for OSCS. In October 2009, the United States Pharmacopoeia heparin monograph—the testing standard applied to all heparin reaching the U.S. market—was revised to specify that nuclear magnetic resonance spectroscopy and chromatography be used both to positively identify heparin and to ensure the absence of OSCS in a sample. 
During and after the contaminated heparin crisis, researchers investigated other methods to detect contaminated heparin. For example, FDA researchers have studied a screening method that is capable of detecting oversulfated contaminants like OSCS and could be used to test heparin-coated devices as well as heparin drug products. In addition, researchers have proposed that it might be possible to screen or check heparin using a blood test. Other researchers have investigated the use of more advanced approaches capable of detecting OSCS and other potential contaminants. FDA reviewed its Adverse Event Reporting System (AERS) for adverse event reports associated with heparin drug products that the agency received from January 1, 2008, through March 31, 2008. FDA conducted two AERS analyses: an analysis of allergic-type adverse events, including deaths, associated with heparin drug products, and an analysis of reports of deaths associated with heparin drug products that included both allergic-type adverse events and reports that were not identified as allergic-type adverse events. To identify reports for its AERS analysis of allergic-type adverse events, including deaths, associated with heparin drug products, FDA used an expanded case definition from the Centers for Disease Control and Prevention’s (CDC) investigation of allergic-type events in hemodialysis patients. The CDC working case definition included confirmed and probable cases. A confirmed case, per the CDC case definition, was defined as an episode of anaphylactic or anaphylactoid reaction (severe hypersensitivity reactions) with angioedema (swelling) or urticaria (hives). 
A probable case was defined as an episode that included at least two of the following signs and symptoms: (1) generalized or localized sensations of warmth; (2) numbness or tingling of the extremities; (3) difficulty swallowing; (4) shortness of breath, wheezing, or chest tightness; (5) low blood pressure/tachycardia; or (6) nausea or vomiting. Each report in FDA’s AERS analyses of allergic-type adverse events also included at least one Medical Dictionary for Regulatory Activities (MedDRA) preferred term (PT) found under the Standardized MedDRA Query Plus (SMQ+) “anaphylactic reaction” as well as additional non-SMQ preferred terms of interest. MedDRA is clinically validated international medical terminology used by regulatory authorities (see table 1 for a list of FDA’s search term criteria). In addition, AERS cases meeting at least one of the following seven criteria were excluded from further analysis of allergic-type adverse events associated with heparin drug products: 1. cases judged to have a clearly identifiable alternative clinical explanation for the events, 2. cases in which the event reportedly occurred prior to the year 2007, 3. cases that could not be clinically interpreted, 4. cases of heparin-induced thrombocytopenia with or without thrombosis, 5. cases where it was uncertain if the patient was treated with heparin, 6. cases from literature reports that described unrelated issues, and 7. cases reported in error and retracted by the reporter. In its analysis of AERS reports of deaths associated with heparin drug products, FDA included reports of both allergic-type adverse events as well as reports that were not identified as allergic-type adverse events since these cases had a fatal outcome. Table 2 shows the specific assessment criteria that FDA used in its analyses of AERS reports of deaths associated with heparin drug products to determine whether there was an association between the event of death and heparin. 
FDA did not apply these criteria to its analysis of allergic-type adverse events associated with heparin drug products. See figure 4 for details of FDA’s AERS analyses. In its Manufacturer and User Facility Device Experience (MAUDE) analysis of adverse events, including deaths, associated with heparin-containing medical devices, FDA included all MAUDE reports that it received with an event date from January 1, 2008, through August 31, 2008. However, if a MAUDE report did not specifically list an event date but was received by FDA during the specified time period, it was conservatively assumed to have occurred during that time frame and included in the MAUDE analysis. For each MAUDE report of death, FDA considered the patient’s underlying condition, including the severity of the patient’s condition, medications the patient was taking, and concomitant procedures or surgeries being undertaken to determine if there was a plausible explanation for the death. The presence of symptoms matching the SMQ+ search terms noted in table 1 was also taken into account, as well as the timing of the event relative to the use of the heparin-containing medical device. In this analysis, FDA used assessment criteria similar to those in table 2 to classify the deaths associated with heparin-containing medical devices that were known to contain contaminated heparin as unlikely to have been caused by contaminated heparin. FDA used a time criterion of 3 hours for the occurrence of the event for its MAUDE analysis compared with 1 hour for the AERS analyses because, according to an FDA official, adverse reactions to a heparin-containing medical device could potentially take longer to occur than when a patient receives a heparin drug product intravenously (see fig. 5 for details of FDA’s MAUDE analysis). In addition to the contact named above, key contributors to this report were Tom Conahan, Assistant Director; Susannah Bloch; Helen Desaulniers; Linda Galib; Julian Klazkin; Lisa A. Lusk; and Samantha Poppe.
In early 2008, the Food and Drug Administration (FDA) responded to a crisis involving the contamination of heparin, a medication used to prevent and treat blood clots, when the agency received multiple reports of adverse events involving severe allergic reactions. The crisis took place from January 2008 through May 2008, during which time FDA took several actions in its response to the crisis. GAO was asked to review FDA's management of the heparin crisis. This report examines (1) how FDA prevented additional contaminated heparin from reaching U.S. consumers, (2) how FDA coordinated its response to the contaminated heparin crisis, and (3) FDA's monitoring and analysis of adverse events associated with heparin. To conduct this review, GAO reviewed relevant FDA documents, regulations, and guidance; analyzed FDA data; and interviewed FDA officials and other experts involved in the crisis and knowledgeable about drug quality standards. In its response to the heparin crisis, FDA took several actions related to its responsibility to protect the public health by ensuring the safety and security of the nation's drug and medical device supplies. FDA increased its activities related to oversight of heparin firms by conducting inspections and investigations and monitoring heparin imports, and worked with drug and device manufacturers to recall contaminated products while ensuring that an adequate supply of uncontaminated heparin was available. With the help of external entities, FDA identified the unknown contaminant and developed tests to screen all heparin products. Additionally, the agency reached out to its international regulatory partners during the crisis. However, FDA faced some limitations in its efforts to inspect heparin firms in China and collaborate internationally, and the agency was unable to determine the original source of contamination. 
FDA coordinated internal and external resources to respond to the contaminated heparin crisis, but did not address risks related to working with certain external entities with ties to heparin firms. The agency has issued standards of ethics regarding collaboration with external entities and governmentwide standards apply to the acceptance of services provided free of charge. Despite these existing standards, FDA did not have processes in place to ensure that it considered or applied them when it accepted assistance from external entities with ties to heparin firms on a voluntary basis during the heparin crisis. Not adequately addressing these risks could have affected the public's confidence in FDA's response efforts and in its other activities related to the regulation of heparin products and also left FDA open to claims for payment for services that these external entities provided to FDA. FDA monitored trends in the number of reports of adverse events associated with heparin drug products and heparin-containing medical devices that it received before, during, and after the crisis. FDA also conducted analyses of adverse events, including deaths, associated with heparin drug products and heparin-containing medical devices. However, FDA was unable to determine if any of the adverse events or deaths were linked to contaminated heparin because of data limitations and confounding factors regarding the individual patients, such as the natural course of the underlying disease or condition. In the draft report we provided to the Department of Health and Human Services for comment, we recommended that FDA develop adequate controls to help avoid exposure to risks when working with external entities in future situations similar to the heparin crisis. In response, FDA issued guidance on October 15, 2010, for FDA staff to follow when working with external scientific and other experts in emergency situations when the services are provided on a gratuitous basis. 
FDA also stressed the unprecedented nature of the heparin crisis and noted various actions it took in response to the crisis.
Based on our analysis, more than 1,280 CFC charities had federal tax debts totaling $35.6 million as of September 30, 2005. This represented nearly 6 percent of the charities that participated in the OPM-administered 2005 campaign. $27.7 million of this debt represented payroll taxes, penalties, and interest dating as far back as 1988. The remaining $7.9 million includes annual reporting penalties, excise taxes, exempt organization business income, unemployment taxes, and other types of taxes and penalties. In performing our analysis, we took a conservative approach to identifying the amount of tax debt owed by the CFC’s charities, and therefore the number of delinquent charities and amount due to the IRS are likely understated. We also found that at least 170 charities with unpaid taxes also benefited by receiving about $1.6 billion in federal grants. As indicated in figure 1, payroll taxes comprised $27.7 million, or almost 80 percent, of the $35.6 million in unpaid federal taxes owed by CFC charities. Unpaid payroll taxes included amounts that were withheld from employees’ wages for federal income taxes, Social Security, and Medicare but not remitted to the IRS, as well as the matching employer contributions for Social Security and Medicare. Employers who fail to remit payroll taxes to the federal government may be subject to civil and criminal penalties. Figure 1 shows the types of federal taxes owed by CFC charities as of September 30, 2005. The next largest component, annual reporting penalties, was $4.5 million or almost 13 percent of the unpaid taxes. Generally, the IRS requires 501(c)(3) charities with more than $25,000 of income to file an annual return (i.e., Form 990). This annual return serves as the basis for review in determining whether an organization continues to meet requirements for exempt status. Failure to file an annual return at all or in a timely manner, as well as filing an incomplete return, results in various types of penalties. 
Excise taxes related to employee benefit plans, exempt organization business income taxes, unemployment, and other types of taxes and penalties comprised the remaining $3.4 million. The majority of the approximately 1,280 delinquent charities, 78 percent, owed less than $10,000 in delinquent taxes. Fifteen percent owed from $10,000 to $50,000, and 7 percent owed more than $50,000 in delinquent taxes. Also, 91 percent of 1,280 charities were delinquent for up to 4 tax periods, 7 percent of charities for 5 to 9 tax periods, and 2 percent for 10 or more tax periods. The amount of unpaid federal taxes we identified among CFC charities— $35.6 million—is understated. To avoid overestimating the amount owed by CFC charities, we intentionally limited our scope to tax debts that were affirmed by either the charity or a tax court for tax periods prior to 2005. We did not include the most current tax year because recently assessed tax debts that appear as unpaid taxes may involve matters that are routinely resolved between the taxpayer and the IRS, with the taxes paid, abated, or both within a short period. We eliminated these types of debt by focusing on unpaid federal taxes for tax periods prior to calendar year 2005 and eliminating tax debt of $100 or less. Also limiting our estimate of CFC charities’ unpaid federal taxes is the fact that the IRS tax database reflects only the amount of unpaid taxes either reported by the charity on a tax return or assessed by the IRS through various enforcement programs. The IRS database upon which we relied exclusively does not reflect amounts owed by charities that have not filed tax returns or that have underreported the owed taxes in their return and for which the IRS has not assessed tax amounts due. According to the IRS, underreporting of payroll taxes accounts for about $60 to $70 billion of the estimated $345 billion annual gross tax gap. Consequently, the true extent of unpaid taxes for these charities is unknown. 
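The dollar figures and percentages cited above can be reconciled with a few lines of arithmetic. The sketch below (figures in millions, taken directly from the report text) is purely an illustrative consistency check, not part of our analysis methodology.

```python
# Consistency check of the CFC unpaid-tax composition cited in the text.
# All dollar figures are in millions of dollars.
payroll = 27.7              # unpaid payroll taxes, penalties, and interest
reporting_penalties = 4.5   # annual reporting (Form 990) penalties
other = 3.4                 # excise, business income, unemployment, and other items
total = 35.6                # total unpaid federal taxes owed by CFC charities

# Components should sum to the reported total (allowing for rounding).
assert abs((payroll + reporting_penalties + other) - total) < 0.05

# Shares match the report's "almost 80 percent" and "almost 13 percent".
assert round(payroll / total * 100) == 78
assert round(reporting_penalties / total * 100) == 13
```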
In performing our analysis, we identified at least 170 of the CFC charities with delinquent tax debt that also received federal grants totaling about $1.6 billion from the Departments of Health and Human Services (excluding Medicaid), Education, and others in 2005. These charities are benefiting from the federal government through their tax-exempt status and receipt of substantial amounts of federal grants, while not meeting their responsibility to pay required federal taxes. Included in the $1.6 billion are grants to 5 of the 15 charities we selected, totaling more than $6.5 million. Executives responsible for the tax debts of the 15 charities we investigated abused the federal tax system and may have violated the law by diverting payroll or other taxes due to the IRS. Willful failure to remit payroll taxes is a felony under U.S. law, and the IRS can assess a trust fund recovery penalty (TFRP) equal to the total amount of taxes not collected or not accounted for and paid against all individuals who are determined by the IRS to be “willful and responsible” for the nonpayment of withheld payroll taxes. In this regard, one executive from these 15 case study CFC charities was assessed a TFRP for what IRS determined to be his abusive behavior. Table 1 highlights 5 of the 15 case study CFC charities that we investigated with payroll tax issues. For the five charities in table 1, tax debt ranged from about $100,000 to more than $1.5 million, and the unpaid taxes spanned a period ranging from 5 to more than 12 payroll tax periods. In addition to the federal tax debt, two of the five CFC charities had unpaid state and/or local taxes, where state and/or local taxing authorities filed multiple tax liens against them. During the time frames for which these charities were not paying their taxes, funds were available to cover other charity expenses, including officer salaries. 
Executives at two charities explained that they knowingly withheld payroll taxes in order to have enough funds available to pay their own salaries and the salaries of charity employees, in addition to charity expenses. One executive we investigated denied owing payroll or other taxes when IRS records showed otherwise. In at least one case, the charity’s executives remitted payroll taxes later than the IRS required in order to pay their salaries first, while the charity accumulated tens of thousands of dollars in penalties and interest for remitting late. We also identified directors and senior executives who potentially could be assessed TFRPs by the IRS for the debts of their charities. Some of these directors and executives had salaries in excess of $100,000 and owned significant personal assets. One of these executives has already been assessed a TFRP. See appendix III for details on the other 10 CFC charities we reviewed. We referred all 15 cases discussed in our report to the IRS so that it can determine whether additional collection action or criminal investigation is warranted. OPM does not screen charities for federal tax debt prior to granting CFC eligibility, thereby making charities with unpaid federal taxes eligible to receive donations from federal civilian employees and military personnel. OPM policies do not specifically require CFC charities to be screened for these problems. Additionally, federal law generally prohibits the disclosure of taxpayer data and, consequently, even if OPM had specific policies to check for unpaid taxes, it has no access to a specific charity’s tax data. OPM determines the completeness of a charity applicant’s paperwork, but it does not perform third-party verification of documents as part of that process. For example, OPM does not verify with the IRS the tax-exempt status of CFC applicants and relies solely on each applicant’s submission of IRS documentation that it is a bona fide charity. 
To demonstrate the vulnerability created by OPM’s lack of validation of tax-exempt status, we applied to three of CFC’s largest local 2006 campaigns using a fictitious charity with entirely false documents and an erroneous IRS taxpayer identification number. We were accepted into all three campaigns. OPM does not screen charities for tax debts prior to granting CFC eligibility and, ultimately, charities with unpaid federal taxes are eligible to receive donations from federal civilian employees and military personnel. Neither federal law nor the implementing regulations in the Code of Federal Regulations requires OPM to screen charities for federal tax delinquency or explicitly authorizes the CFC to reject charity applicants that have delinquent tax debt from participation in the CFC. Consequently, CFC’s processes for determining eligibility are based on and limited to what is required of the CFC in Part 950 of Title 5, C.F.R. Federal law does not permit the IRS to disclose taxpayer information, including tax debts. Thus, unless the taxpayer provides consent, certain tax debt information can only be discovered from public records when the IRS files a federal tax lien against the property of a tax debtor. However, public record information is limited because the IRS does not file tax liens on all tax debtors, and, while the IRS has a central repository of tax liens, OPM officials do not have access to that information. Further, the listing of a federal tax lien in the credit reports of an entity or its key officials may not be a reliable indicator of a charity’s tax indebtedness because of deficiencies in the IRS’s internal controls that have resulted in the IRS not always releasing tax liens from property when the tax debt has been satisfied. Part 950 of Title 5 of the Code of Federal Regulations requires that applicants to the CFC include in their application packages a copy of their most recent IRS determination letter showing the charity’s 501(c)(3) status. 
OPM does not perform any independent verification of charity applicants’ tax-exempt status. The IRS does have publicly available data with which OPM could verify an applicant’s tax-exempt status, but this is not an OPM-required procedure in the CFC eligibility determination process. Other documents OPM requires applicants to include in the CFC application package are a copy of the charity’s most recent Form 990, their most recent annual audit report, and an application with various self-certifications. According to an official from one of the CFC’s largest local campaigns, the single most frequent reason for rejecting an applicant from the CFC is the applicant’s failure to submit its IRS determination letter. To determine whether and to what extent CFC’s eligibility determination processes are vulnerable, we applied to three local campaigns with a fictitious charity using fake documents and an erroneous IRS taxpayer identification number. In all three campaigns, our application for participation in the 2006 CFC was accepted. Figure 2 shows one example of the three letters we received regarding our acceptance into the 2006 CFC. Immediately after our applications were accepted, we notified CFC officials and withdrew our charity from the campaigns in order to prevent donations to our fictitious charity. In addition to our direct testing of OPM’s screening process, our match of CFC charities from the 2005 campaign against the IRS’s database of tax-exempt organizations identified charities whose 501(c)(3) status could not be confirmed. Therefore, we referred these charities to OPM and IRS for further review and confirmation of their tax-exempt status. The success of OPM’s CFC is predicated on each donor’s confidence in a system that ensures that their donations reach charitable organizations that have met the CFC’s specific eligibility requirements and are legitimate charities. 
The bona fide charities participating in the annual campaign have the most to lose when such confidence is shaken because of abuse by a minority of participating charities. Until OPM takes steps to independently validate whether applicants are legitimate 501(c)(3) organizations, the campaign is vulnerable to entities that fraudulently purport to be charities. Further, tax-abusing charities will continue to benefit by being eligible to participate and receive donations unless OPM is provided access to their tax debt information and determines whether sanctions such as expulsion from the CFC are warranted. Without these measures, OPM and each local CFC cannot provide the assurance needed to sustain such confidence. This could have devastating consequences for the vast majority of eligible and tax-compliant charities that are dependent on donor contributions to support their critical missions. Mr. Chairman and Members of the Subcommittee, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-7455 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Our objectives were to investigate and determine whether and to what extent (1) charities listed in the 2005 Combined Federal Campaign (CFC) have unpaid payroll and other federal taxes; (2) selected charities, their directors, or senior officers are abusing the federal tax system; and (3) the Office of Personnel Management (OPM) screens charities for federal tax problems before allowing them to be listed with the CFC. To determine whether any of the charities listed in the 2005 CFC have unpaid payroll and other federal taxes, we first identified charities that participated in the 2005 campaign. To identify CFC charities, we requested data from CFC headquarters. 
To obtain these data, CFC headquarters requested data from the 299 local campaigns throughout the United States. We received data from 291 of the 299 local campaigns. To identify CFC charities with unpaid federal taxes, we obtained and analyzed the Internal Revenue Service’s (IRS) September 30, 2005, Unpaid Assessments file. We matched the CFC charity data to the IRS unpaid assessment data using the taxpayer identification number (TIN) field. To avoid overstating the amount owed by charities with unpaid federal tax debts and to capture only significant tax debt, we excluded tax debts meeting any of the following criteria: tax debts the IRS classified as compliance assessments or memo accounts for financial tax debts from calendar year 2005 tax periods; and tax debts of charities whose total unpaid taxes were $100 or less. These criteria were used to exclude tax debts that might be under dispute, duplicative, or invalid, as well as recently incurred tax debts. Specifically, compliance assessments or memo accounts were excluded because these taxes have neither been agreed to by the taxpayers nor affirmed by the court, or these taxes could be invalid or duplicative of other taxes already reported. We excluded tax debts from calendar year 2005 tax periods to eliminate tax debt that may involve matters that are routinely resolved between the taxpayers and the IRS, with the taxes paid or abated within a short period. We also excluded tax debts of $100 or less because they are insignificant for the purpose of determining the extent of taxes owed by CFC charities. The 2005 pledged donation (pledges) information was unavailable at the time we selected our charity cases for investigation. We requested pledge information from the CFC and were in the process of receiving these data, piecemeal, from the CFC’s 299 campaigns as of the end of our fieldwork. 
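The exclusion criteria described above amount to a simple filter applied to each record produced by the TIN match. The following is a minimal sketch of that logic; the field names and record layout are hypothetical, not the actual structure of the IRS Unpaid Assessments file:

```python
from datetime import date

def is_significant_debt(record):
    """Apply the report's three exclusion criteria to one matched record.

    A debt is kept only if it is (1) not a compliance assessment or memo
    account, (2) not from a calendar year 2005 tax period, and (3) more
    than $100 in total. Field names here are illustrative only.
    """
    if record["assessment_type"] in ("compliance", "memo"):
        return False  # neither agreed to by the taxpayer nor affirmed by a court
    if record["tax_period_end"].year == 2005:
        return False  # recently incurred; often resolved routinely with the IRS
    if record["total_unpaid"] <= 100:
        return False  # insignificant for this analysis
    return True

records = [
    {"assessment_type": "standard", "tax_period_end": date(2003, 12, 31), "total_unpaid": 45000},
    {"assessment_type": "memo", "tax_period_end": date(2002, 6, 30), "total_unpaid": 12000},
    {"assessment_type": "standard", "tax_period_end": date(2005, 3, 31), "total_unpaid": 8000},
    {"assessment_type": "standard", "tax_period_end": date(2004, 9, 30), "total_unpaid": 75},
]
significant = [r for r in records if is_significant_debt(r)]
print(len(significant))  # only the first record passes all three criteria
```

Each criterion independently removes a record, so the surviving set contains only debts that are valid, established, and large enough to matter for the analysis.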
The pledge information we received through the end of fieldwork lacked the detail necessary to efficiently determine the amount of pledges for tax-delinquent charities. Consequently, we were unable to determine the amount of pledges received for tax-delinquent charities we identified. To determine whether selected charities, their directors, or senior officers are abusing the federal tax system, we selected 15 charities for a detailed audit and investigation. We selected the 15 charities using a nonrepresentative selection approach based on our judgment, data mining, and a number of other criteria, including the amount of unpaid taxes, number of unpaid tax periods, amount of payments reported by the IRS, and indications that key officials might be involved in multiple charities with tax debts. We obtained copies of automated tax transcripts and other tax records (for example, revenue officers’ notes) from the IRS as of September 30, 2005, and reviewed these records to exclude charities that had recently paid off their unpaid tax balances and considered other factors before reducing the selection of charities to 15 case studies. For the selected 15 cases, we reviewed the charity CFC application files and performed additional searches of criminal, financial, and public records. Our investigators also contacted several of the charities and conducted interviews. To determine whether and to what extent OPM screens charities for federal tax problems before allowing them to be listed with the CFC, we reviewed OPM’s policies and procedures, performed process walkthroughs, and interviewed key CFC officials at CFC Headquarters and three local campaigns. We reviewed laws and regulations governing OPM’s administration of the CFC. We identified processes and procedures performed by the CFC during the annual application period. 
To confirm our understanding of the requirements placed on charity applicants and to test whether OPM’s processes would identify fraudulent charities, we attempted to gain acceptance into the 2006 CFC by posing as a charity. We prepared and submitted application packages for each of three local campaigns using fake documentation for a fictitious charity. To test the effectiveness of OPM’s processes and procedures to identify charity applicants that are not valid tax-exempt organizations, a primary requirement for participation in the CFC, we matched the list of CFC charities that participated in the 2005 campaign with the IRS’s database of tax-exempt organizations. We conducted our audit work from January 2006 through May 2006 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. For the IRS unpaid assessments data, we relied on the work we performed during our annual audits of the IRS’s financial statements. While our financial statement audits have identified some data reliability problems associated with the coding of some of the fields in the IRS’s tax records, including errors and delays in recording taxpayer information and payments, we determined that the data were sufficiently reliable to address this testimony’s objectives. Our financial audit procedures, including the reconciliation of the value of unpaid taxes recorded in IRS’s master file to IRS’s general ledger, identified no material differences. To help ensure reliability of CFC-provided data, we performed electronic testing of specific data elements in the databases that we used to perform our work and performed other procedures to ensure the accuracy of the charity data provided by the CFC. 
Based on our discussions with agency officials, our review of agency documents, and our own testing, we concluded that the data elements used for this testimony were sufficiently reliable for our purposes. The Combined Federal Campaign (CFC) is the only authorized solicitation of employees in the federal workplace on behalf of charitable organizations. The CFC’s mission is to promote and support philanthropy through a program that provides all federal employees the opportunity to improve the quality of life for others through donations to eligible nonprofit organizations. In 1971, the CFC began operation as a combined campaign with donations solicited once a year. Also during this period, charitable contributions in the form of payroll deduction were made possible. Contributions grew dramatically from $12.9 million in 1964 to $82.8 million in 1979. Growth in the number of participating charities was slow through the 1970s, increasing from 23 charities in 1969 to only 33 charities in 1979. Significant changes in CFC regulations occurred in the late 1970s and early 1980s; in April 1984, these changes opened the CFC to organizations that received tax-exempt status under section 501(c)(3) of the Internal Revenue Code. The CFC has grown to a campaign consisting of approximately 1,700 national and international charitable organizations (as of the 2005 campaign) and more than 21,000 local charities. Contributions have also increased from about $95 million in 1981 to more than $255 million in 2004. Each campaign is conducted during a 6-week period, varying by local campaign from September 1 through December 15, at every federal agency in the campaign community. During this period, current federal civilian and active duty military employees, throughout the country and internationally, donate tens of millions of dollars to these nonprofit organizations that provide health and human service benefits throughout the world. 
The Director of the Office of Personnel Management (OPM) exercises general supervision over all operations of the CFC and takes steps to ensure the campaign objectives are achieved. The CFC is decentralized; therefore, each of the approximately 300 campaigns manages its local campaign and then reports statistics in aggregate to OPM. The Local Federal Coordinating Committee (LFCC) is the leadership element of the local CFC and is composed of members from the federal community (federal civilian, military, and postal). The LFCC annually solicits a principal combined fund organization (PCFO), conducts local agency eligibility reviews, approves campaign materials, conducts compliance audits, serves as the liaison to federal agency heads, and is generally engaged in a host of the scheduled campaign activities. The PCFO manages all aspects of the campaign. The PCFO develops campaign materials; serves as fiscal agent; collects, processes, and distributes pledges; and trains loaned executives and campaign personnel. The PCFO and the LFCC are responsible for reporting summary data about their campaign results to OPM. Table 1 in the main portion of this testimony provides data on 5 detailed case studies. Table 2 shows the remaining case studies that we audited and investigated. As with the 5 cases discussed in the body of this testimony, for all 10 of these case studies we found abuse or potentially criminal activity related to the federal tax system. All 10 charities in table 2 had unpaid payroll taxes.
The Office of Personnel Management (OPM) administers the annual Combined Federal Campaign (CFC), which in the 2005 campaign gave more than 22,000 charities access to the federal workplace and collected more than $250 million in donations to help those in need. The success of the campaign is predicated on each donor's confidence in a system that ensures donations reach charitable organizations that have met the CFC's specific eligibility requirements and are legitimate charities. For example, to be eligible, each charity must have formally received from the Internal Revenue Service (IRS) tax-exemption designation under 501(c)(3) of the Internal Revenue Code. The Subcommittee on Oversight is reviewing tax-exempt entities and asked GAO to determine whether charitable organizations participating in the CFC were remitting their payroll and other taxes to the IRS as required by law. Specifically, GAO was asked to investigate and determine whether and to what extent (1) charities listed in the 2005 CFC have unpaid payroll and other taxes; (2) selected charities, their directors, or senior officers are abusing the federal tax system; and (3) OPM screens charities for federal tax problems before allowing them to be listed with the CFC. More than 1,280 CFC charities, or about 6 percent of charities in the OPM-administered 2005 campaign, had tax debts totaling approximately $36 million as of September 30, 2005. The majority of delinquent charities owed less than $10,000. Approximately $28 million of this debt represented payroll taxes, penalties, and interest dating back as far as 1988. The remaining $8 million represented annual reporting penalties, excise taxes, exempt organization business income, unemployment taxes, and other types of taxes and penalties during this same period. Further, at least 170 of the charities with tax debt received about $1.6 billion in federal grants in 2005. 
GAO investigated 15 CFC charities, selected primarily for the amount and age of their outstanding tax debt. All 15 charities engaged in abusive and potentially criminal activity related to the federal tax system. Although exempt from certain taxes (e.g., federal income tax), these charities had not forwarded payroll taxes withheld from their employees along with other taxes to the IRS. Willful failure to remit payroll taxes is a felony under U.S. law. However, rather than fulfill their role as trustees of this money and forward it to the IRS, the directors and senior officers diverted the money for charity-related expenses, including their own salaries, some of which were in excess of $100,000. We referred all 15 of these charities to the IRS for consideration of additional collection or criminal investigation. OPM does not screen CFC charities for federal tax problems or independently validate with the IRS whether the charity is truly a tax-exempt organization. Federal law prevents OPM from accessing taxpayer information required to screen for tax delinquency, although information on exempt status is available to the public. Consequently, OPM was unaware of the charities that owed federal tax debt and cannot provide assurance that the more than 22,000 participating charities are tax-exempt organizations. To demonstrate the vulnerability of this process, GAO created a fictitious charity and successfully applied to three large local campaigns.
RTC was required by law to assist minorities in acquiring failed thrifts. Specifically, the RTC Completion Act required RTC to give preference to any offer from minority bidders for acquiring failed thrifts located in predominantly minority neighborhoods (PMN) that would result in the same cost to RTC as determined under section 13(c)(4) of the Federal Deposit Insurance Act, as amended by the Federal Deposit Insurance Corporation Improvement Act of 1991. This section of the act requires RTC to choose the alternative for resolving a failed thrift that results in the least cost to RTC. Additionally, a minority acquirer of a thrift in a PMN was to have first priority in the disposition of performing assets of failed thrifts. To satisfy these requirements, RTC established its minority preference resolutions program in February 1994. Under this multifaceted program, RTC was to offer a failed minority-owned thrift to investors of the same minority group before offering it to others. Additionally, bidding preferences were to be given to offers from minority-owned financial institutions to acquire any failed thrift whose home office was located in a PMN or that had 50 percent or more of its offices in PMNs, provided that this preference would not increase the cost to RTC. Specifically, under the preference, if a minority bidder was within 10 percent of the highest bid made by a nonminority bidder, then a “best and final” round of bidding was to take place between them. As part of this program, RTC was also to provide a winning minority bidder with (1) interim capital assistance of up to two-thirds of the required regulatory capital and (2) branch facilities, located in a PMN and owned by RTC, on a rent-free basis for 5 years. In July 1994, RTC issued procedures for selling 1- to 4-family residential mortgage loans to acquirers of whole thrifts or branches under this program. 
In essence, after the sale of a thrift, RTC was to have 45 days to develop the preliminary pricing of the loans to be sold, and the minority acquirer was to have up to 90 days to review the loans. When this review was completed, RTC was to give the acquirer the final sales prices for the loans. The acquirer was then to have 3 days to decide which loans to purchase and a fourth day to notify RTC of its choice. Minority acquirers could purchase loans of up to 100 percent of the net deposits assumed from RTC in the acquisition of the failed thrift. The process RTC established to sell performing 1- to 4-family residential mortgage loans to minority acquirers has undergone several changes, in part because of concerns raised by a group of seven minority acquirers. This group believed (1) that RTC should not be responsible for pricing the loans, (2) that RTC’s current methodology resulted in the loans being overpriced, and (3) that the resale provision was unfair. In March 1994, RTC stated that it would have its own staff price the loans. However, to ensure that the pricing was done in an equitable manner, in June 1994 RTC hired two asset valuation contractors to independently price the mortgage loans, thus removing itself from the pricing process. To ensure objectivity, RTC awarded fixed-fee contracts whereby neither the sales price established for the loans nor the price paid by the purchaser was a factor in determining the fee paid to the asset valuation contractors. Additionally, RTC’s March 1994 pricing procedures and mortgage loan sales agreement stated that RTC would be entitled to receive 50 percent of the acquirer’s profit if the acquirer sold any of the mortgage loans prior to 181 days after the closing of the sales agreement. However, by June 1994, RTC had decided to eliminate this resale provision based on concerns raised by the minority group. 
Finally, under the provisions of the mortgage loan sale agreement, RTC was expected to credit, to the minority acquirers who exercised their option to purchase the mortgage loans, the interest accrued on the loans selected. The period of accrual was to begin 45 days after the signing of the agreement and end on the day preceding the closing date of the transaction. The accrued interest is defined in RTC’s minority loan pricing procedures as the coupon interest rate on the loans less the average federal funds rate during the accrual period. However, to resolve a contract dispute regarding the final pricing of the mortgage loans, RTC provided the minority acquirers who decided not to purchase the mortgage loans with the following option: the acquirer could choose not to exercise the agreement on the loan portfolio, but rather receive the interest accrued on the respective portfolio. Under this option, the acquirer waived the right to purchase any 1- to 4-family residential mortgage loans through the minority preference resolutions program. The RTC Completion Act required that we submit an annual report to Congress on transfers of performing assets by RTC to any acquirer. In discussions with the oversight committees, it was agreed that our report would focus on assets sold to minority acquirers. Specifically, the objectives of our review were to (1) assess how RTC determined the fair market value for the loans transferred and (2) ascertain the number and description of performing loans transferred to minority thrifts. Although the act required us to assess RTC’s determination of fair market value for the loans transferred, we were unable to evaluate RTC’s determination for the following reasons. First, fair market value is commonly measured through competitive sales, and these loans were offered under the preference program rather than sold competitively. Second, loans not purchased by minority acquirers were sold in bulk, and sales prices were not assigned to individual loans. 
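The accrued-interest provision described above (the coupon rate less the average federal funds rate, accruing from 45 days after signing through the day before closing) can be illustrated with a short sketch. The balance, rates, dates, and the actual/365 day-count convention are all hypothetical; the report does not state the actual figures or convention used:

```python
from datetime import date, timedelta

def accrued_interest(balance, coupon_rate, avg_fed_funds_rate,
                     signing_date, closing_date):
    """Interest credited under the sale agreement: (coupon - avg fed funds)
    applied to the balance over the accrual period. The actual/365 basis
    is an assumption, not stated in RTC's procedures."""
    start = signing_date + timedelta(days=45)   # accrual begins 45 days after signing
    end = closing_date - timedelta(days=1)      # and ends the day before closing
    days = (end - start).days + 1
    net_rate = coupon_rate - avg_fed_funds_rate
    return balance * net_rate * days / 365

# Hypothetical example: $100,000 balance, 8.0% coupon, 4.5% average fed funds
amount = accrued_interest(100000, 0.080, 0.045,
                          date(1994, 7, 1), date(1994, 11, 1))
print(round(amount, 2))  # 78 accrual days at a 3.5% net rate -> 747.95
```

The key point the sketch makes concrete is that the acquirer is credited only the spread between the loans' coupon and the federal funds rate, not the full coupon, over a window that starts well after signing.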
Third, there were no data available to compare the prices of the mortgage loans sold to minority acquirers with the prices of other loans sold by RTC, because fewer whole loans were available for sale once RTC’s securitization program started showing results around June 1991. Therefore, we focused on assessing the reasonableness of the process RTC established to price the mortgage loans, including the methodology used by RTC’s valuation contractors. To assess reasonableness, we discussed the methodology and models used to price the 1- to 4-family residential mortgage loans with RTC officials and the two valuation contractors. While both valuation contractors were cooperative in discussing the methodology in general, they were reluctant to discuss their pricing models in detail. The valuation contractors considered the specifics of these models to be proprietary because each firm had individually and confidentially developed its own model. This did not affect our assessment of the reasonableness of these models because we were able to determine what factors were considered in them. We also interviewed two of RTC’s three due diligence contractors to understand their role and responsibilities. Further, we reviewed RTC’s policies and procedures and the valuation contractors’ operating guidelines. To obtain additional perspectives on RTC’s process, we interviewed officials of Freddie Mac and Fannie Mae who were involved in valuing asset portfolios similar in type to RTC’s assets. To better understand how mortgage loans are valued, we also reviewed academic literature on mortgage loan valuation. Finally, we were contacted by seven minority acquirers and their advisers after they met with RTC and learned of our review. We subsequently met with them to understand their experiences in purchasing loans from RTC under the minority preference resolutions program. 
As a follow-up to that meeting, we also interviewed their valuation contractor to obtain information on the methodology used to assess the price of the loans. To accomplish our second objective, which was to determine the number and type of loans sold to minority acquirers, we obtained and analyzed, but did not independently verify, RTC transaction reports showing asset sales through the minority preference resolutions program. These reports identify the type and number of loans sold, as well as their quality, price, and purchaser. In addition, we also interviewed RTC officials regarding the reliability of the reports. We requested comments on a draft of this report from the Deputy and Acting Chief Executive Officer of RTC or his designee. On November 28, 1995, RTC’s Vice President for Asset Management and Sales provided a written response in which he concurred with our findings. These comments are reprinted in appendix I. We did our work between August 1994 and October 1995 in accordance with generally accepted government auditing standards. The pricing of mortgage loans is a difficult and complex process requiring the use of a sophisticated and technical methodology. RTC established a reasonable process to price mortgage loans that was anchored to agency and mortgage securities markets standards. This process provided for an independent valuation of 1- to 4-family residential mortgage loans that were offered for sale to minority acquirers of failed thrifts located in PMNs. It is first important to note that RTC did not price the loans itself; instead, in June 1994 it hired two independent valuation contractors experienced in mortgage securities markets to determine the price of each mortgage loan. Each valuation contractor was required to price the mortgage loans on an individual basis, rather than at a portfolio level, because the acquirers were allowed to purchase some, all, or none of the loans under the minority preference resolutions program. 
RTC also hired three due diligence contractors to preliminarily review each loan to determine whether it was eligible for sale under the minority preference resolutions program. To be eligible for sale under the program, the loan had to be a performing 1- to 4-family residential mortgage loan. The purpose of RTC’s due diligence loan file review was to secure essential information that could be used to evaluate the loans for sale. Some of the essential documents included the loan note, mortgage insurance certificate, title, appraisal, and credit and verification forms. The due diligence contractors were not required to make judgments about credit decisions or the loan’s salability. According to RTC’s valuation contractors, the pricing of the mortgage loans began upon receipt of the loan data files from RTC’s due diligence contractors. The valuation contractors were to review these data files for completeness and accuracy and to notify RTC of any errors or missing documents in the loan file. RTC told us that it generally resolved these deficiencies by requiring the due diligence contractors to update the loan file. The valuation contractors said that the loans were then stratified to determine whether individual loans conformed to secondary market standards. Using RTC’s stratification criteria, the two valuation contractors were to group the loans into three levels, referred to as “strats.” According to RTC, its stratification criteria were based on agency and secondary market standards and reflected the minority preference resolutions program guidelines. See table 1 for RTC’s criteria for the strat categories. Under stratification, the two valuation contractors assigned strat codes based on the loan data provided by the due diligence contractors. The valuation contractors’ pricing reports showed that they assigned the same strat for the majority of the loans. 
The two valuation contractors said that, in cases where there were missing loan data, the loans were considered to be of lower quality and were therefore assigned to a higher strat category. After loan stratification, both valuation contractors used standard mortgage-backed security methodologies to price each RTC mortgage loan. This was done to determine the mortgage loan’s market value as objectively as possible. The initial step under this approach was to assign each loan a benchmark price, which approximated the market value of a mortgage loan at a given point in time. For example, Freddie Mac’s 1-year adjustable rate mortgage price was generally used as the benchmark for adjustable rate mortgages. According to one valuation contractor, selecting an appropriate benchmark price is a critical step in this methodology. According to the valuation contractors, the agency benchmark price assigned to each mortgage loan was first determined by matching a loan’s characteristics to the most closely similar agency mortgage-backed security. Second, after the loan servicing fee was subtracted from the current rate on RTC’s mortgage loan, the loan’s interest rate and the agency’s interest rate were matched. For example, an RTC fixed-rate mortgage loan with an 8-percent interest rate net of loan servicing would be matched with an agency mortgage-backed security with an 8-percent fixed rate. Next, an equivalent benchmark price from the mortgage-backed security price database was selected. The two valuation contractors stated that, when determining the preliminary and final prices, they were required by RTC to use secondary market data from the close of business of the Wednesday prior to their receiving the loan data files. This pricing data, obtained from Knight-Ridder, was a composite of prices from seven different sources, updated daily. Thus, the pricing reflected the actual market value of the mortgage loans purchased at that time. 
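The benchmark-matching step just described (net the servicing fee out of the loan's rate, then select the agency security with the matching coupon from the Wednesday-close price table) can be sketched as follows. The coupons, prices, and servicing fee are invented for illustration; they are not actual Knight-Ridder quotes:

```python
def select_benchmark(loan_rate, servicing_fee, benchmark_prices):
    """Net the servicing fee out of the loan's coupon, then pick the agency
    security with the closest coupon from the benchmark price table.
    All values here are hypothetical, not actual market data."""
    net_rate = loan_rate - servicing_fee
    coupon = min(benchmark_prices, key=lambda c: abs(c - net_rate))
    return coupon, benchmark_prices[coupon]

# Hypothetical Wednesday-close agency fixed-rate prices, keyed by coupon (%)
table = {7.5: 99.10, 8.0: 100.75, 8.5: 102.20}
coupon, price = select_benchmark(loan_rate=8.25, servicing_fee=0.25, benchmark_prices=table)
print(coupon, price)  # 8.0 100.75
```

An 8.25-percent loan with a 0.25-percent servicing fee nets to 8 percent and is matched to the 8-percent agency security, mirroring the report's example of an 8-percent loan net of servicing.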
According to one valuation contractor, consistently using one date in time minimized subjectivity. Once the agency benchmark price was determined, adjustments for movement in interest rates and credit risk sensitivity were made. To determine the adjustments for interest rate risk, both valuation contractors said they used an analytical technique known as an option-adjusted spread model. This model priced a mortgage loan or mortgage-backed security by simulating many different future patterns of interest rates. The model then used these simulations and the specific characteristics of the mortgage loan to predict prepayments, which determined the cash flow of the mortgage. Finally, the model matched the predicted cash flow to the current mortgage prices, to determine the price for the mortgage in question. In addition to the adjustments made for interest rate risks, assessments for credit risk were made to estimate the probability of default. To determine the discounts for credit risk, both contractors analyzed each loan and assigned it a risk weight based on characteristics that affect risk. These risk characteristics include loan-to-value (LTV) ratio, geographic location, mortgage insurance, and delinquency status. Both valuation contractors agreed that an important variable in determining the severity of risk was the current LTV ratio, because it provided a reliable valuation of the collateral. In general, the higher the LTV, the greater the risk. After the risk weights were assigned, they were multiplied together to obtain the total credit risk. The contractors acknowledged that this part of the process was slightly subjective, but they agreed that this was an accepted technique in the secondary market. The final adjustments were for the strat category, unusual loan types (such as balloon mortgages where the balance of the loan was due in one lump sum on a specified date), and the fact that these were RTC loans from a failed thrift. 
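As a rough illustration of the adjustment arithmetic described above, the sketch below starts from a benchmark price, multiplies the per-characteristic risk weights together (as the report says the contractors did), and applies a final strat discount. The weight and discount values are invented for illustration; the contractors' actual coefficients were proprietary:

```python
def price_loan(benchmark_price, risk_weights, strat_discount):
    """Adjust an agency benchmark price for credit risk and strat category.

    risk_weights holds one factor per risk characteristic (LTV, geography,
    mortgage insurance, delinquency status); per the report, the individual
    weights are multiplied together to obtain the total credit-risk
    adjustment. All numbers here are hypothetical.
    """
    total_credit_risk = 1.0
    for weight in risk_weights.values():
        total_credit_risk *= weight
    return benchmark_price * total_credit_risk * (1 - strat_discount)

weights = {
    "ltv": 0.99,                 # higher LTV -> greater risk -> deeper discount
    "geography": 0.995,
    "mortgage_insurance": 1.0,   # insured: no discount
    "delinquency": 0.98,
}
price = price_loan(benchmark_price=101.50, risk_weights=weights, strat_discount=0.01)
print(round(price, 2))
```

Because the weights multiply, a loan that is weak on several characteristics at once is discounted more than the sum of the individual discounts would suggest, which matches the intent of compounding risk factors.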
The two valuation contractors also said that determining these adjustments was a subjective matter, but both believed that correct assessments of these discounts depended heavily on previous experience in valuing and marketing RTC assets—experience which both contractors possessed. Each valuation contractor provided RTC with each loan’s final price and strat category. After receiving the two reports, RTC averaged the two loan prices. The averaged price was offered to the minority acquirers as the final price. RTC’s data showed that, although the two contractors worked independently to price the 4,063 mortgage loans, there were fewer than 100 cases in which they differed on the final price of the mortgage loan. The difference in price was usually less than half a percent. However, in cases where they differed significantly, RTC required both valuation contractors to reprice the loans. We discussed the mortgage-backed securities approach used by RTC’s contractors with officials from Fannie Mae and Freddie Mac, who said that the methodology appeared to be reasonable. Specifically, the officials said that the methodology was similar to the approach they used to value mortgage loans and contained the elements necessary to value mortgage loans. For example, although officials at Fannie Mae and Freddie Mac would not discuss the specifics of their models because they are considered proprietary, they explained that measuring interest rate movement and credit risk sensitivity are very important steps in valuing mortgage loans. Seven minority acquirers and RTC were unable to agree on the mortgage loan prices. These acquirers believed that the mortgage loans were overpriced. They also believed that RTC’s pricing methodology did not establish the fair market value of the mortgage loans. They therefore discussed with RTC the possibility of using an alternative methodology to price the loans. 
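RTC's averaging of the two contractors' prices, with repricing required when they differed significantly, can be sketched as follows. The report does not define "significantly"; the 0.5-percent tolerance below is an illustrative assumption based only on the observation that differences were usually less than half a percent.

```python
def final_price(price_a, price_b, tolerance_pct=0.5):
    """Average the two contractors' prices for a loan. If they differ by
    more than tolerance_pct (as a percent of the average), flag the loan
    so both contractors can be asked to reprice it."""
    average = (price_a + price_b) / 2
    difference_pct = abs(price_a - price_b) / average * 100
    needs_repricing = difference_pct > tolerance_pct
    return average, needs_repricing

print(final_price(98.2, 98.4))  # small difference: averaged, not flagged
print(final_price(95.0, 99.0))  # large difference: flagged for repricing
```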
RTC decided not to use the alternative methodology proposed by the minority acquirers because it believed that the methodology being used established a fair market value for the loans and that none of the mortgage loans were overpriced. The alternative methodology proposed by the minority acquirers was similar to the asset valuation reviews (AVR) used by RTC in 1992. Under the AVR process, RTC hired independent contractors to review samples of assets to estimate the potential loss for each asset category held by failed thrifts. The AVR process computed the present value of such assets using a discount rate based on secondary markets and adjusted for risk-related factors such as loan documentation. The estimated recovery values were not determined for individual loans in a portfolio but rather for the category as a whole. In their efforts to demonstrate that the mortgage loans were overpriced, the seven minority acquirers contracted with a firm to complete an analysis of the mortgage loans RTC made available for sale under the minority preference resolutions program. While the approach of the minority acquirers’ contractor was somewhat similar to that used by RTC’s contractors, there were also fundamental differences. For example, RTC’s contractors stratified and assigned benchmarks to each loan, while the minority acquirers’ contractor stated that benchmarks were not determined for individual loans but rather for the portfolio as a whole. Additionally, RTC’s contractors and the minority acquirers’ contractor differed on the coefficients, which are risk weight factors used in calculating the loan price. As previously stated, the approach of the minority acquirers’ contractor was similar to RTC’s AVR process. In summary, to determine the mortgage loan price, the minority acquirers’ contractor said it used a discounted cash flow methodology based on the assets’ expected income and yields on mortgage trading in the secondary market.
The price was then adjusted for risk-related factors, such as the probability of default, loan quality, and document deficiencies. To determine the adjustments for movement in interest rates, prepayment speeds were estimated using the Wall Street consensus speeds for like mortgage loan rates. Cash flows for each loan type were calculated using loan characteristics and prepayment speeds. These cash flows were discounted to determine the market value of the loans. Finally, the minority acquirers’ contractor believed that a yield premium, to account for the fact that the loans were being provided in conjunction with an acquisition of marginal quality deposit liabilities, was also appropriate. The seven minority acquirers and their contractor contended that the AVR approach was an acceptable methodology to price the assets because RTC had used it in the past. However, an RTC official stated that RTC’s process for pricing mortgage loans had evolved over the years and that it no longer used the AVR approach. RTC believed that the current mortgage-backed security approach resulted in a better determination of fair market value and attempted to maximize total return on the disposition of a failed thrift’s assets, as required by law. RTC set aside about $3 billion in residential mortgage loans for possible sale to minority acquirers through its minority preference resolutions program. Between January 1994 and September 1995, RTC offered 16 pools of performing 1- to 4-family residential mortgage loans to the 14 minority acquirers who purchased failed thrifts located in PMNs. As of October 11, 1995, 11 minority acquirers had purchased 4,063 mortgage loans for $289.6 million. Table 2 provides detailed information on their 13 transactions. Additionally, our analysis of the final loan prices showed that, of the 4,063 mortgage loans, 64 percent, or 2,606, were sold for between 91 and 100 percent of book value, as shown in figure 1.
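The discounted cash flow approach described by the minority acquirers' contractor can be illustrated with a simplified annual-period sketch. Actual secondary-market models work with monthly cash flows, amortization schedules, and consensus prepayment curves; all figures and the constant-prepayment assumption below are hypothetical.

```python
def discounted_value(balance, note_rate, prepay_rate, discount_rate, years):
    """Project a mortgage's annual cash flows under a constant prepayment
    assumption and discount them at a secondary-market yield. Scheduled
    amortization is ignored for simplicity; the remaining balance is
    returned in the final year."""
    value = 0.0
    for year in range(1, years + 1):
        interest = balance * note_rate
        prepayment = balance * prepay_rate
        if year == years:
            prepayment = balance  # remaining balance repaid at the end
        cash_flow = interest + prepayment
        value += cash_flow / (1 + discount_rate) ** year
        balance -= prepayment
    return value

# A $100,000 loan at 8 percent, 10 percent annual prepayments, discounted
# at a 9-percent market yield over a 5-year horizon: prices below par.
print(round(discounted_value(100_000, 0.08, 0.10, 0.09, 5)))
```

A useful sanity check on this kind of model: when the note rate equals the discount rate, the loan prices at exactly par, regardless of the prepayment assumption.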
Finally, as of October 11, 1995, RTC had paid $4 million in accrued interest to the 11 minority acquirers who purchased mortgage loans and $1.4 million in accrued interest to 3 minority acquirers who decided not to exercise their option to purchase mortgage loans. RTC officials believe that paying the $1.4 million in accrued interest was the best alternative to resolving a contract dispute with the minority acquirers over the final pricing of the mortgage loans. We did not determine whether this practice was the best alternative to resolving the contract dispute because it was outside the scope of our assignment. We are sending copies of this report to other interested congressional committees and subcommittees, RTC’s Deputy and Acting Chief Executive Officer, and the Chairman of the Thrift Depositor Protection Oversight Board. Copies will be made available to others upon request. This report was prepared under the direction of Ronald L. King, Assistant Director, Government Business Operations Issues. Other major contributors to this report are listed in appendix II. If you have any questions, please contact me on (202) 736-0479. William McNaught, Economist
Pursuant to a legislative requirement, GAO reviewed the Resolution Trust Corporation’s (RTC) efforts during fiscal year (FY) 1995 to sell performing assets to acquirers of failed thrifts under the Minority Preference Resolutions Program. GAO found that: (1) RTC established a reasonable process for the independent valuation of residential mortgage loans that were offered for sale to minority acquirers; (2) RTC contracted out the initial phase of the loan pricing process to be fair to minority acquirers while maximizing the total return on asset disposition; (3) 11 of the 14 minorities who bought thrifts from RTC under the Minority Preference Resolutions Program purchased 4,063 residential mortgage loans during FY 1995 for $289.6 million; (4) the two valuation contractors priced the mortgage loans using a methodology that adjusted for interest rate movements and credit risk and that was consistent with Freddie Mac’s and Fannie Mae’s pricing methodologies; and (5) RTC did not adopt the loan pricing methodology proposed by minority acquirers who believed the mortgage loans were overpriced, since it believed the existing methodology established a fair market value for the loans.
Labor, established as a Department in 1913, administers and enforces a variety of federal labor laws guaranteeing workers’ rights to a workplace free from safety and health hazards, a minimum hourly wage and overtime pay, family and medical leave, freedom from employment discrimination, and unemployment insurance. Labor also protects workers’ pension rights; provides for job training programs; helps workers find jobs; works to strengthen free collective bargaining; and keeps track of changes in employment, prices, and other national economic measures. Although Labor seeks to assist all Americans who need and want to work, special efforts are made to meet the unique job market problems of youths, older workers, economically disadvantaged and dislocated workers, and other groups. In fiscal year 1997, Labor has an estimated budget of $34.4 billion and is authorized 16,614 full-time-equivalent (FTE) staff-years. About three-fourths of Labor’s budget is composed of mandatory spending on income maintenance programs, such as the unemployment insurance program. The administration’s fiscal year 1998 budget request is $37.9 billion in budget authority and 17,143 FTE staff. The budget request includes $12 billion for Labor’s major budget themes—an increase of $1.7 billion over fiscal year 1997. Included in the request for fiscal year 1998 is $750 million in mandatory funding for a new welfare-to-work jobs program. Labor’s many program activities fall into two major categories: enhancing workers’ skills through job training and ensuring worker protection. Figure 1 shows the organizational structure of the Department. Labor’s workforce development responsibilities are housed in the Employment and Training Administration (ETA) and the Veterans’ Employment and Training Service. Together, they have a fiscal year 1997 budget of about $6.5 billion and 1,595 FTEs.
Labor’s employment training programs include multiple programs authorized by the Job Training Partnership Act (JTPA), such as those for economically disadvantaged adults and youth and workers who lose their jobs because of plant closings or downsizing, and Job Corps, an intensive residential program for severely disadvantaged youth. Table 1 shows Labor’s appropriations and staff-year spending for fiscal year 1997. Labor’s worker protection responsibilities are carried out by several agencies and administrations. Together, these units have 9,020 FTEs and a budget of $915 million for fiscal year 1997. It is uncertain how this fragmented system of employment training programs will be able to meet the employment demands of those affected by the recent welfare reform legislation. A major challenge for Labor is to facilitate workforce development within the context of a conglomeration of programs operated by Labor and 14 other federal departments and agencies. Table 2 shows the number of different employment training programs that existed in fiscal year 1995, their target groups, and fiscal year 1995 appropriations. For example, we found that 9 programs targeting economically disadvantaged individuals had similar goals; often served the same categories of people; and provided many of the same services using separate, often parallel, delivery structures. Consolidating these programs is one option the Congress could consider to reduce the deficit. Alternatively, the Congress could spend the same amount of money and serve more people. Further, consolidating similar employment training programs could result in improved opportunities to increase effectiveness in service delivery. For example, consolidating programs could improve the assistance provided to the target populations because individuals would be more likely to receive the mix of services needed to achieve training or placement goals. And, getting needed services might be less confusing and frustrating to clients, employers, and administrators.
In anticipation of federal consolidation legislation, and to improve their local service delivery, many states are moving ahead with their own consolidation plans. Labor has engaged in several efforts to assist states in these consolidation efforts. For example, Labor has promoted the development of “one-stop career centers.” These centers are designed to transform an array of employment training programs into an integrated service delivery system for job-seekers and employers. Labor expects them to identify the jobs that are available, the skills they require, and the institutions that have proven track records of preparing people for new work. This information will probably be available largely through computer links. As of February 1996, 54 states and jurisdictions had received planning or implementation grants to establish one-stop centers. In addition, Labor and the Department of Education jointly administer the school-to-work program—a program designed to build integrated learning and employment opportunities for youth. The proposed fiscal year 1998 budget includes $200 million for each agency to ensure that “seed capital” grants to states and communities continue. Many employment training programs, however, lack outcome data. In our review of 62 programs for which economically disadvantaged individuals were eligible, we found that fewer than half of the programs obtained data on whether or not participants obtained jobs after they received services. To its credit, Labor has collected much basic information, including outcome data, on its major employment training programs, such as Job Corps and other programs funded under JTPA. It has also conducted some evaluations to assess the impact of its programs. However, our reviews have shown that existing performance measures and studies still do not provide the kind of information that would provide confidence that funds are being spent to the greatest advantage of participants.
Our reviews of the Job Corps program illustrate some of the weaknesses in current data collection and evaluation efforts. Job Corps is a national employment training program that provides severely disadvantaged youth with comprehensive services, generally in a residential setting, at a cost of about $1 billion a year to serve about 66,000 participants. Job Corps has a list of performance measures on which the over 100 individual centers are ranked each year. Moreover, to demonstrate the effectiveness of Job Corps, Labor cites the positive results of a national impact study. We have raised questions, however, about how valuable the information from these sources is in determining whether the high costs are justified by program outcomes. Job Corps reported that, nationally, 59 percent of its students obtained jobs in fiscal year 1993. However, when we surveyed a sample of employers identified in Job Corps records, we were left with serious concerns about the validity of reported job placement information. Despite Job Corps’ placement verification procedures, we found that about 15 percent of the reported placements in our sample were potentially invalid. In addition, we found that about half of the jobs obtained by students from the sites we visited were low-skill jobs—such as fast food worker—unrelated to the training provided by Job Corps. Labor initiated a major impact evaluation of the Job Corps program. This study, the initial results of which are expected to be available in 1998, should be extremely useful in informing decisions about the future of the program. The passage of the recent welfare reform legislation is likely to have an impact on the structure and delivery of employment training programs at the state and local levels. Because of the work requirements imposed by that legislation, many individuals formerly on welfare will be needing job assistance and training services.
The responsibility for service delivery lies with state and local offices, yet Labor has an important role because of its expertise and experience. Labor can encourage and facilitate, as appropriate, the integration of employment training services that may be required to meet the needs of the welfare population. How to serve those individuals transitioning from welfare to work, while at the same time meeting the service needs of dislocated workers and other client populations, is a challenge for Labor. Concerns have been raised about the availability of appropriate jobs, the level of training and skills required for jobs, the impact of competition for low-skilled jobs on the wages of low-skilled workers, and the extent to which the current employment training system can absorb and provide needed services to the expanded welfare population. In addition, it is critical that Labor and other agencies providing services consider the employment training needs of welfare clients in the process of providing job placement assistance. Our work on promising employment training practices shows that providing occupational skills alone is not the answer. Equally, or perhaps even more, important are employability skills—the ability not only to get a job but to keep a job. Concerns have been raised that, in the rush to place welfare clients in jobs, if the appropriate mix of skills is not provided, many clients potentially will lose their jobs and go back on welfare. Labor will need to monitor the situation and be responsive to the needs of states and localities as they transition individuals from welfare to work. For example, our work on identifying strategies used by successful employment training projects is the type of information that can be shared with states to assist their efforts.
When we testified before this Subcommittee almost 2 years ago about the overall federal role in worker protection, we stressed the need for Labor to change its approach to one that was more service oriented and made more efficient use of agency resources. Some evidence exists that Labor has moved in that direction, especially in OSHA. But this change has not been without controversy, and further opportunities exist to develop alternative regulatory approaches. In addition to the overall need to consider alternatives to current regulatory approaches, Labor faces regulatory challenges in two specific areas: (1) redesigning the wage determination process under the Davis-Bacon Act and (2) as a result of recent legislative action, developing and enforcing regulations regarding portability of employer-provided health insurance. Alternative regulatory approaches could improve service to employers as well as make enforcement less of a “gotcha” exercise and more of one that recognizes good faith compliance efforts. These changes would also have the potential for improving the way limited agency resources are used for regulatory purposes. Changes in OSHA’s regulatory approach illustrate Labor’s action in this direction. In May 1995, the administration announced three regulatory reform initiatives to “enhance safety, trim paperwork, and transform OSHA.” This action was considered necessary because, despite OSHA’s efforts, the number of workplace injuries and illnesses was still too high, with over 6,000 workers dying each year from workplace injuries and 6 million suffering nonfatal workplace injuries. In addition, the administration acknowledged that the public saw OSHA as driven too often by numbers and rules, not by smart enforcement and results. The first initiative, the “New OSHA,” called for OSHA to change its fundamental operating paradigm from one of command and control to one that provides employers a real choice between partnership and a traditional enforcement relationship.
The second initiative, “Common Sense Regulation,” called for a change in approach by identifying clear and sensible priorities, focusing on key building block rules, eliminating or updating and clarifying out-of-date and confusing standards, and emphasizing interaction with business and labor in the development of rules. The third initiative, “Results, Not Red Tape,” called for OSHA to change the way it works on a day-to-day basis by focusing on the most serious hazards and the most dangerous workplaces and by insisting on results instead of red tape. These initiatives raise several questions, however: What data should be used to identify companies with high numbers of injuries (workers’ compensation claims, claims rates, or other data)? Has the effectiveness of the pilot effort been demonstrated well enough to extend it nationwide? Has the emphasis on partnerships been at the expense of effective enforcement actions against companies continuing to violate the standards? Further opportunities exist for OSHA to leverage its resources and demonstrate “smarter” enforcement. For example, in a recent study, we found that the federal government awarded $38 billion in federal contracts during fiscal year 1994 to at least 261 corporate parent companies with worksites where OSHA had proposed significant penalties for violations of safety and health regulations. We pointed out that agencies could use awarding federal contracts as a vehicle to encourage companies to improve workplace safety and health or—if companies refuse to improve working conditions—debar or suspend federal contractors for violation of safety and health regulations. One of our recommendations was that OSHA work with the General Services Administration and the Interagency Committee on Debarment and Suspension on policies and procedures regarding how safety and health records of federal contractors could be shared to help agency awarding and debarring officials in their decisionmaking.
Labor recently told us that some discussions have occurred between OSHA and the Interagency Committee, but final decisions have not been reached on any new policies and procedures. Weaknesses in Labor’s Davis-Bacon wage determination process can result in prevailing wage rates that are, in fact, higher than those prevailing in the area—thus artificially inflating federal construction costs. Labor has acknowledged weaknesses in its wage determination process that call into question the integrity and accuracy of some of its wage determinations. For this reason, it requested funds to develop, evaluate, and implement alternative reliable methodologies or procedures that would yield accurate and timely wage determinations at a reasonable cost. Labor’s fiscal year 1997 budget request included $3.7 million for that purpose. The conference report accompanying the Department’s appropriation requested that we review these implementation activities to determine whether they will achieve their goals. We will do so and report our findings to the Appropriations Committees, as requested, when Labor has completed its work. Labor took some actions that we recommended in our May 1996 report as a short-term solution to reduce its vulnerability to the use of fraudulent or inaccurate data in the wage determination process. These actions, including increased verification of information provided by employers, will at least reduce some of the vulnerabilities of the existing process. The larger challenge facing Labor, however, is to examine and substantially improve the overall process. The health insurance portability provisions enacted in 1996 will make it much easier for workers to change jobs and maintain health care coverage. And, according to Labor, millions more who have been unwilling to leave their job for a better one out of concern that they would lose their health care coverage would also benefit.
The Congress set a very short timeframe for implementing these protections: Although the act was only signed into law on August 21, 1996, the regulations to carry out the portability provisions must be issued by April 1, 1997. Labor is working with the Department of Health and Human Services and the Treasury Department to meet that date because these provisions—called “shared provisions”—involve overlapping responsibilities of the three departments. In a statement before the Senate Committee on Labor and Human Resources in February of this year, the Assistant Secretary of Labor for PWBA said the three departments are “on track” to meet that goal. The regulations issued by April 1 will target the preexisting condition limitation and certification of previous health coverage portions of the portability provisions. The regulations will reflect comments received in response to a December notice in the Federal Register and will be fully effective when issued. Nevertheless, Labor intends to ask for public comments after they are issued and consider the need for any changes on the basis of the comments. Work will continue on other portions of the portability provisions after publication of the first set of regulations. Managing for results, as envisioned by the Government Performance and Results Act (GPRA) and related legislation, requires (1) clearly defined missions, (2) long-term strategic goals, (3) annual performance goals, and (4) accurate and audited financial information about the costs of achieving mission outcomes. GPRA is aimed at improving program performance. It requires that agencies consult with the Congress and other stakeholders to clearly define their missions. It also requires that they establish long-term strategic goals, as well as annual goals linked to them. They must then measure their performance against the goals they have set and report publicly on how well they are doing. In addition to ongoing performance monitoring, agencies are expected to perform discrete evaluation studies of their programs and to use information obtained from these evaluations to improve the programs.
In moving toward an increased emphasis on program performance and results, Labor has begun developing an agencywide plan that describes its mission, goals, and objectives. According to the Office of Management and Budget (OMB), developing an overall mission and goals is a formidable challenge for Labor because of the diversity of the functions performed by its different offices. OMB officials have told us that the different offices in Labor have developed draft strategic plans that describe their respective goals and performance indicators. For example, ETA’s plan describes its mission, its strategies for achieving its employment training objectives, and the measures it will use to assess program outcomes. These plans were submitted to OMB with the Department’s most recent budget submission. Although Labor is not required to submit the strategic plans to the Congress and OMB until September 1997, this year’s early submission was used to obtain informal review and feedback on the draft plans. According to OMB, Labor is committed to developing a strategic approach that includes measurable outcomes. OMB’s review of Labor’s plans indicated that some parts of the Department are doing better than others, especially in identifying measures to assess results. At the same time, OMB recognizes that developing such measures may be more difficult for some offices than for others because of the differences in the specificity of goals and difficulty of quantifying some outcomes. According to Labor, it is continuing to make progress in meeting GPRA legislative mandates. Over the next few months, Labor officials will continue discussions with OMB as well as consultations with the Congress and the stakeholders. OSHA, as one of the GPRA pilot agencies, has been involved in a number of activities geared toward making the management improvements envisioned by the act. 
It has developed a draft strategic plan that identifies its performance goals and measures, and it has been working to develop a comprehensive performance measurement system that will focus on outcomes to measure its own effectiveness. OSHA and state representatives have discussed the application of this comprehensive system to OSHA’s monitoring of state safety and health programs. Although we have not reviewed the quality of OSHA’s performance measures, these types of planning and assessment efforts are consistent with those set out in GPRA to promote a results orientation in reviewing programs. This system, when fully implemented, will also be responsive to recommendations we made in a February 1994 report. Labor’s decentralized organizational structure makes adopting the better management practices described in GPRA quite challenging. Labor has 24 component offices or units, with over 1,000 field offices, to support its various functional responsibilities. Establishing departmental goals and monitoring outcome measures is a means by which the Department can ensure that its operations are working together toward achieving its mission. The CFO Act was designed to remedy decades of serious neglect in federal financial management operations and reporting. It created a foundation for improving federal financial management and accountability by establishing a financial management leadership structure and requirements for long-range planning, audited financial statements, and strengthened accountability reporting. The act created chief financial officer positions at each of the major agencies, most of which were to be filled by presidential appointment. Under the CFO Act, as expanded in 1994, Labor, as well as all other 23 major agencies, must prepare an annual financial statement, beginning in fiscal year 1996. Since 1986, Labor has produced audited departmentwide financial statements, thus complying with this requirement of the CFO Act. 
Producing audited financial statements that comply with the act involves obtaining an independent auditor’s opinion on the Department’s financial statements, report on the internal control structure, and report on compliance with laws and regulations. By meeting these requirements, Labor has been instilling accountability and oversight into its financial activities. Labor also has a chief financial officer, in compliance with the act. The Paperwork Reduction Act of 1995 is the overarching statute dealing with the acquisition and management of information resources by federal agencies. The Clinger-Cohen Act of 1996 reinforces this theme by elaborating on requirements that promote the use of information technology to better support agencies’ missions and to improve program performance. Among their many provisions are requirements that agencies set goals, measure performance, and report on progress in improving the efficiency and effectiveness of information management generally—and specifically, the acquisition and use of information technology. The Paperwork Reduction Act is based on the concept that information resources should support agency mission and performance. An information resources management plan should delineate what resources are needed, as well as how the agency plans to minimize the paperwork burden on the public and the cost to the government to collect the information. The Clinger-Cohen Act sets forth requirements for information technology investment to ensure that agencies have a system to prioritize investments. Clinger-Cohen also requires that a qualified senior-level chief information officer be appointed to guide all major information resource management activities. Labor has made some efforts to improve its information management systems; for example, it has appointed a chief information officer. OMB, in 1996, raised a question regarding this individual’s also serving as the Assistant Secretary for Administration and Management. 
The Clinger-Cohen Act requires that information resources management be the primary function of the chief information officer. Because it is unclear whether one individual can fulfill the responsibilities required by both positions, OMB has asked Labor to evaluate its approach and report back to OMB in a year. In past work, we have identified weaknesses in Labor’s information management practices. For example, our review of Labor’s field offices demonstrated the lack of centrally located information on key departmental functions, such as field office locations, staffing, and costs. We eventually identified 1,074 field offices, having constructed a profile of information about these field offices from information Labor provided. But constructing this profile was difficult. In response to our request for this information, Labor’s Office of the Assistant Secretary for Administration and Management queried the individual components and assembled a list of 1,037 field offices. We identified other offices using documents Labor provided, which brought the total to 1,056. When Labor reviewed a draft of the report, it amended the list again to add 18 more offices and bring the total to 1,074. Consequently, we had to report as a limitation of our findings that there was no assurance that all the information provided used consistent definitions and collection methods. In our report on Labor’s Davis-Bacon wage determination process, we also identified limited computer capabilities as a reason for the process’ vulnerability to use of fraudulent or inaccurate data. We found a lack of both computer software and hardware that could assist wage analysts in their reviews. For example, Labor offices did not have computer software that could detect grossly inaccurate data reported in Labor’s surveys to obtain wage data. 
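Software of the kind the report says Labor lacked, able to flag grossly inaccurate survey data, could be as simple as a screen that compares each reported wage with the sample median. A minimal sketch; the thresholds and sample data are illustrative assumptions, not Labor's criteria.

```python
from statistics import median

def flag_suspect_wages(wages, low=0.5, high=2.0):
    """Flag reported hourly wages below half or above twice the sample
    median, a simple screen for grossly inaccurate survey responses.
    The 0.5x/2.0x thresholds are illustrative, not Labor's actual rules."""
    m = median(wages)
    return [w for w in wages if w < m * low or w > m * high]

# Hypothetical survey responses: the $2.00 and $190.00 entries stand out.
reported = [18.50, 19.25, 20.00, 21.75, 2.00, 190.00]
print(flag_suspect_wages(reported))  # [2.0, 190.0]
```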
And the hardware was so outdated that the computers had too little memory to store historical data on prior wage determinations, which would have allowed wage analysts to compare current data with prior recommendations for wage determinations in a given locality. In addition, Job Corps real property records were not reconcilable to Job Corps contractor reports. As a result, there was insufficient accountability for Job Corps real property expenditures.

This year, we added two new areas to our “high-risk” issues, both of which apply to Labor as well as to all other government agencies. The first area, information security, generally involves an agency’s ability to adequately protect information from unauthorized access. Ensuring information security is an ongoing challenge for Labor, especially given the sensitivity of some of the employee information being collected. The second area involves the need for computer systems to be changed to accommodate dates beyond the year 1999. This “year 2000” problem stems from the common practice of abbreviating years by their last two digits. Thus, miscalculations in all kinds of activities—such as benefit payments—could occur because a computer system would interpret 00 as 1900 instead of 2000. Labor, along with other agencies that maintain date-dependent systems, faces the challenge of developing strategies to deal with this potential problem area in the near future.

Labor’s programs touch the lives of nearly every American because of the Department’s responsibilities for employment training, job placement, and income security for workers when they are unemployed, as well as workplace conditions. Labor’s mission is an urgent one. Each day or week or year of unemployment or underemployment is one too many for individuals and their families. Every instance of a worker injured on the job or not paid legal wages is one that should not occur. 
Every employer frustrated in attempts to find competent workers or to understand and comply with complex or unclear regulations contributes to productivity losses our country can ill afford. And every dollar wasted in carrying out the Department’s mission is one we cannot afford to waste. Labor currently has a budget of about $34 billion and about 16,000 staff to carry out its program activities. Over the years, however, our work has questioned the effectiveness of these programs and called for more efficient use of these substantial resources. Like other agencies, Labor must focus more on the results of its activities and on obtaining the information it needs for a more focused, results-oriented management decision-making process. GPRA and the CFO, Paperwork Reduction, and Clinger-Cohen Acts give Labor the statutory framework it needs to manage for results. Labor has begun to improve its management practices in ways that are consistent with that legislation, but implementation is not yet far enough along to fully yield the benefits envisioned. We are hopeful that the changes Labor is making in its approach to management will help it better address the two challenges we have identified: developing employment skills through programs that meet the needs of a diverse workforce in the most cost-effective way and effectively ensuring the well-being of the nation’s workers while reducing the burden of providing that protection.

Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or Members of the Subcommittee might have. For more information on this testimony, call Harriet C. Ganson, Assistant Director, at (202) 512-9045. Joan Denomme and Jacqueline Harpp also contributed to this statement.

Employment Training: Successful Projects Share Common Strategy (GAO/HEHS-96-108, May 7, 1996).
Job Corps: High Costs and Mixed Results Raise Questions About Program’s Effectiveness (GAO/HEHS-95-180, June 30, 1995). 
Multiple Employment Training Programs: Information Crosswalk on 163 Employment Training Programs (GAO/HEHS-95-85FS, Feb. 14, 1995).
Multiple Employment Training Programs: Major Overhaul Needed to Reduce Costs, Streamline the Bureaucracy, and Improve Results (GAO/T-HEHS-95-53, Jan. 10, 1995).
OSHA: Potential to Reform Regulatory Enforcement (GAO/T-HEHS-96-42, Oct. 17, 1995).
Davis-Bacon Act: Process Changes Could Raise Confidence That Wage Rates Are Based on Accurate Data (GAO/HEHS-96-130, May 31, 1996).
Managing for Results: Using GPRA to Assist Congressional and Executive Branch Decisionmaking (GAO/T-GGD-97-43, Feb. 12, 1997).
Information Technology Investment: Agencies Can Improve Performance, Reduce Costs, and Minimize Risks (GAO/AIMD-96-64, Sept. 30, 1996).
Information Management Reform: Effective Implementation Is Essential for Improving Federal Performance (GAO/T-AIMD-96-132, July 17, 1996).
Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996).
Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology (GAO/AIMD-94-115, May 1994).
GAO discussed the major challenges the Department of Labor faces in achieving its mission, focusing on: (1) Labor's efforts to provide effective employment and training programs that meet the diverse needs of its target populations in a cost-efficient manner; (2) Labor's efforts to ensure worker protection within a flexible regulatory structure; and (3) how Labor's ability to meet these challenges would be enhanced by the improved management envisioned by recent legislation. GAO noted that: (1) although Labor has historically been the focal point for workforce development activities, it faces the challenge of meeting those goals within the context of an uncoordinated system of multiple employment and training programs operated by numerous departments and agencies; (2) in fiscal year 1995, 163 federal employment training programs were spread across 15 departments and agencies (37 programs were in Labor), with a total budget of over $20.4 billion; (3) although GAO has not re-counted the programs and appropriations, GAO is confident that the same problem exists; (4) rather than a coherent workforce development system, there is a patchwork of federal programs with similar goals, conflicting requirements, overlapping target populations, and questionable outcomes; (5) comprehensive legislation that would have addressed this fragmentation was considered but not passed by the 104th Congress; (6) in the absence of consolidation legislation, Labor has gone ahead with some reforms, such as planning grants for one-stop career centers, but the actions it has taken have not been enough to fix the problems; (7) passage of the recent welfare reform puts even greater demands on an employment training system that appears unprepared to respond; (8) a second major challenge for Labor is to develop regulatory strategies that ensure the well-being of the nation's workers in a less burdensome, more effective manner; (9) Labor has made some changes since GAO last testified, which are perhaps 
best illustrated by actions at the Occupational Safety and Health Administration (OSHA), such as its partnership initiatives with companies, but OSHA's actions have not been without controversy, and substantial challenges remain there and at other Labor components with worker protection responsibilities; (10) congressional action poses new challenges in the worker protection area as well; (11) Labor has committed to redesigning its Davis-Bacon wage determination process with additional funds appropriated by the Congress; (12) Labor also must issue and enforce regulations to implement the new health care portability law; and (13) in meeting these mission challenges, Labor will need to become more effective at managing its organization.
College students have several options for obtaining health insurance. They may obtain private health insurance through group market plans offered by employers, colleges, and other groups or through individual market plans. In addition, some college students may obtain coverage through public health insurance programs, such as Medicaid or the State Children’s Health Insurance Program (SCHIP). College students may obtain health insurance through employer-sponsored group market plans, which are plans employers offer to their employees and their dependents. Under these plans, employers typically subsidize a share of employees’ premiums for health insurance, and premiums are calculated based on the risk characteristics of the entire group. To offer health insurance, employers either purchase coverage from an insurance carrier or fund their own plans. All plans purchased from insurance carriers must meet state requirements, which vary by state. For example, some states require employer-sponsored plans purchased from insurance carriers to offer coverage to dependents. Although requirements for dependent coverage vary by state, plans have traditionally offered health insurance coverage for dependents through age 18, and have generally continued coverage for dependents through age 22 only if they attend college full-time. Under federal law, college students who have lost eligibility for dependent coverage under a parent’s employer-sponsored insurance plan may be able to use provisions in COBRA to continue their health insurance for a limited period of time. Specifically, COBRA allows individuals such as college students who have lost eligibility for dependent coverage the option of purchasing up to 36 months of continuation coverage under the employer-sponsored plan. 
COBRA does not require employers to pay for or subsidize this continuation coverage, which can be expensive compared with the subsidized premiums that employees and their dependents may be accustomed to paying for employer-sponsored coverage. COBRA permits employers to charge 100 percent of the premium, plus an additional 2 percent administrative fee. College students may obtain health insurance through health insurance plans offered by other groups such as their college. Colleges offer health insurance plans to students because they have an interest in maintaining the health of their students and helping them achieve their educational objectives. These plans also can help students avoid high medical bills. To offer a health insurance plan, colleges either contract with an insurance carrier or fund their own plans. Unlike enrollees of employer-sponsored plans, those enrolled in student insurance plans typically pay the full premium for coverage. To make decisions about the plan’s eligibility criteria, benefits, and premiums, colleges typically convene a student health insurance committee, which generally includes college administrators, student health center administrators, and student representatives. These committees decide how the student insurance plan will coordinate with a college’s student health center, if one exists. College student health centers vary greatly in the services they provide—some offer limited services from one nurse, and others offer extensive services from multiple specialists. The committees may also consider college student insurance program standards issued by ACHA. Among other things, these standards suggest that colleges require students to have health insurance as a condition of enrollment, and that student insurance plans provide an appropriate level of benefits, including coverage of preventive services and mental health services and coverage for catastrophic illnesses or injuries. 
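The COBRA continuation-premium rule noted earlier in this section is a simple calculation; a minimal sketch follows, in which the dollar amounts are hypothetical illustrations rather than figures from the report:

```python
# Hedged sketch of the COBRA continuation-premium rule: the plan may
# charge 100 percent of the full group premium plus a 2 percent
# administrative fee. All dollar amounts below are hypothetical.

def cobra_monthly_premium(full_group_premium: float,
                          admin_fee_rate: float = 0.02) -> float:
    # Up to 102 percent of the full group premium may be charged.
    return full_group_premium * (1 + admin_fee_rate)

# Suppose the full monthly premium for dependent coverage is $400,
# of which the employer had been subsidizing $300.
employee_share_before = 400.0 - 300.0      # $100 while covered as a dependent
cobra_cost = cobra_monthly_premium(400.0)  # about $408 under continuation coverage

# The jump from roughly $100 to $408 per month illustrates why
# continuation coverage can be expensive relative to the subsidized
# premiums enrollees were accustomed to paying.
```

The point of the sketch is only the 102 percent ceiling; actual premiums depend on the employer's plan.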
College students may also obtain health insurance through individual market plans, which are plans sold by insurance carriers to individuals who do not receive coverage through an employer, college, or other group. Because these plans are offered by insurance carriers, the plans must meet state requirements, including those regarding eligibility for dependent coverage. Individuals purchasing a health insurance plan in the individual market typically pay the full cost of their health care premium. Insurance carriers who sell plans in the individual market are typically allowed to review the health status of each individual applying for insurance. Unlike the employer-sponsored group market where premiums are based on the risk characteristics of the entire group, premiums for individual market coverage are based on factors associated with differences in each applicant’s expected health care costs, such as health status, age, and gender. Furthermore, applicants for individual market coverage may be rejected. Some college students may be able to obtain health insurance in the individual market as a result of protections established by HIPAA. Specifically, HIPAA protects eligible individuals, including college students who have exhausted COBRA continuation coverage, by requiring insurance carriers to offer individual market plans without a waiting period for coverage of preexisting conditions. HIPAA also protects eligible college students who were previously and continuously covered by a group market plan and are seeking coverage under a different group market plan. For these individuals, HIPAA requires insurance carriers to limit the use of waiting periods for coverage of preexisting conditions to no more than 12 months. In addition to private sources of health insurance, college students may obtain health insurance coverage through public health insurance programs, such as Medicaid or SCHIP. 
Some college students may have coverage through Medicaid, a joint federal-state program that finances health care coverage for certain low-income families, children, pregnant women, and individuals who are aged or disabled. Federal law requires states to extend Medicaid eligibility to children aged 6 through 18 in families with incomes at or below the federal poverty level. Some college students may have coverage through SCHIP, which provides health care coverage to low-income children through age 18 who live in families whose incomes exceed their state’s eligibility threshold for Medicaid and who do not have insurance through another source.

This estimate of 1.7 million uninsured college students aged 18 through 23 in 2006 is within plus or minus 133,000 of the population value at a 95 percent confidence level. Compared with college students aged 18 through 23, young adults not enrolled in college were more than twice as likely to be uninsured. Specifically, about 42 percent of nonstudents aged 18 through 23 were uninsured in 2006. Of the 1.7 million college students aged 18 through 23 who were uninsured in 2006, certain groups of students were more likely than others to be uninsured; and uninsured students incurred from $120 million to $255 million in uncompensated care for non-injury-related medical events in 2005. In particular, we found that part-time students, older students, and students from families with lower incomes were more likely than other groups of students to be uninsured in 2006. Of the 1.7 million college students aged 18 through 23 who were uninsured in 2006, certain groups of students—including part-time students, older students, and students from families with lower incomes—were more likely than others to be uninsured. According to our analysis of CPS data for 2006, we found that part-time students were more likely to be uninsured than were full-time students. 
Specifically, 31 percent of part-time students aged 18 through 23 were uninsured in 2006, compared with 18 percent of full-time students of the same age. In addition, CPS data show that older college students—those aged 22 and 23—were more likely to be uninsured in 2006 than younger students aged 18 through 21. Specifically, about 35 percent of college students aged 23 and 25 percent of college students aged 22 were uninsured in 2006, in comparison with 16 to 19 percent of college students aged 18 through 21 who were uninsured in 2006. (See fig. 2.) College students of certain racial and ethnic backgrounds—specifically, Hispanic, black, and Asian students—were more likely to be uninsured than white students in 2006, according to our analysis of CPS data. Specifically, we found that 38 percent of Hispanic, 29 percent of black, and 26 percent of Asian college students aged 18 through 23 were uninsured in 2006—in contrast with 15 percent of white college students of this age group who were uninsured in 2006. (See fig. 3.) These differences among uninsured college students from different racial and ethnic backgrounds are consistent with characteristics of the uninsured found in the general U.S. population. According to the U.S. Census Bureau, Hispanic, black, and Asian individuals were more likely to be uninsured in 2006 than were whites. In 2006, college students reporting lower family incomes were more likely to be uninsured than college students reporting higher incomes that year. Specifically, according to our analysis of CPS data, the average family income for uninsured college students aged 18 through 23 was about $52,000 in 2006, whereas the average family income for insured college students was significantly higher—about $95,000. This difference in income among uninsured college students is consistent with characteristics of the uninsured found in the general U.S. population. According to the U.S. 
Census Bureau, the likelihood of having health insurance rises with income. Based on our analysis of CPS data, we also found that college students aged 18 through 23 from states in the West and South of the country were more likely to be uninsured in 2006 than students from states in the Northeast and the Midwest. Specifically, about 22 percent of college students aged 18 through 23 from states in the West and about 23 percent of students from states in the South were uninsured in 2006, whereas about 15 percent of college students from states in the Midwest and 18 percent of students from states in the Northeast were uninsured that year, according to our analysis of CPS data for 2006. These differences among uninsured college students from different regions are consistent with characteristics of the uninsured found in the general U.S. population. According to the U.S. Census Bureau, individuals from states in the South and West were more likely to be uninsured in 2006 than were individuals from states in the Midwest and Northeast. According to our analysis of MEPS data, uninsured college students incurred from $120 million to $255 million in uncompensated care for non-injury-related medical events during 2005. About 18 percent of uninsured college students aged 18 through 23 incurred uncompensated care in 2005, according to our analysis of MEPS data. Most of the charges for non-injury-related uncompensated care incurred in 2005 by uninsured college students were through visits to office-based providers and hospital emergency rooms. Uninsured college students may also incur uncompensated care for medical events related to injuries that our estimate does not reflect because we could not reliably determine the cost of this care. As a result, our estimate understates the total amount of uncompensated care incurred by uninsured college students in 2005. 
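The sampling-error statements in this section (for example, an estimate within plus or minus 133,000 of the population value at a 95 percent confidence level) follow the standard normal-approximation construction. A minimal sketch, in which the standard error is back-derived from the reported margin purely for illustration:

```python
# Minimal sketch (not GAO's actual computation) of the 95 percent
# confidence interval construction behind statements such as
# "within plus or minus 133,000 of the population value."

Z_95 = 1.96  # two-sided 95 percent critical value, normal approximation

def margin_of_error(standard_error: float, z: float = Z_95) -> float:
    # Half-width of the confidence interval around the point estimate.
    return z * standard_error

# Point estimate from the text: 1.7 million uninsured college students.
point_estimate = 1_700_000

# Implied standard error, back-derived from the reported margin
# (roughly 67,900); in practice it comes from the CPS survey design.
standard_error = 133_000 / Z_95

moe = margin_of_error(standard_error)
interval = (point_estimate - moe, point_estimate + moe)
# interval is about (1,567,000, 1,833,000), the range the report's
# "plus or minus 133,000" statement describes.
```

The actual CPS standard errors account for the survey's complex sample design; the sketch shows only how a margin of error converts into an interval.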
Over half of colleges nationwide offered health insurance plans to their students in the 2007-2008 academic year, and these plans were customized and benefits varied across plans. We found that 4-year public and private nonprofit colleges were more likely to offer student insurance plans than were 2-year public colleges. Colleges that offered student insurance plans often limited access to the plan for part-time students in an effort to maintain premium affordability and plan sustainability. Colleges customize their student health insurance plans to reflect their priorities in making insurance premiums affordable for students while still offering plans that meet the needs of students. Colleges’ student insurance plans are also customized to coordinate in a variety of ways with colleges’ on-campus student health centers. The benefits offered by student insurance plans varied in terms of the services they covered and the extent to which they paid for or limited payment for covered services. Over half of colleges nationwide offered health insurance plans to their students in the 2007-2008 academic year. Based on our review of a random, generalizable sample of 340 colleges, 194 colleges offered student insurance plans in the 2007-2008 academic year, and therefore we estimate that about 57 percent of colleges nationwide offered student insurance plans in the 2007-2008 academic year. The remaining 43 percent of colleges did not offer student insurance plans, though some of these colleges distributed information to their students on other sources of health insurance, such as plans sold in the individual market. Certain types of colleges were more likely than others to offer student insurance plans. In particular, 4-year public colleges were more likely to offer student insurance plans than other types of colleges in the 2007-2008 academic year. 
Based on our review of 340 colleges, we estimate that 82 percent of 4-year public colleges nationwide offered student insurance plans in the 2007-2008 academic year, compared with 71 percent of 4-year private, nonprofit colleges and 29 percent of 2-year public colleges. (See table 1.) Large colleges were more likely to offer student insurance plans than medium-sized colleges. Specifically, we estimate that 64 percent of large colleges (those with over 5,000 undergraduate students) nationwide offered student insurance plans in the 2007-2008 academic year, compared with 52 percent of medium-sized colleges (those with 1,501 to 5,000 undergraduate students). (See table 2.) No other differences in the percentage of colleges, by size, that offered student insurance plans were statistically significant. We also found that colleges offering the student insurance plans we reviewed varied in the extent to which they made these plans available to their part-time students. Most—151 of the 165—colleges that made information available about part-time student eligibility for the plans we reviewed made their plans available to at least some part-time students in the 2007-2008 academic year. Specifically, about 30 percent of these colleges made the plans available to all their part-time students, and 61 percent made these plans available to some part-time students (see fig. 4). Colleges often limit eligibility for their health insurance plans for part-time students by setting a minimum number of credit hours that students must maintain in order to be eligible for the insurance. Of the eight colleges we reviewed in our case studies that had part-time students, seven colleges required students to enroll in a minimum number of credit hours before they could become eligible for their student insurance plans. Six of these seven colleges required students to enroll in six or more credits to establish eligibility for their student insurance plans. 
Colleges may limit part-time students’ eligibility for student insurance in an effort to maintain premium affordability and plan sustainability. According to college administrators we interviewed, colleges that allow all part-time students to access the student insurance plan may find that individuals with medical conditions associated with high costs will enroll in the college part-time in order to access the student insurance plan. This could, over time, drive up costs for all students on the plan, resulting in a plan that is less attractive to students and therefore more difficult to sustain. In an effort to avoid this scenario, colleges may limit part-time students’ access to the student insurance plan. For example, one college we reviewed discovered that some of its student insurance plan enrollees were senior citizens aged 60 to 70; these individuals registered in one-credit classes each semester to maintain eligibility for the insurance plan. In response, the college tightened eligibility requirements for the plan by allowing only degree-seeking students registered for a minimum number of credit hours to enroll. Student insurance plans were customized to reflect colleges’ priorities in making health insurance premiums affordable for their students while at the same time providing coverage that meets the needs of students. College students typically pay the full premium for student insurance plans, and college administrators we interviewed from most of the colleges in our case studies explained that maintaining premium affordability for their students is a priority. College administrators also told us that they want to provide coverage that meets the needs of their students so their students could avoid high medical bills and complete their college education. When designing their plans, college administrators vary in the extent to which they prioritize premium affordability over plan benefits. 
For example, some plans we reviewed charged relatively low annual premiums—$30 to $200. These plans also set relatively low limits on the amount they pay for covered services—with some having limits as low as $2,500 for each illness or injury. In contrast, other plans we reviewed charged relatively higher premiums, set higher limits on the amount they paid for covered services, and offered benefits such as preventive services and prescription drugs. While college student insurance plans offer varied combinations of premium levels and benefits, a college’s ability to offer a plan with a specific package of premiums and benefits is limited by several factors. For example, if a college receives federal financial assistance, its plan will be required to comply with applicable civil rights statutes. These requirements may affect the variability of certain benefits or premiums. These factors also include the plan’s historical claims experience—the amount the plan has paid in claims and the number of claims paid per enrollee—and projected enrollment in the plan. We found that student insurance plan premiums varied widely, reflecting the trade-offs colleges make in selecting plan premiums and benefits. Specifically, among the student insurance plans we reviewed for the 2007-2008 academic year that made information available about premiums, annual premiums ranged from about $30 to about $2,400, and the average annual premium was about $850. Eighty-six of the 191 plans we reviewed (45 percent) had annual premiums from $500 to $999. (See fig. 5.) Student insurance plans were also customized to coordinate with the services available at a college’s student health center, if one exists. The student insurance plans we reviewed in our case studies varied in the ways that they coordinated with health centers. 
For example, we reviewed some student insurance plans in our case studies that covered certain services—for example, prescription drugs—only at the student health center; required students to use the health center before seeking outside care in nonemergency situations; provided students with financial incentives—such as reduced enrollee cost sharing—to encourage use of the student health center instead of, or before, seeking care from other providers; or excluded services, such as preventive services, from coverage under the insurance plan when the services were provided free, or at low cost, to students at the health center. While some plans coordinated with student health centers in various ways, not all plans did so. In general, student insurance plans offered at colleges with student health centers that provide more services may be able to coordinate more with their health centers than can those offered at colleges with health centers that offer more limited services. According to insurance industry officials we interviewed, student insurance plans coordinate with student health centers to provide services to students, which can result in more affordable student insurance plan premiums. The benefits offered by student insurance plans varied in terms of the services they covered and the extent to which they paid for or limited payment for covered services. Although the student health insurance plans we reviewed in our case studies generally covered the same broad categories of services—including hospital inpatient and emergency services, physician’s office visits, mental health treatment, substance abuse treatment, and prescription drug coverage—the plans varied with respect to how they paid for or limited payment for services covered within these categories. 
Some plans offered by colleges we reviewed in our case studies limited coverage within the categories by excluding treatment for specific services, and according to college administrators we interviewed, this effort helps to keep premiums affordable. For example, some plans we reviewed excluded coverage for services such as testing and treatment for allergies or treatment for injuries sustained as a result of attempted suicide or while under the influence of drugs or alcohol. Furthermore, according to our review of plans offered by colleges in our case studies, plans also varied in the extent to which they covered preventive services. One insurance industry official told us that student insurance plans may exclude coverage for preventive services because their plans are intended to cover treatment for illnesses and injuries—not for wellness—and because doing so helps to keep plan premiums affordable. The student insurance plans we reviewed also varied widely in the total amount—or maximum benefit—they would pay for all covered services. Nearly all (96 percent) of the 194 student insurance plans we reviewed established a maximum benefit amount, and most (68 percent) did so on a per condition per lifetime basis. Under this type of plan, payments are tallied for covered services treating each medical condition and the maximum benefit amount renews for each condition. The maximum benefit amounts for the 131 plans we reviewed that set a maximum benefit on a per condition per lifetime basis ranged from $2,500 to $1 million per condition per lifetime, with a median amount of $25,000. Sixty-nine of these 131 plans (over half) had a maximum benefit amount less than $30,000 per condition per lifetime, and 46 plans (35 percent) had a maximum benefit amount of $50,000 per condition per lifetime. In addition, 16 plans with a maximum benefit per condition per lifetime established a maximum benefit amount greater than $50,000 per condition per lifetime. (See fig. 6.) 
While most student insurance plans we reviewed established a maximum benefit amount per condition per lifetime, others set maximum benefit amounts on a per condition per year, per year, or per lifetime basis. The maximum benefit amounts set for these plans ranged from $2,500 per condition per year to $1 million per lifetime. Figure 7 shows the distribution of maximum benefit amounts, by type, for the student insurance plans we reviewed. In addition, four other plans we reviewed offered unlimited lifetime benefits. The student insurance plans we reviewed also varied in how they limited payment for and coverage of plan benefits. Some plans we reviewed established limits (known as internal benefit limits) on the maximum amount the plan would pay for a particular service or set of services. For example, one plan we reviewed limited coverage for ambulance services to $150 per condition per lifetime and another plan limited coverage of all outpatient benefits (including doctor visits, emergency room visits, X-rays, laboratory fees, radiation, and chemotherapy) to $1,200 per condition per lifetime. Some plans we reviewed also used internal benefit limits to constrain the number of visits covered for a particular service. For example, we reviewed plans that limited coverage for outpatient doctor’s visits to 10 visits per year or less. Some plans we reviewed both set internal benefit limits and required enrollees to share in the cost of covered services. Low internal benefit limits can make it highly unlikely for enrollees’ coverage to meet the plan’s maximum benefit amount. For example, one plan we reviewed in our case studies had a maximum benefit of $50,000 per condition per lifetime and an internal benefit limit of $1,200 per condition per lifetime for all outpatient benefits, including coverage for emergency services, diagnostic services, radiation, and chemotherapy. 
Under this plan, students who require extensive outpatient services to treat one condition (such as a chronic condition or a serious illness like cancer) would be unlikely to ever meet the $50,000 per condition per lifetime maximum benefit amount. To increase the number of insured college students, colleges and states have taken a variety of steps, such as requiring students to have health insurance. We estimate that about 30 percent of colleges nationwide required their students to have health insurance for the 2007-2008 academic year, and some types of colleges—such as 4-year private nonprofit colleges—were more likely than others to have a health insurance requirement. Like colleges, some states and higher education governing boards have required college students to have insurance. Colleges and states have also taken other steps to increase the number of insured college students. Specifically, colleges have jointly purchased health insurance through consortiums, and states have expanded dependents’ eligibility for private health insurance. In order to increase the number of insured college students, some colleges have required their students to have health insurance. Students attending these colleges generally must enroll in the college student insurance plan or present proof of coverage from another source. Based on our review of health insurance requirements at a random, generalizable sample of 340 colleges, we estimate that about 30 percent of colleges nationwide required all their full-time students to have health insurance for the 2007-2008 academic year. In addition, 5 percent of colleges nationwide required some of their full-time students to have health insurance—for example, students living in dormitories or enrolled in certain degree programs. Some types of colleges were more likely than others to establish an insurance requirement.
Specifically, 4-year private nonprofit colleges were most likely to require full-time students to have coverage, and 2-year public colleges were least likely to require it. Based on our review of 340 colleges, we estimate that 62 percent of 4-year private nonprofit colleges nationwide required all full-time students to have health insurance for the 2007-2008 academic year, whereas 22 percent of 4-year public colleges and 3 percent of 2-year public colleges had this requirement. (See table 3.) Small colleges were generally more likely to require full-time students to have health insurance than large colleges. We estimate that 40 percent of colleges nationwide with 1,500 or fewer undergraduate students required all full-time students to have health insurance for the 2007-2008 academic year, whereas 16 percent of colleges nationwide with over 5,000 students had such a requirement. (See table 4.) In addition to increasing the proportion of students who are insured, a health insurance requirement generally enables a college to offer a plan with more comprehensive benefits or more affordable premiums, according to insurance industry officials we interviewed. According to these officials, more students enroll in college student insurance plans when colleges require students to have health insurance than when colleges do not have such a requirement. Specifically, insurance industry officials we interviewed told us that from 15 to 40 percent of students enroll in student insurance plans offered by colleges that have a health insurance requirement, whereas less than 10 percent of students enroll in plans that are offered by colleges without such a requirement. In addition, students who enroll in plans offered by colleges with health insurance requirements generally are healthier than those who voluntarily enroll in plans offered by colleges without a requirement.
Because larger and healthier populations typically enroll in student insurance plans offered by colleges with an insurance requirement, these colleges are generally able to offer plans with more comprehensive benefits or more affordable premiums than they could offer without such a requirement. For example, an administrator from one public college without a health insurance requirement estimated that implementing such a requirement could decrease student insurance plan premiums by as much as 50 percent. Although requiring students to have health insurance can increase the number of insured college students and allow the college to offer a more attractive plan, colleges face challenges implementing such a requirement. According to college administrators and insurance industry officials, college administrators face challenges implementing a requirement because doing so adds a fee to the total cost of attending college at a time when many are concerned about the rising cost of attendance. Administrators of public colleges are especially concerned about adding a new fee because, as one insurance industry representative noted, public colleges have lower annual tuition than private colleges, so the addition of a health insurance premium to student fees results in a larger percentage increase in the cost of attendance. According to college administrators we interviewed, some colleges compete for students based on cost of attendance, and administrators of these colleges are concerned that implementing a health insurance requirement would put them at a disadvantage in attracting students. Because of challenges in implementing a requirement at the college level, some college administrators and insurance industry officials would prefer to see such requirements established by a higher authority, such as a state or higher education governing board, or by a group of peer institutions in order to “level the playing field” for colleges.
Some states and higher education governing boards have implemented health insurance requirements for college students. For example, Massachusetts and New Jersey require students attending college in their states to have health insurance as a condition of enrollment. Massachusetts, which implemented its requirement in 1989, requires all students enrolled in college for at least three-quarters of full-time status to either purchase a student insurance plan offered by their colleges or present proof of comparable coverage. In 1991, New Jersey also began requiring all full-time students attending college in the state to have health insurance. Similar to states’ requirements, some higher education governing boards, such as the Regents of the University of California and the Idaho State Board of Education, have also implemented health insurance requirements for college students within their respective state postsecondary educational systems. Some colleges have jointly purchased health insurance through consortiums, and this effort can increase the availability of health insurance for college students and the number of students who are insured. Consortiums are groups of colleges that join together to participate in or pool resources for a common goal, such as purchasing. Based on our review of published reports and student insurance plans and our interviews with insurance industry officials, we identified 37 consortiums—comprising over 500 colleges—that jointly purchased student health insurance plans in academic year 2007-2008. For example, California’s 109 community colleges are part of a consortium known as the Community College League of California, which jointly purchases student health insurance. In addition, the Wisconsin Association of Independent Colleges and Universities jointly purchases student health insurance for its 20 member colleges. 
We found that 32 percent of the 194 colleges we reviewed that offered student insurance plans in academic year 2007-2008 purchased their plans through the consortiums we identified. (Our analysis may underestimate the percentage of colleges that purchase student insurance through consortiums because our list of colleges participating in consortiums is not comprehensive. Most (48) of the 62 colleges we identified as purchasing health insurance through a consortium did not require students to have insurance.) Consortiums provide small colleges and 2-year public colleges with a way to purchase health insurance and offer it to their students when the colleges may otherwise be unable to do so. While 2-year public colleges were the least likely to offer student insurance plans for academic year 2007-2008, we found that 74 percent of the 2-year public colleges we reviewed that offered plans purchased them through the consortiums we identified. Some states have expanded dependents’ eligibility for private health insurance, and because most college students obtain health insurance as dependents, this effort has made health insurance more available to college students. Dependent coverage purchased through insurance carriers must meet state requirements regarding eligibility for this coverage. Although these requirements vary by state, plans have traditionally offered health insurance coverage for dependents through age 18, and have generally continued coverage for dependents through age 22 only if they attend college full-time. In addition, some states have made dependent coverage available beyond age 18 for those who are not full-time students. By doing so, states have increased the availability of health insurance for part-time students and those who need to leave college for any reason.
For example, in 2006 and 2007, New Jersey and Connecticut passed laws requiring that dependent coverage be available to certain state residents regardless of their student enrollment status up to ages 30 and 26, respectively. Michigan requires that dependent coverage be available for full-time and part-time college students for up to 12 months, up to the age at which dependent coverage otherwise terminates, when students take medical leaves of absence. MICH. COMP. LAWS ANN. § 550.1409a (2007) (enacted by 2006 Mich. Pub. Acts 538 (effective 2007)). Virginia requires that dependent coverage be available for college students for up to 12 months when students up to age 25 take medical leaves of absence. VA. CODE ANN. § 38.2-3525 (2007) (enacted by 2007 Va. Acts ch. 428). We provided a draft of this report to ACHA, an advocacy and leadership organization for college and university health. ACHA officials provided a technical comment, which we incorporated. We are sending copies of this report to interested congressional committees. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at (202) 512-7114 or at dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. We examined (1) the insurance status of college students, (2) characteristics of uninsured college students and the financial impact of this population on health care systems, (3) the extent to which colleges offered student insurance plans in the 2007-2008 academic year and the characteristics of available plans, and (4) efforts to increase the number of insured college students.
Unless otherwise noted, all of our estimates on uninsured college students (from the Current Population Survey (CPS)) and uncompensated care (from the Medical Expenditure Panel Survey (MEPS)) are subject to a sampling error of plus or minus 5 percentage points, and all of our estimates on college student insurance plan availability and requirements (from our data collection) are subject to a sampling error of plus or minus 10 percentage points. We reviewed all data for reasonableness and consistency and determined that the data were sufficiently reliable for our purposes. We performed our work in accordance with generally accepted government auditing standards from May 2007 through March 2008. To describe the insurance status of college students, we analyzed data from the 2007 Annual Social and Economic Supplement to CPS, conducted by the U.S. Census Bureau for the Bureau of Labor Statistics. CPS is designed to represent a cross section of the nation’s civilian noninstitutionalized population. In 2007, about 83,200 households were included in the sample for the Annual Social and Economic Supplement, and the total response rate was about 83 percent. The supplement gathers information about the type of health insurance coverage that respondents had at any time during the previous calendar year, including private health insurance, such as coverage provided through an employer (employer- sponsored plans) and insurance directly purchased by the beneficiary (including college student insurance plans and individual market plans), as well as through public health insurance programs, such as Medicaid. The supplement also gathers data on demographic characteristics. The 2007 Annual Social and Economic Supplement to CPS asked about health insurance coverage during 2006. Specifically, the survey asked whether a respondent was covered by health insurance in the last year, and whether individuals had insurance “in their own name” or as dependents of other policyholders. 
To identify college students, we focused our analysis on individuals aged 18 through 23 who reported being students and we excluded those who already had a bachelor’s degree, master’s degree, professional school degree, or doctoral degree. To assess the reliability of 2007 Annual Social and Economic Supplement to CPS data, we (1) reviewed existing documentation related to the data sources, (2) electronically tested the data to identify obvious problems with completeness or accuracy, and (3) compared our results to published sources. Based on these reviews, we determined that the data were sufficiently reliable for the purposes of this report. Unless otherwise noted, all of our estimates are within plus or minus 5 percentage points of the population value at a 95 percent confidence level. To describe the characteristics of uninsured college students and the financial impact of this population on health care systems, we analyzed data from CPS and the Department of Health and Human Services’ 2005 MEPS. To describe the demographic characteristics of college students more likely to be uninsured, we analyzed data from the 2007 Annual Social and Economic Supplement to CPS on the demographic characteristics (age, race and ethnicity, family income, and region) and enrollment status of college students aged 18 through 23 who were insured and uninsured in 2006. Unless otherwise noted, our findings regarding the demographic characteristics of college students who were more likely to be uninsured in 2006 are significant at a 95 percent confidence level and all of our estimates are within plus or minus 5 percentage points of the population value at a 95 percent confidence level. To describe the financial impact of uninsured college students on health care systems, we estimated the amount of non-injury-related uncompensated care incurred by uninsured college students by analyzing 2005 MEPS data, which was the most recently available data at the time we did our work. 
MEPS is designed to provide nationally representative data on health care use and expenditures of U.S. civilian noninstitutionalized individuals. We used data from the MEPS Household Component. This longitudinal survey collects information on health insurance coverage and use of health care services of individuals over a 2½-year period. The survey also gathers data on the demographic characteristics of respondents. In 2005, the Household Component surveyed 12,810 families representing 32,320 individuals, and had a total response rate of about 61 percent. The Household Component collects information directly from medical providers, such as hospitals, physicians, and pharmacies, to validate self-reported information provided by survey respondents. We analyzed Household Component data to identify college students who were uninsured for all or part of 2005. Specifically, to identify college students, we included individuals aged 18 through 23 who reported being students at any time during 2005. Of these individuals, we included those who reported having a high school diploma, GED, or other degree, or who had completed 11 or more years of education at the time they began the Household Component survey. We excluded those who reported having a bachelor’s degree, master’s degree, or doctoral degree. We identified college students as uninsured if they reported being uninsured for any part of 2005. We estimated the amount of non-injury-related uncompensated care incurred by these uninsured college students only during the time they were uninsured. Uninsured college students may also incur uncompensated care for medical events related to injuries. However, our estimate does not include uncompensated care for injury-related medical events because we could not reliably estimate the cost of this care. Our estimate also does not include medical care provided to uninsured college students for which a partial payment was made.
As a result of these limitations, our estimate understates the total amount of uncompensated care incurred by uninsured college students in 2005. To assess the reliability of the MEPS data, we reviewed existing documentation related to the data source, and electronically tested the data to identify obvious problems with completeness or accuracy. Based on these reviews, we determined that the data were sufficiently reliable for the purposes of this report. Unless otherwise noted, all of our estimates are within plus or minus 5 percentage points of the population value at a 95 percent confidence level. To describe the extent to which colleges offered student insurance plans and the characteristics of available plans, we collected data on student insurance plans offered at a random sample of 340 colleges. We also obtained more detailed information about the characteristics of plans offered by 10 colleges by conducting case studies through which we interviewed college administrators and reviewed plan documents. We also interviewed experts as well as representatives from eight insurance industry companies to obtain information on plan availability and characteristics. To provide context, we also summarized characteristics of employer-sponsored and individual market policies as reported by two 2006 national employer health benefits surveys and in a 2007 national individual market survey. We collected information on the availability and characteristics of college student health insurance plans at a random sample of 340 colleges. We drew the sample of colleges from the Department of Education’s Integrated Postsecondary Education Data System (IPEDS), which contains the most comprehensive data on all postsecondary institutions.
The sample consisted of active 2-year public, 4-year public, and 4-year private nonprofit colleges that in 2005 had undergraduate enrollment of at least 200 and participated in federal student financial aid programs, such as grant and loan programs, authorized by Title IV of the Higher Education Act of 1965. We drew a stratified random sample of 340 colleges from the population of 2,805 colleges that met these criteria. We grouped colleges into one of three categories based on the size of their undergraduate student population: small (200 to 1,500 students), medium-sized (1,501 to 5,000 students), and large (5,001 or more students). We selected our sample from nine strata defined by size of undergraduate student enrollment (small, medium, and large) and institution type (2-year public, 4-year public, and 4-year private nonprofit). Each college had a known probability of being selected. The population and sample by strata are shown in table 5. To assess the completeness of the IPEDS data we used to generate our sample, we reviewed technical documentation and we performed electronic tests to look for missing or out-of-range values. On the basis of these reviews and tests, we found the IPEDS data sufficiently reliable for the purpose of generating a sample of colleges. To gather information on the availability of health insurance plans for college students during the 2007-2008 academic year and the characteristics of available plans, we reviewed the Web sites of or spoke to officials at each of the 340 colleges in our sample. Colleges vary in the amount of information they post to their Web sites about their student health insurance plans and insurance policies. We reviewed information from each college’s student health center, student services, or student affairs Web site. If information was not readily available from these Web sites, we searched each college’s student handbook and general college Web site. 
For those colleges that did not have Web sites or for which we could not find information about the college’s student health insurance plan or policies for academic year 2007-2008, we spoke with college officials and asked structured questions to gather this information. We reviewed information relevant to degree-seeking domestic undergraduate students only. For each of the 340 colleges in our sample, we gathered information for academic year 2007-2008 on (1) whether the college required full-time degree-seeking students to have health insurance; (2) whether the college offered a student insurance plan; and if yes, (3) whether part-time undergraduate students were allowed to enroll in the plan; (4) the plan’s premium; and (5) the plan’s maximum benefit amount. We determined that a college offered a student insurance plan when the college’s Web site or a college official identified a health insurance plan that was specifically intended for the college’s students. When gathering information on maximum benefit amounts, we did not consider additional coverage that college students may purchase if they enroll in the college’s basic plan because, according to insurance industry officials we interviewed, a small portion of college students—generally less than 5 percent of those enrolled in the basic plan—purchase this supplemental coverage when it is offered. When insurance plan premiums were based on student age or enrollment status, we gathered information on premiums for full-time students aged 18 through 23. We obtained responses about the availability of college student insurance from 100 percent of the colleges in our sample. However, we were unable to gather complete information about the characteristics of all student insurance plans offered by the colleges in our sample. Of the 194 plans offered by colleges in our sample, we gathered information on part-time student eligibility for 165 plans and premium information for 191 plans. 
We collected information on the maximum benefits established for all 194 plans. Of these 194 plans, 186 plans established a maximum benefit amount, and 5 of these 186 plans established two amounts. We therefore reviewed and report on the 191 maximum benefit amounts used for 186 plans. Using an electronic data collection instrument, we extracted data on the availability and characteristics of college student insurance plans from documentation of student health insurance plan offerings from August 2007 through October 2007 for a sample of 340 colleges. To ensure the accuracy of the extracted data, we performed a verification audit for more than 25 percent of the electronic records by comparing them to the source documentation, and we corrected the errors we found. During our analysis, we electronically tested the data for reasonableness, including testing for out-of-range values and statistical outliers. Analysis programs were also independently verified. Based on these reviews, we determined that the data were sufficiently reliable for the purposes of this report. We weighted each sampled college in our analysis to represent all colleges in the population, which allowed us to generalize our results on the availability of student insurance plans to the population of U.S. colleges, according to the types and sizes of colleges sampled. Because our sample was of colleges, and not student insurance plans, our findings on the characteristics of these plans are not generalizable to the universe of student insurance plans offered nationwide. All of our estimates on college student insurance plan availability are within plus or minus 10 percentage points of the population value at a 95 percent confidence level. In addition, unless otherwise noted, our results on the types of colleges more likely to offer student insurance plans are significant at a 95 percent confidence level. 
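The design-based weighting described above can be sketched as follows. This is a simplified illustration with hypothetical strata, counts, and field names, not the study's actual weighting procedure: each sampled college in a stratum receives a base weight equal to the stratum's population size divided by its sample size, and population proportions are estimated from the weighted sample.

```python
def stratum_weights(population_sizes, sample_sizes):
    """Base sampling weight for each stratum: one sampled college in
    stratum h represents N_h / n_h colleges in the population."""
    return {h: population_sizes[h] / sample_sizes[h] for h in population_sizes}

def weighted_proportion(records, weights, has_attribute):
    """Estimate the population proportion of colleges with some
    attribute (e.g., offering a student insurance plan) from a
    stratified sample of records tagged with their stratum."""
    total = sum(weights[r["stratum"]] for r in records)
    hits = sum(weights[r["stratum"]] for r in records if has_attribute(r))
    return hits / total

# Hypothetical example: two strata of colleges
weights = stratum_weights({"small_private": 100, "large_public": 50},
                          {"small_private": 10, "large_public": 25})
sample = ([{"stratum": "small_private", "offers_plan": True}] * 6 +
          [{"stratum": "small_private", "offers_plan": False}] * 4 +
          [{"stratum": "large_public", "offers_plan": True}] * 5 +
          [{"stratum": "large_public", "offers_plan": False}] * 20)
share = weighted_proportion(sample, weights, lambda r: r["offers_plan"])
```

Note how the weighted estimate differs from the raw sample rate (11 of 35 colleges): the weighting restores each stratum to its share of the population, which is what allows generalization to all colleges of the sampled types and sizes.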
To obtain detailed information on the characteristics of student insurance plans, we conducted case studies of 10 colleges’ experiences offering health insurance to their students. Through these case studies we reviewed the plans offered by 10 colleges and interviewed officials from each of these colleges. We judgmentally selected colleges for our case studies that represented a range of three characteristics—college type (private nonprofit or public), health insurance requirement (presence or absence of a requirement), and companies involved in insuring and administering the plan. The colleges we selected for review were Duke University, Colorado State University, Ohio State University, Princeton University, Santa Rosa Junior College, University of Colorado at Boulder, University of Georgia, University of Minnesota Twin Cities, University of Utah, and Washington University in St. Louis. Together, these 10 colleges comprised 3 private 4-year colleges, 6 public 4-year colleges, and 1 public 2-year college. Six of these colleges required full-time students to have coverage and 4 did not. These 10 colleges worked with a total of 16 insurance industry companies to insure their students and administer their student insurance plans. We reviewed plan and policy documentation available for each college plan, and interviewed officials from each college regarding their college student insurance plan and policies. The results of this review were used to gain contextual information and provide detailed illustrations that are neither representative of all plans nor generalizable to all colleges offering student health insurance plans. To obtain detailed information about the factors that affect student insurance plan characteristics, we interviewed officials from eight insurance industry companies serving the college student insurance market. We judgmentally selected these companies for interview based on information we received from experts and college administrators. 
Specifically, we interviewed officials from The Chickering Group (a subsidiary of Aetna) and UnitedHealthcare StudentResources, as well as officials from Blue Cross Blue Shield of North Carolina, Blue Cross Blue Shield of Massachusetts, Koster Insurance, The Maksin Group, Student Assurance Services Incorporated, and Wells Fargo Insurance Services. To describe efforts to increase the number of insured college students, we reviewed published reports and conducted interviews with insurance industry officials about efforts that would either increase the number of insured college students aged 18 through 23 or increase the availability of insurance for all or most college students aged 18 through 23. In addition, to estimate the number of colleges with health insurance requirements, we collected information on the health insurance requirements at a random sample of 340 colleges. The sampling methods we used enabled us to generalize our results regarding health insurance requirements at colleges to the population of U.S. colleges, according to the types and sizes of colleges sampled. Unless otherwise noted, our findings regarding the types of colleges that are more likely to have a requirement are significant at a 95 percent confidence level, and all of our estimates are within plus or minus 10 percentage points of the population value at a 95 percent confidence level. As noted above, we took multiple steps to ensure the accuracy of the data we collected on the health insurance requirements at a sample of 340 colleges. Based on these reviews, we determined that the data were sufficiently reliable for the purposes of this report. To describe health insurance requirements implemented by states and higher education governing boards, we reviewed relevant state laws and higher education governing board policies. 
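The "plus or minus X percentage points at a 95 percent confidence level" statements used throughout this appendix can be illustrated with the standard normal-approximation margin of error for a proportion. The sketch below assumes simple random sampling and ignores the stratification and subgroup analyses that widen the study's actual sampling errors:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Approximate half-width of a 95% confidence interval for an
    estimated proportion p from a sample of size n (normal
    approximation; ignores design effects from stratification)."""
    return z * sqrt(p * (1 - p) / n)

# e.g., an estimated proportion of 0.30 from a sample of 340 colleges
moe = margin_of_error(0.30, 340)
```

Under these simplifying assumptions the margin is roughly 5 percentage points; estimates for subgroups (single strata or college types) rest on smaller samples, which is one reason the report quotes wider bounds such as plus or minus 10 percentage points.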
To describe colleges’ efforts to jointly purchase student insurance plans through consortiums, we developed a list of consortiums based on insurance industry officials’ knowledge of consortiums and our review of student insurance plans, and we identified which of the 194 colleges in our review that offered student insurance plans did so through a consortium on our list. Because our list of colleges participating in consortiums is not comprehensive, we may underestimate the percentage of colleges that purchase student insurance through consortiums. Finally, to describe efforts to expand dependents’ eligibility for private health insurance, we reviewed states’ laws. In addition to the individual named above, Kristi Peterson, Assistant Director; Krister Friday; Christopher Howard; Emily Larson; Lisa Motley; Dan Ries; Patricia Roy; and Suzanne Worth made key contributions to this report.
College students face challenges obtaining health insurance--they may not have access to insurance through an employer, and as they get older, they may lose dependent coverage obtained through a parent's plan. Federal law ensures continued access to health insurance for some, but not all, such students. Without health insurance, college students may be unable to pay for their health care, and the cost of this care may be passed on to federal and state payers, such as Medicaid. College students may have access to student insurance plans offered by their colleges. GAO was asked to report on uninsured college students, student insurance plans, and efforts to increase the number of insured students. GAO reviewed (1) college students' insurance status, (2) uninsured college students' characteristics, (3) the extent to which colleges offered student insurance plans and the characteristics of available plans, and (4) efforts to increase the number of insured students. GAO analyzed data from a national survey on college students' insurance status and uninsured college students' characteristics. GAO collected data from 340 colleges on the availability of student insurance plans and the characteristics of available plans, and also gathered detailed plan information from case studies of 10 colleges and interviews with experts and insurance industry officials. GAO also reviewed some states' laws. About 80 percent of college students aged 18 through 23 had health insurance in 2006. While 67 percent of college students were covered through employer-sponsored plans, 7 percent were covered through other private health insurance plans, such as student insurance plans, and 6 percent were covered by public programs, such as Medicaid. Most insured students were covered on a policy under another person's name--for example, as a dependent.
About 20 percent of college students aged 18 through 23 (1.7 million) were uninsured in 2006, and certain groups of students--such as part-time students, nonwhite students, and students from families with lower incomes--were more likely than others to be uninsured. The characteristics of uninsured students are consistent with those of the uninsured found in the general U.S. population. Over half of colleges nationwide offered student insurance plans in the 2007-2008 academic year, and plans' benefits varied. Colleges customized their plans to reflect their priorities in making premiums affordable for students while providing coverage that meets students' needs. The plans GAO reviewed varied in the services they covered and how they paid for covered services. Specifically, some plans excluded preventive services from coverage and some plans limited payment for benefits such as prescription drugs. In addition, plans also varied in terms of premiums and maximum benefits, with annual premiums ranging from $30 to $2,400 and maximum benefits ranging from $2,500 for each illness or injury to unlimited lifetime coverage. Colleges and states have taken a variety of steps to increase the number of insured college students. For example, GAO estimated that about 30 percent of colleges nationwide required students to have health insurance in academic year 2007-2008, and some states also have health insurance requirements for college students. Finally, some states have expanded dependents' eligibility for private health insurance, which makes insurance more available to college students who obtain coverage as dependents. Officials from the American College Health Association (ACHA)--an advocacy and leadership organization for college and university health--provided a technical comment, which we incorporated.
The Apache Longbow helicopter is designed to conduct precision attacks in adverse weather and on battlefields obscured by smoke, automatically engage multiple targets, and provide fire-and-forget missile capability. The Apache Longbow configuration consists of a modified airframe, a fire control radar, and a new Longbow (radio frequency) Hellfire missile. The Army plans to upgrade the entire fleet of 758 Apache helicopters to the Apache Longbow configuration but outfit only 227 with the radar and a more powerful 701C engine. The remaining 531 non-radar-equipped Apache Longbows will be equipped with the less powerful 701 engine, even though they will be reconfigured to accept the radar and upgraded 701C engine. In its fiscal year 2000-2005 program plan, the Army has proposed a reduction in the number of Apaches that will be converted to the Apache Longbow configuration. The April 1994 Apache Longbow’s operational requirements document (ORD) prescribes performance capabilities required for the system’s survivability and lethality. These capabilities include meeting the vertical flight requirement, carrying the Longbow Hellfire missile, and passing target data when in line of sight and not in the line of sight. For the Apache Longbow, the Army has identified performance objectives (desired capabilities) and performance thresholds (minimum capabilities). The Army designated selected thresholds as key performance parameters. According to the Department of Defense’s (DOD) acquisition guidelines, key performance parameters are those capabilities that are so significant that failure to meet the threshold can be a cause for the program to be reassessed or terminated. 
The Apache Longbow ORD prescribes that, for survivability in the combat mission configuration, the system is required to achieve a VROC of at least 450 feet per minute at 4,000 feet and 95 degrees Fahrenheit while carrying 4 air-to-air missiles, 8 Hellfire missiles (4 semiactive laser Hellfire missiles and 4 Longbow Hellfire missiles), 320 rounds of 30-millimeter ammunition, and a full fuel load. VROC indicates the helicopter’s ability to climb vertically from a hover position and its ability to conduct lateral maneuvers. Both lateral and vertical acceleration provide the agility a helicopter needs to extricate itself from threatening situations. In October 1994, the Joint Requirements Oversight Council validated the ORD’s VROC requirement of 450 feet per minute as a key performance parameter. The Council also made 12 Longbow Hellfire missiles a key performance parameter, replacing the ORD’s combat mission requirement for 8 Hellfire missiles. In November 1994, the Army directed the Training and Doctrine Command’s Apache Longbow system manager and the Program Executive Officer for Aviation to update the ORD to reflect the changed requirement. The Apache Longbow ORD and contract reflect the VROC requirement but not the revised Hellfire requirement. The ORD describes non-line-of-sight communications capability as a critical system performance objective, but not a key performance parameter, of the Apache Longbow helicopter. The non-line-of-sight radio gives the radar- and non-radar-equipped Apache Longbow helicopters the ability to transfer targeting data when not in direct line of sight. Both the design and use of the fire control radar depend on the ability of the radar-equipped Apache Longbow to utilize terrain and vegetation for concealment, rise above a tree line or hill to acquire target data, return to a concealed position to transfer the target data to another Apache Longbow, and fire the Longbow Hellfire missile. 
The Army plans to use the ARC-220 radio to meet this requirement.

The 227 radar-equipped Apache Longbows will not be able to achieve the combat mission VROC requirement of 450 feet per minute when carrying 12 missiles with a full fuel load. Thus, the system’s survivability will be adversely impacted. The contractor reports that, in the combat mission configuration, the Apache Longbow weighs 16,535 pounds after burning off 1,084 pounds of fuel. At this weight, the contractor reports that the Apache Longbow can achieve a VROC of 895 feet per minute, exceeding the required 450 feet per minute. From Army and contractor records, we identified those items that would have to be added to the helicopter to meet the ORD’s combat mission requirement. When the reported Apache Longbow weight of 16,535 pounds is increased by the fuel burn-off weight of 1,084 pounds to meet the ORD’s full fuel load requirement, the helicopter’s weight is 17,619 pounds. When the contractor’s reported weight is increased by the weight associated with meeting the Hellfire missile requirement of 12 instead of 8 (430 pounds), the necessary launcher and pylon to carry them (207 pounds), and a full fuel load (1,084 pounds), we determined that the weight of the Apache Longbow would be about 18,256 pounds. According to Army engineers, an increase in weight of one pound causes a corresponding decrease in VROC of 0.839 feet per minute. With an increase in weight of either 1,084 or 1,721 pounds, the Apache Longbow would be incapable of meeting the validated VROC requirement of 450 feet per minute at 4,000 feet and 95 degrees Fahrenheit. To achieve the validated VROC requirement of 450 feet per minute and carry the required 12 Hellfire missiles, aircraft weight must be reduced. Since the Apache Longbow’s 701C engine is operating at 100-percent maximum-rated power in the combat mission configuration when VROC is measured, no reserve engine power is available.
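The weight-to-VROC arithmetic above can be sketched as a short calculation. This is only an illustration of the report's figures: the linear factor of 0.839 feet per minute of VROC lost per added pound is the Army engineers' approximation, and the base weight and VROC are the contractor-reported values.

```python
# Illustrative VROC calculation using the figures reported above.
# The linear weight-to-VROC factor is the Army engineers' approximation.

BASE_VROC_FPM = 895       # contractor-reported VROC at 16,535 lb (after 1,084 lb fuel burn-off)
VROC_PER_LB = 0.839       # ft/min of VROC lost per added pound
REQUIRED_VROC_FPM = 450   # validated key performance parameter

def vroc_at(extra_weight_lb):
    """Estimated VROC after adding weight to the 16,535-lb base configuration."""
    return BASE_VROC_FPM - VROC_PER_LB * extra_weight_lb

# Case 1: restore the full fuel load only (+1,084 lb)
full_fuel = vroc_at(1_084)

# Case 2: full fuel plus 4 additional Hellfires (430 lb) and
# their launcher and pylon (207 lb), for a total of +1,721 lb
full_load = vroc_at(1_084 + 430 + 207)

print(f"Full fuel only:   {full_fuel:8.1f} ft/min")
print(f"Full combat load: {full_load:8.1f} ft/min")
print(f"Meets {REQUIRED_VROC_FPM} ft/min requirement? {full_load >= REQUIRED_VROC_FPM}")
```

With the report's figures, both cases come out negative, well below the 450 feet-per-minute threshold, consistent with the finding that the requirement cannot be met without reducing weight.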
In describing the Apache Longbow’s ability to meet the VROC requirement while carrying the 12 Hellfire missiles, the Army stated, in its November 1995 acquisition program baseline, that the helicopter can only achieve the VROC requirement by reducing weight, such as ordnance and/or fuel load. According to Army officials, reduced VROC performance will decrease the helicopter’s ability to evade enemy fire, thereby decreasing survivability. Also, if the mission ordnance load is reduced to lower weight and, therefore, achieve the desired VROC, lethality will be decreased because less ammunition and/or fewer missiles will be available for use against enemy targets. If the mission fuel load is reduced for the same purpose, mission range and/or loiter time will be decreased. On the basis of the Army’s planned system enhancements, the contractor expects the Apache Longbow’s weight to increase by approximately 1,000 additional pounds when existing requirements, such as improved avionics equipment, the non-line-of-sight radio, and fixes for systemic problems (including a new transmission and main gear box), are added to the helicopter. Also, based on new requirements, the contractor projects that weight will increase by an additional 500 pounds for items such as sensor improvements, a redesigned rotor system, an advanced weapon suite, and improved crew seats. With the additional 1,500 pounds, the Army will be further challenged to find ways to meet the Apache Longbow’s VROC requirements. The Apache Longbow ORD also requires that the 531 non-radar-equipped helicopters have a VROC equal to or greater than that of the radar-equipped aircraft to ensure that combat effectiveness is maintained. The non-radar-equipped helicopter has a less powerful engine, and the contractor reports that this helicopter has significantly less VROC capability than the radar-equipped helicopter.
To improve VROC and corresponding maneuverability on non-radar-equipped aircraft, the Army plans to upgrade the 701 engines on these aircraft to the more powerful 701C engines. According to the Army, this upgrade will cost about $1.1 million per aircraft, or about $600 million for 531 helicopters. This requirement is included in the Army’s future funding plans. The additional power provided by the 701C engines may not provide the lift capability the non-radar-equipped Apache Longbow will need for the combat mission. Removing the radar will decrease weight by about 450 pounds. However, fuel and missile load requirements for the combat mission will increase weight by about 1,721 pounds. The incremental increase of 1,271 pounds would have an adverse impact on the non-radar-equipped Apache Longbow’s already limited VROC performance. At initial operational capability in October 1998, the Apache Longbow will not be able to meet the requirement to transfer target data to other helicopters when out of line of sight. The Army plans to provide this capability through the ARC-220 radio, but because of funding and developmental problems, it does not know when this required capability will be available. The ORD requires that all Apache Longbow helicopters be able to transmit, receive, and coordinate battlefield information. The Apache Longbow must interface with existing and planned Army command, control, communications, and intelligence systems. The communications system must support the transfer of mission data from ground units to aircraft, aircraft to aircraft, and aircraft to ground units. This communications capability requires airborne and ground non-line-of-sight communications. As of May 1998, unresolved technical issues, including the amount and severity of electrical interference generated, have affected the radio’s development. The Army’s ARC-220 project manager did not know when radio delivery would begin.
The Army plans to address this and other concerns with additional testing; however, the Army does not currently plan to start testing the ARC-220 radio in the Apache Longbow until fiscal year 2000. According to the ARC-220 project manager, no other radio can provide the non-line-of-sight communications capability for the Apache Longbow. Also, the Army has decided to equip only one-half, or 379, rather than all 758 helicopters with the ARC-220 radio due to changing Army funding priorities. Therefore, 50 percent of the Apache Longbow fleet will be unable to transfer or receive targeting data when out of the line of sight. The 50-percent reduction in planned radio procurement quantities will result in decreased lethality of the Apache Longbow fleet due to the inability to transfer target data between Apache Longbow helicopters. Also, the fleet’s survivability will be decreased because of the helicopter’s greater exposure to hostile forces. The Army’s 227 radar-equipped Apache Longbow helicopters will be too heavy to achieve the validated VROC requirement of 450 feet per minute in the combat mission configuration when carrying a full fuel load and 12 missiles. According to the ORD, if the VROC requirement is not met, the helicopters will not have acceptable levels of maneuverability and agility to successfully operate in combat. Army plans to modify the system will add weight and therefore exacerbate this problem. The impact of increased weight on the ability of non-radar-equipped Apache Longbow helicopters to achieve VROC performance requirements is even greater because of their less-powerful engines. At initial operational capability, the Apache Longbow will not have a radio that will allow it to transfer target data between helicopters when concealed or not in the line of sight. Unresolved technical issues have delayed the radio’s development. 
More importantly, the Army plans to install the non-line-of-sight radio on only one-half of the total Apache Longbow helicopter fleet. The 50-percent reduction in planned procurement quantities will result in decreased lethality of the Apache Longbow fleet due to the inability to transfer target data between Apache Longbow helicopters. Also, the fleet’s survivability will be decreased because of the helicopter’s greater exposure to hostile forces. We recommend that the Secretary of Defense reassess the Apache Longbow program to determine whether its performance capabilities will be sufficient to meet its critical warfighting missions. In written comments on a draft of this report, DOD partially concurred with the findings but nonconcurred with the recommendation. DOD’s comments are reprinted in their entirety in appendix I, along with our evaluation of them. In disagreeing with our recommendation, DOD contends that past analyses have shown that the Apache Longbow can meet its performance requirements and, therefore, it can meet its critical warfighting missions. DOD believes there is no need to repeat these analyses. However, it noted that it plans to reassess the program as specified in the full-rate production Acquisition Decision Memorandum. The Army has identified VROC and Hellfire missile load among the most critical Apache Longbow performance characteristics—key performance parameters. While the Apache Longbow may have met performance requirements in earlier analyses, it does not currently meet the VROC and missile load key performance parameters required to execute its combat and primary missions. DOD Regulation 5000.2 clearly defines the importance of key performance parameters as those capabilities or characteristics so significant that failure to meet them can be cause for the program to be reassessed or terminated.
The Acquisition Decision Memorandum requires that the program manager evaluate cost, schedule, and performance tradeoffs to minimize the cost of ownership; it does not require a fundamental reassessment of the program, as we are recommending. Therefore, based on the issues raised in this report and DOD’s guidance, we disagree with DOD’s position on our recommendation and continue to maintain that the Apache Longbow program should be reassessed. To determine whether Apache Longbow performance requirements and operational capabilities, including the ability to transfer data when not in the line of sight, will be met, we interviewed cognizant officials and reviewed relevant Army and DOD documents related to the development and acquisition of the Apache Longbow. These documents include Defense Acquisition Executive Summaries, the Apache Longbow’s ORD and Acquisition Program Baseline, key performance parameters, system specifications, Selected Acquisition Reports, and the Acquisition Decision Memorandum. In addition, we reviewed contractor data, such as project progress reviews, and selected documents related to the original Apache helicopter. To calculate aircraft weights, we used the weights shown in the Weight and Balance Reports prepared by the contractor after the actual weighing of each remanufactured aircraft. The Army uses these weights in accepting aircraft, and they are the basis for all subsequent modifications to each helicopter. We did not independently verify these weights. We calculated VROC utilizing accepted factors and methodologies provided by engineers from the Army’s Aviation Research, Development, and Engineering Center. We also used data from these officials illustrating how various factors, such as weight, altitude, temperature, and flight duration, affect helicopter performance under different mission scenarios. 
In addition, we received information from these officials on power requirements, velocities, and fuel consumption rates that supported our calculations of VROC. We discussed our methodology with Army engineering officials, and they agreed that it would provide a basis for evaluating the impact of weight increases on VROC. We conducted our work at the Program Office for Aviation, the Apache Attack Helicopter Project Management Office, and the Office of the Executive Director for Aviation Research, Development, and Engineering Center at the Army’s Aviation and Missile Command, Huntsville, Alabama; the Joint Chiefs of Staff, Washington, D.C.; the Office of the Assistant Secretary of the Army for Research, Development, and Acquisition, Washington, D.C.; the U.S. Army Office of the Deputy Chief of Staff for Operations and Plans, Washington, D.C.; and the Army’s Training and Doctrine Command, Fort Rucker, Alabama. In addition, we interviewed officials at the Boeing Company and Defense Contract Management Command in Mesa, Arizona. We conducted our review from January to June 1998 in accordance with generally accepted government auditing standards. As you know, the head of a federal agency is required by 31 U.S.C. 720 to submit a written statement of actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this report. A written statement must also be submitted to the Senate and House Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report. 
We are sending copies of this report to the Chairmen and Ranking Minority Members, Senate and House Committees on Appropriations, Senate Committee on Armed Services, House Committee on National Security, Senate Committee on Governmental Affairs, and the House Committee on Government Reform and Oversight; the Director, Office of Management and Budget; and the Secretary of the Army. We will also provide copies to others upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report were Robert J. Stolba, Charles Burgess, Nora Landgraf, William T. Woods, and Margaret L. Armen.

The following are GAO’s comments on the Department of Defense’s (DOD) letter dated July 28, 1998.

1. We are not persuaded by DOD’s assertion that the key performance parameters for VROC and missile load should be evaluated independently. While DOD’s documentation for the Apache Longbow program has been inconsistent in discussing Apache Longbow requirements, the ORD, Acquisition Program Baseline, Defense Acquisition Executive Summaries, Selected Acquisition Reports, and the aircraft production contract itself are uniform in that they simultaneously address VROC and missile load in discussing the Apache Longbow’s operational missions and, therefore, clearly demonstrate the interrelationship of VROC and missile load. DOD’s response attests to this interrelationship when it refers to VROC and missile load in the Acquisition Program Baseline as the basis for its VROC calculation.

2. Our analysis clearly shows that the Apache Longbow cannot meet the VROC requirement in the combat mission configuration when carrying a full fuel load and 12 missiles—either as specified in the ORD or validated by the Joint Requirements Oversight Council. The issue addressed in our report is whether or not the Apache Longbow can meet its required VROC while carrying the necessary missile load to accomplish its required mission.
Our report documents that the Apache Longbow with the required full fuel load is too heavy to meet the VROC requirement for the combat mission specified in the ORD. The VROC requirement in the ORD is 450 feet per minute—the key performance parameter. This ORD key performance parameter remains the same whether VROC is measured with 4 air-to-air missiles and 8 Hellfire missiles or the validated requirement for 12 Longbow Hellfire missiles. The VROC cannot be met under either condition. DOD did not present support for its contention that the Acquisition Program Baseline shows that the Apache Longbow can achieve the required VROC. In fact, DOD is incorrect in its assumption that the November 1995 full-rate production Baseline calls for the calculation of VROC based only on eight Hellfire missiles. The Baseline that DOD cites refers to only one mission—the primary mission. According to the October 1995 Acquisition Decision Memorandum, the full-rate production Baseline should have defined this mission based on the VROC and missile load key performance parameters validated by the Joint Requirements Oversight Council in October 1994. Significantly, the Army recognized in the Baseline that the required VROC in the primary mission with 12 Longbow Hellfire missiles could not be achieved unless fuel or ordnance are reduced. Without these reductions, the helicopter’s VROC, in the primary mission, would be significantly lower than 450 feet per minute. While the Army did not update the ORD to reflect the key performance parameters, it did modify the Apache Longbow Selected Acquisition Report, as early as December 1994, to reflect the VROC and missile load key performance parameters that the Council validated. Finally, the September 1995 independent evaluation of the Apache Longbow weapon system by the Army Materiel Systems Analysis Activity reported that neither version of the airframe could meet VROC requirements without reducing weight by about 590 pounds.

3. We disagree with DOD’s assertions regarding the VROC performance of the non-radar-equipped Apache Longbow. The ORD states that an adequate VROC to ensure combat effectiveness must be maintained with or without the radar. Further, when discussing the Apache Longbow’s maneuverability and agility, the ORD states that the performance of the non-radar-equipped aircraft should equal or exceed that of the radar-equipped aircraft.

4. The ORD and the Director, Operational Test and Evaluation’s report on the Apache Longbow show that the Army expects to use the non-line-of-sight radio for transferring targeting data between aircraft. The ORD states that the primary use of digital data will be for targeting purposes. This data can then be shared with other non-radar-equipped helicopters for warfighting, situational awareness, and to coordinate battlefield information. The ORD specifies that this communication capability requires non-line-of-sight communications, and the Army plans to provide this capability with the ARC-220 radio. The Director’s 1995 report states that varied or obstructed terrain caused significant communication problems, which indicates that the lack of non-line-of-sight communications capability resulted in the inability to pass target data from radar-equipped Apache Longbows to non-radar-equipped helicopters. In another phase of operational testing, the flat, open terrain, which afforded clear line-of-sight communications, was cited as the main reason for a lack of communication problems. Furthermore, DOD’s assertion that the helicopter can transfer high-volume targeting data over the existing communications suite is only applicable when aircraft are in line of sight. Without the non-line-of-sight communications capability that the ARC-220 radio provides, the Apache Longbow will continue to experience target handover problems when operating in environments other than a flat, open terrain.
Because of the Army’s plan to reduce ARC-220-equipped helicopters by 50 percent and evidence that indicates the fielding delay will be longer than DOD reports, we continue to believe that there will be an overall reduction in the Apache Longbow’s planned lethality and survivability.
GAO reviewed the Army's Apache Longbow helicopter program to determine if its operational requirements will be met, focusing on whether the Apache Longbow will meet: (1) the validated key performance requirement for vertical rate of climb (VROC); and (2) the requirement to transfer target data between Apache Longbow helicopters. GAO noted that: (1) the Apache Longbow program needs to be reassessed because the helicopter does not meet two key user requirements; (2) the Army's 227 radar-equipped Apache Longbow helicopters will be too heavy to achieve the validated VROC requirement of 450 feet per minute in the combat mission configuration when carrying the required 12 Longbow Hellfire missiles and a full fuel load; (3) as a result, the helicopters will not have acceptable levels of maneuverability and agility to successfully operate in combat; (4) even though the Apache Longbow is reported to have significantly greater overall capability than the original Apache, its VROC and corresponding maneuverability will be less than that of the original Apache; (5) Army plans to modify the system will add weight and therefore exacerbate this problem; (6) the impact of weight on the ability of non-radar-equipped Apache Longbow helicopters to achieve VROC performance requirements is even greater because of the less-powerful engines used in these helicopters; (7) at initial operational capability, the Apache Longbow will not have a radio that will allow it to transfer target data between helicopters when concealed or not in the line of sight; (8) unresolved technical issues have delayed the radio's development; (9) the Army plans to install the non-line-of-sight radio on only one-half of the total Apache Longbow helicopter fleet; and (10) the lack of this capability throughout the fleet results in an overall reduction in lethality due to the inability to transfer target data between Apache Longbow helicopters and decreased survivability caused by the greater exposure to hostile forces.
Over the past 20 years, there have occasionally been federal lapses in appropriations that led to government shutdowns. The longest of these shutdowns lasted 21 calendar days, from December 16, 1995, to January 6, 1996. The most recent government shutdown—the subject of this report—occurred at the beginning of fiscal year 2014 and lasted for 16 calendar days, from October 1 to 16, 2013. According to the Office of Management and Budget (OMB), this shutdown resulted in agencies furloughing federal employees for a combined 6.6 million work days. Furloughed federal employees were retroactively paid, and OMB estimated the cost to be about $2 billion.

Types of Employee Furloughs

A furlough is the placing of an employee in a temporary nonduty, nonpay status because of a lack of work or funds, or for other nondisciplinary reasons. There are two types of furloughs:

A shutdown furlough occurs when there is a lapse in appropriations, and can occur at the beginning of a fiscal year, if no funds have been appropriated for that year, or upon expiration of a continuing resolution, if a new continuing resolution or appropriations law is not passed. In a shutdown furlough, an affected agency would have to shut down activities funded by annual appropriations that are not excepted by law.

An administrative furlough is a planned event by an agency designed to absorb reductions necessitated by downsizing, reduced funding, lack of work, or any budget situation other than a lapse in appropriations.

Office of Personnel Management, Pay & Leave Furlough Guidance, accessed Sept. 10, 2014. See also Office of Personnel Management, Guidance for Shutdown Furloughs (Oct. 11, 2013) and Office of Personnel Management, Guidance for Administrative Furlough (June 10, 2013).

Congress may provide budget authority to agencies through the passage of appropriations acts, which permits agencies to incur obligations to spend federal funds.
When one or more of the appropriations are not enacted, a funding gap may result and agencies may lack sufficient funding to continue operations. Funding gaps occur most commonly at the beginning of a fiscal year when new appropriations, or a continuing resolution, have not yet been enacted. In this context, a gap may affect only a few agencies (if, for example, only one appropriation act remains unenacted as of October 1) or agencies across the federal government. We have previously reported that funding gaps, actual or threatened, are both disruptive and costly. One of the key issues related to a government shutdown is determining what activities and programs an agency is permitted or required to continue when faced with a funding gap and resulting shutdown. Except in certain circumstances when continued activities are authorized by law, the Antideficiency Act generally restricts agencies from continuing operations funded by annual appropriations during a government shutdown.

Activities Which May Continue During a Government Shutdown

Activities that may continue during a shutdown fall into two broad categories. The first category is obligations authorized by law. Within this category, there are four types of exceptions:

Activities funded with appropriations that do not expire at the end of the fiscal year, that is, multiple-year and no-year appropriations.

Activities authorized by statutes that expressly permit obligations in advance of appropriations, such as contract authority.

Activities “authorized by necessary implication from the specific terms of duties that have been imposed on, or of authorities that have been invested in, the agency.” For example, there will be cases where benefit payments under an entitlement program are funded from other than 1-year appropriations (e.g., a trust fund), but the salaries of personnel who administer the program are funded by 1-year money.
As long as money for the benefit payments remains available, administration of the program is, by necessary implication, authorized by law, unless the entitlement legislation or its legislative history provides otherwise or Congress takes affirmative measures to suspend or terminate the program.

Obligations “necessarily incident to presidential initiatives undertaken within his constitutional powers,” for example, the power to grant pardons and reprieves. This same rationale would apply to legislative branch agencies that incur obligations “necessary to assist the Congress in the performance of its constitutional duties.”

The second broad category reflects the exceptions authorized under the Antideficiency Act (31 U.S.C. § 1342)—emergencies involving the safety of human life or the protection of property.

In the event of a government shutdown, OMB is responsible for ensuring that agencies have addressed the essential actions needed to effectively manage the shutdown; OMB does so by providing policy guidance and shutdown-related instructions. In particular, OMB Circular No. A-11 directs federal agencies to develop contingency plans for use in the event of a government shutdown and to update those plans on a recurring basis. On September 17, 2013, OMB published guidance for executive branch agencies on how to prepare for and operate in the event of a government shutdown (including guidance on grant and contract administration) stating that agencies should be prepared for the possibility of a lapse in appropriations and resulting government shutdown. The guidance also instructed them to update their shutdown plans consistent with section 124.2 of OMB Circular No. A-11, which directs agencies to include the following in their shutdown plans: a summary of activities that will continue and those that will cease, the amount of time needed to complete shutdown activities, the number of employees on-board prior to shutdown, and the number of employees to be retained during the shutdown, among other information.

Agency Shutdown Contingency Planning

Given the inherent uncertainty of a lapse in appropriations, an agency’s contingency plan should include actions to be taken during a short lapse in appropriations (1 to 5 days) as well as identify changes necessary should the lapse continue for an extended period. A plan should designate personnel responsible for implementing and adjusting the plan in response to duration or changes in circumstances that may arise. The contingency plan should include a summary of significant agency activities that will continue and those that will cease as a result of the lapse in appropriations, the total number of agency employees expected to be on-board before implementation of the plan, and an estimate of the time (to the nearest half day) needed to complete shutdown activities. If shutdown activities cannot be completed in a half day, the time to completion, the number of employees necessary to accomplish shutdown, and the specific nature of the activities being performed should be identified. Additionally, the contingency plan should address the total number of employees retained for each of the following categories:

Those whose compensation is financed by a resource other than annual appropriations.

Those necessary to perform activities expressly authorized by law.

Those necessary to perform activities necessarily implied by law.

Those necessary to the discharge of the President’s constitutional duties and powers.

Those necessary to protect life and property.
The plan should also describe in detail for each component within the agency the total number of employees in the component to be on-board before implementation of the plan. Additionally, it should identify the total number of employees to be retained in the component under the plan:
- Exempt employees whose compensation is financed by a resource other than annual appropriations.
- Employees who perform excepted activities expressly authorized by law.
- Employees who perform activities necessarily implied by law.
- Employees who are necessary to the discharge of the President’s constitutional duties and powers.
- Employees necessary to protect life and property.

Lastly, the agency should explain the legal basis for each of its determinations to retain the categories of employees, including a description of the nature of the agency activities in which these employees will be engaged. In its July 2014 guidance update, the Office of Management and Budget (OMB) directs agencies to consider preparations for resumption of activities. The guidance suggests the agency plan describe actions necessary for the resumption of orderly operations. This description should include methods for notifying employees that the furlough has ended; supervisory flexibilities to address employees who have difficulty returning when operations resume; plans for restarting information technology systems; and procedures for resuming program activities, particularly grants and contracts. The 2013 government shutdown came at a time of continuing budgetary uncertainty, which we previously reported limits the operations of federal agencies. To describe the effects of the shutdown, particularly on agency operations and services, grants, and contracts, we selected three federal departments and components (see table 1).
Across the federal government, departments and components receive budget authority that is available for obligation for varying periods, such as no-year, multi-year, and annual funds. The duration of budget authority is important to agencies in a government shutdown, as differing durations afford differing degrees of flexibility. Departments and components also have varying degrees of transfer authority and reprogramming limitations, which likewise afford agencies differing degrees of flexibility in the event of a shutdown. For example, DOE generally receives no-year funds (i.e., appropriations available for obligation without fiscal year limitation), and as a result, DOE may use funds not yet obligated from prior fiscal years. On the other hand, most of DOT’s appropriations are available for one year. Generally speaking, appropriations available for periods longer than one year, such as multi-year and no-year funds, afford agencies more flexibility, which may help them continue some activities during a government shutdown. Federal grants and contracts are important tools through which the federal government provides program funding to implement initiatives and provide goods and services. At $546 billion in fiscal year 2013, federal grants to state and local governments accounted for about 16 percent of total federal outlays. The National Association of State Budget Officers reported that federal funds comprised an estimated 31 percent of state spending in fiscal year 2012. HHS is the largest federal grant-making department, and according to USASpending.gov, HHS and DOT awarded about $337 billion and $56 billion in grants in fiscal year 2013, respectively. In addition, federal procurement spending was over $459 billion in fiscal year 2013. According to USASpending.gov, HHS and DOE awarded $20 billion and $24 billion in contracts in fiscal year 2013, respectively.
DOE is the largest civilian contracting department and spends an estimated 90 percent of its annual budget on contracts. The shutdown affected some of the selected departments’ and components’ operations and services because of the activities undertaken to prepare for furloughing federal employees and to determine which programs could continue. According to some officials at the selected departments and components, planning for the shutdown took resources away from administering their daily operations and services. Officials said that quantifying the total time spent or the overall cost of shutdown preparations and implementation was difficult. However, some officials said time spent preparing for the shutdown led to a loss of productivity. For example, for the last 2 weeks of September and through the first 2 weeks of October, EM management was consumed by work concerning the shutdown. A department-level official stated that DOE spent an extraordinary amount of time reprioritizing its budget to help ensure that as many programs and functions as possible remained open and to help minimize the impact of the shutdown on the agency’s operations. Similarly, at the component level, during the month of September EM headquarters began collecting information from each field office in order to estimate how long each field office could sustain operations with current funds. EM headquarters asked the field offices for information such as unobligated balances available to support continued operation into fiscal year 2014, the number of days supported by available balances, and the number of federal and contractor employees affected. EM officials estimated that budget, procurement, and management officials spent at least 50 percent of their time in September preparing for the shutdown.
In their contingency plans, the selected departments documented the planned number of employees who were to be furloughed and those who should continue to work during the shutdown (referred to as excepted employees in some cases). Specifically, DOE, HHS, and DOT planned to furlough 96 percent, 52 percent, and 33 percent of their employees, respectively, as shown in table 2. According to officials at these departments, the contingency plans were reasonably accurate representations of how the shutdown was implemented in terms of the total employees actually furloughed and those who continued to work. However, as discussed later and shown in the table below, DOE officials told us that they were able to avoid furloughing any EM federal employees because prior year balances were available to fund program direction accounts. Similarly, DOE officials said the Energy Information Administration was the only office within DOE that furloughed employees during the shutdown. During the shutdown, management officials who were not furloughed at the selected departments and components reevaluated their contingency plans to determine which employees were needed for excepted functions that arose during the shutdown. As a result, the number of furloughed employees changed during the shutdown. For example, according to NIH officials, mailroom employees were initially furloughed, but NIH realized that some mailroom employees were needed to pick up mail that was delivered to NIH, including bills that needed to be paid to keep the facilities minimally operational. NIH brought some mailroom employees back to work as a result. The selected departments and components reported that the shutdown was disruptive to their operations and services, resulting in immediate disruptions to selected programs. For example, at NIH, protecting human life and property continued to be a focus when deciding which activities to continue or close.
During the shutdown, NIH’s Clinical Center, also called the House of Hope, reduced patient morale services such as library services and pet therapy because of employee furloughs. According to NIH officials, while it is difficult to measure the influence of these services on patients’ health and well-being, these services help improve the morale of patients at the center. As issues arose in patient care at the NIH Clinical Center during the shutdown, specialists or professionals were brought in to take care of the need and then furloughed when they completed the task. In addition, according to NIH officials, the clinical trials registry was initially closed because of the shutdown but was reopened during the shutdown. At the beginning of the shutdown, the employees who manage ClinicalTrials.gov were furloughed. While the public side of ClinicalTrials.gov was still accessible, the submission side of the database was closed, preventing new trials from being registered and existing entries from being updated. According to NIH officials, clinical trials may be the only hope for some patients with life-threatening diseases. Also, agency officials said that NIH received concerned calls from members of Congress and institutions performing clinical trials, as well as queries from the media. To handle the posting of new clinical trial registrations and critical trial updates to ClinicalTrials.gov, NIH recalled a small number of ClinicalTrials.gov employees on October 4, 2013, and reopened the data submission system. The ClinicalTrials.gov employees continued to facilitate registration and other critical trial updates through the remainder of the shutdown. A message was posted on the website and sent to NIH’s extramural community as well as to registered users of ClinicalTrials.gov to inform them of this decision.
In making the decision to reopen ClinicalTrials.gov, NIH officials said they considered a number of factors, such as the uncertainty of the length of the shutdown and the fact that the registration of trials in ClinicalTrials.gov provides critically important information to patients seeking opportunities to participate in clinical trials, including important details about enrolling. In terms of immediate disruptions within DOT, the Merchant Marine Academy, funded by DOT and administered by the Maritime Administration, was closed. According to DOT’s contingency plan, an extended lapse in appropriations could significantly disrupt the Merchant Marine Academy’s academic schedule, making adjustments difficult to accommodate. To lessen the impact of the closure, the academy revised its academic calendar to allow eligible students to graduate as previously scheduled. In terms of immediate disruptions to FTA’s operations and services, FTA officials stated that in the short term the shutdown may have affected the delivery of public transportation services at the local level in certain parts of the country. To identify programs that would be affected by the shutdown, the selected departments used shutdown guidance from OMB and determined the funding sources for their programs in collaboration with their general counsel, budget officers, and human resources officials. See figure 1 for examples of programs or activities within the selected departments that did not continue and those that did continue during the shutdown. The shutdown created uncertainty for the three selected departments regarding how employees and programs would be affected. Officials at the selected departments said they were concerned about how long the shutdown would last and that the uncertainty about its length affected employees.
However, several mitigating factors in the preparation for and implementation of the shutdown lessened the uncertainty of its effects on the operations and services of the three departments and their workforces. These factors include (1) experience with prior budget uncertainties, including past preparations for potential shutdowns; (2) funding flexibilities, such as multi-year appropriations; and (3) ongoing communication internally and with OMB staff and OPM officials. According to officials at the three departments, employees were familiar with shutdown guidance and procedures from previous potential shutdown preparations and from planning for operating under a continuing resolution or in other environments of budgetary uncertainty. For example, according to DOT officials, DOT has experience with managing budget uncertainty from addressing the sequestration of funds, shortfalls within its Highway Trust Fund, and past preparations for possible government shutdowns. DOT officials explained that they have planned extensively for continuity of operations in times of budgetary uncertainty, working with their management team—officials from human resources, budget, general counsel, communications, and the modal administrations—to assess and plan for potential situations that would affect the department’s budget. Similarly, according to DOE officials, each August and September DOE prepares to address potential budgetary uncertainty that could affect its operations and services, in anticipation of a potential continuing resolution or other situation affecting funds. The officials indicated that these yearly preparations helped ensure that their planning and processes for the shutdown were implemented and that the department was well prepared to keep its operations running as funds allowed. Specifically, DOE has an order that it follows if Congress fails to pass appropriations for the new fiscal year by October 1.
The order describes the procedures DOE will follow (1) for continuing operations using available balances, such as unobligated balances from unexpired prior-year appropriations, and (2) upon exhaustion of all available balances, for continuing only those functions excepted from a shutdown. At the component level, although EM reprioritized its budget, officials said they were concerned about the length of the shutdown. Having funding flexibility beyond annual appropriations for some of their programs helped the selected departments manage the effects of the shutdown on their operations and services, given that some programs were able to continue throughout the shutdown, which also reduced the number of employees furloughed. For example, most of DOE’s appropriations are multi-year or no-year. DOE’s programs with multi-year and no-year funding were allowed to continue their activities during the shutdown by expending unobligated balances of prior year money still available for obligation. Each organization or program within DOE planned to operate during the shutdown until available funds to pay federal employees were exhausted. According to DOE officials, prior to the shutdown, the budget office reevaluated budget priorities in order to have funds available to meet payroll and to help ensure that the highest priority activities, such as the oversight of nuclear materials at EM, could continue during the shutdown. Several HHS components have mandatory spending programs or operate programs using user fees that were not affected by a lapse in annual appropriations. Therefore, the employees managing the programs in these components were generally considered excepted. For example, the Administration for Children and Families and the Centers for Disease Control and Prevention have mandatory spending programs such as the Health Profession Opportunity Grants and the World Trade Center health program, respectively, which continued during the shutdown.
Also, the Food and Drug Administration continued activities related to its user fee funded programs including activities in the Center for Tobacco Products. DOT also has several activities that are funded by other sources including multi-year appropriations or contract authority. For example, the Federal Aviation Administration’s Airport Improvement Program is funded through contract authority, while the Maritime Administration used reimbursable Department of Defense funding to continue its Ready Reserve Force. The Highway Trust Fund, which funds several DOT programs, was established by law to hold federal highway-user taxes that are dedicated for highway and transit projects and is also a multi-year fund. Although FTA furloughed most of its federal employees, it used prior year emergency supplemental appropriations that were available to continue its Hurricane Sandy activities. Officials at the selected departments commented that communication within their departments and with OMB and OPM was very important to the preparation for and implementation of the shutdown. According to these officials, prior to the shutdown, some of the departments posted furlough guidance and frequently asked questions from OMB and OPM on their websites to ensure employees were provided with the latest information regarding the potential for a shutdown. During the shutdown, management officials at the three departments who were not furloughed reevaluated their contingency plans to determine which employees were needed for excepted functions that arose during the shutdown. According to officials at the three selected departments, all decisions were made in collaboration with their general counsel, budget officers, and human resources officials, and in line with guidance from OMB. At DOT, prior to and during the shutdown, the management team met to determine what actions were needed to maintain operations and services in the event of a government shutdown. 
The management team was convened in August 2013 to start planning and met frequently in the days before and during the shutdown. According to DOT officials, some days multiple meetings were held to make the necessary determinations, and after the team reached consensus on a particular issue the requisite action was taken. Specifically, DOT officials said they continually monitored how employees were affected by the shutdown through their daily management meetings. According to DOT officials, DOT developed an emergency furlough recall form to document when an employee needed to be recalled temporarily back to duty because of an emergency situation, such as services needed to protect life and property. The form allowed management to document its request for an employee recall in addition to outlining the internal steps that needed to be followed to obtain approval and document the process. According to DOT officials, management used a tracking log to ensure these emergency recalls were internally monitored throughout the shutdown. DOT did not update its contingency plan after the shutdown. According to HHS officials, throughout the shutdown key management officials were in constant communication to monitor the effects of the shutdown on department operations and services, including employee furloughs. For example, the number of employees furloughed at HHS changed according to the circumstances of each day of the shutdown, according to HHS officials. Employees who completed the end-of-fiscal-year financial closeout for HHS were not initially furloughed, but after they finished the financial closeout they were furloughed. While HHS did not document the number of employees furloughed each day during the shutdown, HHS officials said that key officials made decisions orally as they were in constant communication throughout the shutdown.
HHS updated its contingency plan during the shutdown on October 10, 2013, to reflect the number of employees actually involved in the orderly phase-down and suspension of operations. Specifically, on October 1, HHS estimated that 1,103 employees were engaged in this activity, while on October 10, HHS estimated that 429 employees were actually engaged in this activity. HHS did not update its contingency plan after the shutdown. DOE officials said ongoing communication with management at the department and component level allowed them to closely monitor the shutdown and make decisions as needed to keep programs operational based upon the availability of funding, as discussed earlier. For example, although DOE planned to furlough some of the EM staff at headquarters and at each site that it oversees, EM officials said that their federal employees were not furloughed as projected in the contingency plan. DOE officials said the Energy Information Administration was the only component that furloughed a number of employees during the shutdown. According to departmental officials, DOE managed its contingency plan throughout the shutdown in an effort to assess the needs of the department and components regarding furloughs for employees at EM and its other components. However, DOE did not formally update its written plan after the shutdown. Officials at the three departments we reviewed said the daily communication with OPM and OMB to address questions about the shutdown was beneficial and helped lessen the uncertainty of the situation. According to OMB staff and OPM officials, OMB and OPM had daily telephone communication with agencies’ management to clarify guidance and answer questions, such as those relating to recalling furloughed employees. Issues discussed included unemployment compensation for furloughed employees and the treatment of the Columbus Day holiday for excepted employees.
In addition, OMB staff indicated that they addressed questions from agencies regarding how to communicate shutdown information to their employees through these phone calls. Prior to the shutdown, OMB provided guidance to agencies instructing them to update their contingency plans and to determine the number of furloughed and excepted employees. During the shutdown, OPM posted frequently asked questions on its website to provide details on how employees’ pay, leave, and other personnel functions would be affected by the shutdown. Consistent with Circular No. A-11, OMB staff told us that after the shutdown, they did not direct agencies to update their contingency plans or to document in another way how the shutdown was planned for, managed, or implemented in terms of lessons learned for future reference. For a longer shutdown, such as one lasting more than 5 days, documenting what actually happened once a shutdown is over, including how operations were fully resumed, could help agencies better prepare and plan in the event of a future shutdown. Federal standards indicate that agencies should identify, record, and distribute pertinent information to the right people in sufficient detail, in the right form, and at the appropriate time to enable them to carry out their duties and responsibilities. Without documentation of how the shutdown was implemented, agencies may not have timely, appropriate information that could help inform planning and implementation efforts in the event of a future shutdown. The selected departments did not report any longer-term effects from the shutdown on operations and services and indicated that it is difficult to assess the effects of the shutdown in isolation from other budgetary events, such as sequestration. However, officials raised concerns about how the timing of the shutdown, occurring soon after sequestration, affected the morale of their employees.
The impacts of sequestration on the workforce, which included reduced employee travel, training, and monetary awards, affected employee morale. According to department officials, this effect on employee morale was further exacerbated by the lapse in appropriations. According to HHS officials, once the shutdown ended, the department’s functions were fully reestablished and operations were back to normal within 6 months of reopening. Department officials do not anticipate any longer-term effects from the shutdown on departmental operations and services. However, according to HHS officials, the morale of HHS employees was low as a result of being furloughed. Also, the longer the shutdown lasted, the more unexpected challenges officials had to manage. For example, furloughed federal employees were eligible to apply for unemployment insurance benefits, which the Department of Labor is responsible for administering. HHS needed guidance on how to instruct its employees to apply for unemployment insurance since each state has different rules and procedures. HHS officials said they could not provide general guidance to all employees. After the shutdown, the department had to work with OPM to instruct its employees on how to repay the insurance benefits since they received back pay. At the component level, NIH officials stated that the longer-term effects of the shutdown are much harder to detect than any immediate disruptions to the operations of the agency. According to NIH officials, there are also less tangible impacts, such as external partners’ decreased trust and confidence in NIH to continue funding their research after it is approved. According to DOT officials, they are unable to identify any longer-term effects of the shutdown on their operations and services at the department level. However, DOT officials commented that the shutdown lowered the morale of their employees.
Officials explained that furloughed employees were concerned about when they could return to work and whether they would be paid. DOE officials stated that the shutdown did not have any longer-term effects on operations and services at the department level. Because the shutdown came early in the fiscal year, the impact on the department’s operations and services was minimized. A DOE official indicated that, as a result of the department’s annual August-September data collection on programs that could be affected by a lapse in appropriations, DOE officials were able to reevaluate budget priorities to maintain programs in order to minimize effects on operations and services. The purpose of the data collection is to determine unobligated balances and which programs may have insufficient funds for payroll on October 1. The effects of the shutdown on contracts and grants management were more pronounced at the component level than at the department level for the three selected departments we reviewed. Department-level officials provided contract and grants management guidance from OMB to their components. For example, DOE, HHS, and DOT provided guidance to program offices at EM, NIH, and FTA, respectively, that were responsible for implementing that guidance. DOT reported that grants management is typically coordinated at the component level, so managing shutdown-related activities for contracts and grants at the component level was consistent with standard practices. HHS reported the same for contract and grants management. Officials at DOT and HHS stated that conducting program oversight at the component level helped minimize the effects of the shutdown at the department level. DOE noted disruption in its departmental oversight of contracting activities in its field offices. All NIH and FTA grants management officials were furloughed and were generally unavailable to assist grantees.
According to NIH officials, appropriate assistance would have been provided if an incident had arisen in which a grant supported an excepted activity, such as the protection of human life and property, but this situation did not occur. At NIH, the grant peer review process used to assess grant applications had to be rescheduled, adding to potential grant recipients’ uncertainty about the timing of their research. NIH had to reschedule the review process for over 13,700 grant applications. According to NIH, these applications were scheduled to be reviewed by over 8,000 reviewers in over 400 panels during the time of the shutdown. According to NIH, the average delay from the original panel date to the rescheduled panel date was 46 days, but after the shutdown NIH and the external peer reviewers were able to act quickly and complete this process in time for all but 31 applications to meet the next milestone in January 2014. NIH reported that deferring 31 applications to a later date is well within the typical range of applications normally deferred for reasons other than a government shutdown.

The National Institutes of Health’s Peer Review Process for Grant Applications

The National Institutes of Health (NIH) handles approximately 80,000 research applications and engages approximately 20,000 reviewers per year during three annual peer review cycles. The NIH dual peer review system is mandated by statute, and all applications are subject to peer review. The first level of review is carried out by a Scientific Review Group composed primarily of nonfederal scientists who have expertise in relevant scientific disciplines and current research areas. Selection of scientists for peer review panels for the various study sections is a multi-step process, and it takes months to vet and balance scientific expertise. NIH officials coordinate the nomination process, peer reviewer vetting, and meeting schedules for the peer review cycle.
The second level of review is performed by National Advisory Councils or Boards within NIH’s Institutes and Centers. Councils are composed of both scientific and public representatives chosen for their expertise, interest, or activity in matters related to health and disease. Appointed members usually serve a 4-year term (or, in the National Cancer Institute, usually a 6-year term) and require approval by the Secretary of the Department of Health and Human Services or, in some cases, by the President of the United States. Only applications that are recommended for approval by both the Scientific Review Group and the Advisory Council may be recommended for funding. Final funding decisions are made by the NIH Institute and Center Directors. See 42 U.S.C. § 289a. NIH outlines the requirements of peer review in Scientific Peer Review of Research Grant Applications and Research and Development Contract Projects, 42 CFR Part 52h.

NIH and FTA grant recipients’ access to funds varied depending on the agencies’ grant processing systems and policies as well as the timing of grant milestones. While grants management officials were furloughed and unavailable, according to NIH officials, the bulk of NIH grants allowed recipients to continue to draw down funds during the shutdown. The exception was if the drawdown request triggered a Payment Management System internal control flag. For example, some of the controls compare requests to previous quarter draws, check whether reporting requirements are all met, and verify that the funds requested are within the grant’s budget period. NIH officials said that two holds were triggered during the shutdown for NIH grants, and both were resolved by the excepted employees. Among all Payment Management System users, 561 payments triggered a hold during the shutdown. According to HHS officials, of these 561 payment requests, 84 requests valued at $65 million were paid despite the hold and 477 requests valued at $165 million were not paid.
Grantees were notified about each payment that was rejected and instructed to resubmit after the shutdown ended. Like NIH, FTA grants management officials were furloughed and unavailable to answer questions or provide assistance to grant recipients. However, FTA did not process any new grant awards or make payments to its existing recipients during the shutdown. According to FTA officials, this had a minimal effect because FTA’s grant processing system is typically offline and unavailable in early October for end-of-year closeout processing, and no new grant awards are processed. Specifically, FTA officials told us that FTA had not awarded any grants from October 1 to 16 in the 5 years prior to the shutdown. FTA does, however, normally make payments on existing grants in early October, but no payments were made during the shutdown because employees were furloughed. While contract activities generally continued at EM and NIH, some contractors experienced disruptive personnel actions and program challenges. For example, EM did not furlough any federal contracting employees during the shutdown because most of its programs were exempt at the beginning of the shutdown and the agency had multi-year appropriations available. However, several EM contractors that operate and maintain EM facilities experienced reductions because EM issued stop work orders. These reductions included layoffs or forced use of annual leave for 61 contracted employees at EM’s Portsmouth Paducah Project Office and 1,715 contracted employees at its Savannah River Remediation Site. Agency officials also told us that the looming possibility of employee furloughs and the potential termination of contracts decreased morale among contractor employees in the weeks leading up to the shutdown. Although the availability of multi-year funding gave EM some flexibility to fund contracts, EM officials were still limited in their ability to manage contracts and meet programmatic requirements.
EM officials told us that if EM had had more advance notice of a potential lapse in funding, federal and contractor personnel would have been better able to strategize about how to prepare for the funding lapse and would have slowed work at the end of fiscal year 2013 to carry over more funds into fiscal year 2014 so that work on contracts could continue. EM officials also reported challenges with keeping equipment—particularly equipment for nuclear processes—running and staffed at necessary levels during the shutdown. For example, officials told us that at the Savannah River Remediation Site they had to reduce contractor support for operating melters—equipment that processes nuclear waste into more stable forms, such as solid glass, for safer long-term storage—to the lowest possible levels during the shutdown. The officials also said that if the shutdown had continued much longer, existing funding for even the minimum level of staffing and operation would have been in jeopardy. Officials told us that the melters can never be shut down because if their temperature fell below a certain threshold, they would not restart and would become permanently unusable. The officials said that DOE was in the process of reviewing whether to except the melters from the shutdown if it had continued past October 31, 2013. Conversely, NIH officials told us that most of their contracting officials were furloughed during the shutdown and that limited staffing and systems made it hard to complete excepted activities. For example, some of the agency’s computer systems that are used to locate and assemble contract data were unavailable during the shutdown, and many of the officials who run reports and analyze data were furloughed. As a result, the excepted officials had to search manually through other means to obtain the information they needed to inform decisions, such as identifying contract options that were set to expire.
Officials reported that one NIH contract had an annual option that expired during the shutdown, and 24 additional contracts had options that would also have expired if the shutdown had continued through November 1, 2013. The contract whose option expired—an administrative support service contract—received a stop work order during the shutdown and resumed work after the government reopened with no additional implications. After the shutdown ended, NIH officials reported having sufficient time to exercise the options of the 24 other contracts. Unlike EM, NIH did not have the flexibility of multi-year or no-year funding to maintain contracting activities, according to NIH officials, nor did it have a reserve of available funds from previous years to support contracts during the shutdown. However, according to NIH officials, the agency’s primary system for managing contracts is typically offline for 6 days at the beginning of each fiscal year for scheduled maintenance, during which time only limited contracting activities are logged into the system. As a result of this scheduled downtime at the beginning of the shutdown, NIH officials reported that contracting activity was not significantly behind schedule when the government resumed full operations. NIH and EM officials indicated that they returned to normal levels of contracting activity and operations within a few months and do not foresee any longer-term effects from the shutdown on their contract activity and ability to meet future milestones. NIH officials reported that contracting activity and programs returned to normal levels within 1 month, and EM officials reported that some programs required 4 months to return to pre-shutdown levels of contract activity. According to EM officials, a longer shutdown would have severely affected EM contractors and their employees.
Specifically, EM officials reported that if the shutdown had continued for another week, reductions were expected for an additional 1,500 contractor employees from two contractors. Notices of the planned layoffs had already been issued for these contracts prior to the government reopening. Furthermore, EM contractors planned to issue personnel notices on October 21, 2013, which could have potentially affected another 5,000 contractor employees. FTA and NIH officials indicated that their agencies returned to normal scheduling and timing of grant activities soon after the shutdown. FTA officials said that they were able to recover quickly because grant milestones were not scheduled to occur during the shutdown. Further, FTA does not anticipate any longer-term effects on grants management activity as a result of the shutdown. According to NIH, the NIH employees and stakeholders involved in the peer review process and panels for grant applications worked extra hours to get back on schedule after the government reopened, but NIH did not track the hours or costs associated with this extra work. Because of these efforts, NIH officials said, the agency did not miss the next milestone, the second level of peer review held in January 2014, and anticipated no longer-term effects on grants activity. Officials from associations representing NIH grant recipients noted that while the longer-term effects of the shutdown on scientific research are hard to determine and cannot be quantified, there were other effects from the shutdown that may influence grant recipients and their future research. For example, officials from the associations that we interviewed said some of the grant recipients they represent reported decreased morale and questioned whether they should rely on federal assistance for their research going forward.
Additionally, one association official expressed concern that the shutdown could contribute to scientists working on smaller, more narrowly defined projects because of financial concerns about future grants, rather than pursuing more innovative, larger projects that would depend on a higher-cost grant for which the federal government would likely be the primary source of funding. Further, officials from associations representing grant recipients, including those that receive NIH grants, noted that there may have been a loss of research involving the life cycles of live organisms. Prior to the shutdown, NIH determined which officials should be excepted to ensure the welfare of all live specimens in NIH facilities, including over 1.8 million animals. However, according to an official from an association we interviewed, preserving live specimens at the grant recipient level could have been challenging if the recipient was not able to receive the funds needed to continue the research during the shutdown. The association officials we interviewed did not provide a specific example of this problem occurring during the shutdown. However, according to NIH, for some experiments a break in protocol would render the research property (both animate and inanimate) useless and require some of it to be destroyed. Some associations with expertise in federal contracts and grants reported that there was limited guidance from departments leading up to the shutdown about how their federal contractors and grant recipients would be affected by a potential shutdown. According to officials from NIH and some associations with expertise in federal contracts and grants that we interviewed, the lack of guidance contributed to the uncertainty that contractors and grant recipients experienced regarding how the shutdown would affect their ability to continue their work.
On September 17, 2013, OMB issued guidance to agencies on planning for a potential shutdown, which included instructions on how agencies should manage grants and contracts during a lapse in funding. However, according to officials at the three departments we reviewed, per OMB’s advice they did not notify their contractors and grant recipients about the possible shutdown until on or after September 26—3 business days before October 1. According to OMB staff, OMB was cautious about advising agencies to notify contractors and grant recipients until a shutdown appeared imminent, in order to avoid unnecessary disruptions to agency operations. According to association officials, grant recipients and contractors across the government had difficulty obtaining accurate, timely information about how their programs would be affected once the shutdown began. Grant and contract management officials who were furloughed could not communicate with grant recipients or contractors to respond to questions or provide other contract and grant support services. Further, officials were generally not available to correct problems with automated payments to grant recipients, as was the case with HHS’s Payment Management System. Problems could be addressed only if the program was funded with funds from a prior fiscal year (i.e., exempt from the lapse in appropriations) or was otherwise excepted, for example if it met the life and safety exception. Another source of uncertainty for federal grant recipients was whether agency programs could continue if their authorizations lapsed on September 30, 2013. The authorizations for certain large programs, including the Supplemental Nutrition Assistance Program (SNAP), managed by the U.S. Department of Agriculture, and Temporary Assistance for Needy Families (TANF), managed by HHS, expired on September 30, 2013.
SNAP continued during the shutdown by using funds authorized under the American Recovery and Reinvestment Act of 2009 that were available through October 31, 2013, and TANF continued by using unexpended prior year balances. Nevertheless, the National Association of State Budget Officers and Federal Funds Information for States reported that some states had questions as to whether they would be repaid for providing bridge funding to keep the programs operating past October 31, 2013. HHS issued a statement on September 30, 2013, saying that in the event of a shutdown, the underlying TANF statutes remained in place and states would be permitted to use their unspent federal TANF funds from prior years for expenditures allowable under the TANF statute. The Continuing Appropriations Act, 2014, allowed states to be reimbursed for expenses they incurred to keep grant programs operational. The effects of the shutdown on federal grant recipients varied based on the timing of milestones and the availability of funding. As the Congressional Research Service reported, it is difficult to determine the exact impact of a federal government shutdown on grant recipients because the impact will vary depending on the stage of each individual grant award at the time of the shutdown and the duration of the shutdown. Additionally, the impact may vary within a grant program and across programs and agencies. The timing of milestones was a key factor for both federal grant applicants and grant recipients because of the resulting effects on the review process timeline for grant applications and on the availability of funds for current grant recipients. Officials from associations with expertise in grants that we interviewed said that grant recipients with funding available during the shutdown reported minimal effects from the shutdown.
At a government-wide level, Grants.gov, the online portal for applying for federal grants, which is maintained by HHS, was online and accepting applications, but HHS’s Grants.gov support was decreased from 10 officials to 1. According to HHS, 6,725 applications were submitted to Grants.gov during the shutdown, over 10,000 fewer than during the same period in 2012. On October 21, NIH began processing the error-free applications submitted during the shutdown or in the two days prior to it. Two of the associations with expertise in federal grants that we interviewed—the National Association of State Budget Officers and Federal Funds Information for States—noted the importance of grant award type in determining the effects of the shutdown on grant recipients. For example, grant programs that receive mandatory advance appropriations for the first quarter, such as HHS’s State Medicaid Grants, and those funded in authorization acts, such as the Children’s Health Insurance Program, generally continued during the shutdown. Discretionary programs, such as FTA’s Transit Formula Grants, and programs functioning as appropriated entitlements without advance appropriations, such as HHS’s Social Services Block Grant, generally were affected by the shutdown, as funding was not available for new grants and furloughs of agency grants management officials could have affected recipients’ ability to draw down prior year grant funds. Once the shutdown ended, recipients of discretionary grants whose funding was disrupted by the shutdown were eligible for reimbursement from the agencies that manage the grants. There were exceptions to how grant programs functioned during the shutdown. Officials from associations with expertise in federal grants that we interviewed reported uncertainty among states as to the difference in funding for mandatory and discretionary federal grant programs.
Specifically, at least one state publicly reported that it was uncertain how a mandatory grant whose administrative costs were funded with a discretionary grant would be affected by the shutdown. For example, unemployment compensation is a federal-state partnership, and the state is entitled to a grant from the Department of Labor to cover the necessary costs of administering the program. While states were responsible for making unemployment compensation payments during the shutdown, states were unclear when they would be reimbursed for the administrative costs incurred during the shutdown. The degree of disruption caused by the shutdown varied among contractors at NIH and EM as well. Whether or not a federal contract was allowed to continue during the shutdown depended on a number of variables, including the availability of funds and the extent to which contract employees required supervision by federal employees or access to federal facilities to conduct their work. For example, NIH contractor employees who are co-located with federal employees, receive technical input from them, or require access to federal facilities were generally furloughed, while contracted EM employees generally continued their work at most sites. However, the DOE contingency plan noted that while multi-year and no-year funds would be used initially, DOE might need to review the activities of its contractors depending upon the length of the shutdown, the need for government oversight, and the availability of prior-year funding. Only those activities where the suspension of the function of the contract would imminently threaten the safety of human life or the protection of property would be permitted to continue. Officials from industry associations that represent federal contractors told us that the impact of the shutdown on contracted employees across the government varied.
Some contractor employees were reportedly laid off during the shutdown, and furloughed contractor employees were not necessarily paid by their employers during or after the shutdown. For furloughed contractor employees, several circumstances influenced whether they could be paid during the shutdown, including whether work was allowable under the terms of the contract they worked on, the availability of other assignments, and whether their company chose to or was able to compensate them from its own funds if contract funds were unavailable. Specifically, one association we interviewed told us that small businesses with federal contracts were reportedly less prepared to weather the shutdown than larger contractors with revenue sources other than the federal government. In addition to furloughs, some companies with affected contracts required employees to use leave or take paid or unpaid time off if they were unable to reassign employees to training or other nongovernment projects. Contractors who were waiting for their contracts to be renewed at the beginning of the fiscal year also faced added uncertainty as to whether their work would continue once the government reopened. While some grant recipients were able to obtain bridge funding so that programs and services funded through federal grants would continue, this was not sustainable over the long term and was complicated by the uncertainty of the situation. As the Congressional Research Service reported, in cases where the state is the primary grant recipient and is tasked with administering a program that is partially or fully funded by federal grants, the state has the discretion to decide whether to cover any gaps in federal funding to maintain normal program operations or whether to suspend program operations during the lapse in federal funding. In some cases, suspending program operations may include furloughing state employees.
Some states have large numbers of federally funded state employees, and at least eight states furloughed state employees during the shutdown. According to an official from one national association representing recipients of federal transportation grants that we interviewed, transportation entities that use the funding for operational costs were especially vulnerable. Some local paratransit services that provide transportation to seniors and persons with disabilities, including providing the elderly with rides to doctors’ appointments, expressed concerns about possible disruptions stemming from a lack of funding if the shutdown continued. The association officials said that during the shutdown some of these entities had developed plans to prioritize life-sustaining travel, but that their flexibility to continue operations was limited if the shutdown continued. Similarly, an association official representing research grant recipients noted that some NIH grant recipient institutions had made plans to bridge the funding during the shutdown, but that this was not an option for all grant recipients and was not sustainable over the long term. As one university wrote in a press statement, it could temporarily use cash reserves in unrestricted fund balances to cover any federal shortfall. However, doing so would affect its cash flow, and if the shutdown lasted a few weeks, the university expected it to have a direct effect on the student academic experience and the university’s ability to continue to do research. Affected funding would include graduate student funding, post‐doctorate funding, and funding for other academic professionals and faculty. Maintaining sufficient cash flow to bridge funds for federally funded grant programs during the shutdown was an issue for states. Even if a state wanted to use its own funds to continue services for a federally funded program, it might not have had sufficient liquid assets to do so quickly.
At least 12 states publicly reported that funding for certain grant programs was confirmed only through October, meaning the funding might not have been available if the shutdown had continued into November. Some of these states had expected to discontinue certain federally funded programs or services if the shutdown had extended into November, while others expressed uncertainty over how they would have proceeded if the shutdown had been longer. For example, the State of Kentucky had expected to run out of funds for several programs before November 1, including SNAP, TANF, the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), and the Low Income Home Energy Assistance Program. Similarly, the State of Colorado noted that if the federal government had not resumed operations or some other remedy had not been put in place, the Colorado WIC program would likely not have been able to serve participants in November. We identified and reviewed the analyses of several economic forecasters who predicted the effects of the October 2013 shutdown on the national economy; however, there were no studies or academic research focused on estimating all the economic effects of the shutdown. The forecasters we identified predicted that the shutdown would reduce real GDP growth in the fourth quarter of 2013—which included the time of the shutdown—by 0.2 to 0.6 percentage points as a result of lost productivity of federal workers. These forecasts were conducted either during or soon after the shutdown. None of the economic forecasters we interviewed has further analyzed the effects of the shutdown since it ended or plans to conduct future analysis as part of a formal study. The economic forecasters we interviewed do not anticipate the shutdown having any longer-term effects on national economic activity. In January 2014, the Bureau of Economic Analysis (BEA), the agency whose national economic statistics provide a comprehensive view of the U.S.
economy in the form of summary measures such as GDP, estimated the direct effect of the shutdown on real GDP growth in the fourth quarter of 2013 to be a reduction of 0.3 percentage points, which is within the range of estimates provided by the economic forecasters we identified. According to BEA, the shutdown did not have an impact on current-dollar federal compensation because Congress authorized retroactive compensation for furloughed workers. BEA derived this estimated effect on real GDP from real federal government compensation, one of the components of GDP. BEA adjusted its estimates of real federal government compensation for the reduction in hours worked by federal workers, based on the number of furloughed employees and the number of furlough days, to account for a reduction in services. Thus, BEA estimates show a decline in real GDP based on the lost productivity of the furloughed workers. BEA could not quantify the effects of the shutdown on other components of GDP and the national income and product accounts, such as personal consumption expenditures or private wages and salaries, because they were embedded in the source data that underlie BEA estimates and could not be separately identified. The economic forecasters we interviewed believed the other economic effects of the shutdown to be minimal, and they generally limited their analyses to the direct effects of the shutdown on government spending, specifically real federal compensation, as discussed earlier. For example, one economic forecaster who did factor in some of the indirect effects of the shutdown told us that he estimated the loss in real GDP growth to be 0.36 percentage points, of which only 0.03 percentage points could be attributed to the loss of support services or procurement of other services by the federal government, such as food and janitorial services.
Given the challenge of identifying and analyzing the detailed micro-level data that would let them capture all the ripple effects through the economy, these economic forecasters did not take all the indirect or multiplier effects into account. These economic forecasters told us that effects such as the impact on consumer and investor confidence were harder to quantify because it is not possible to isolate the effect of the shutdown from the general political uncertainty at the time of the shutdown. One of the economic forecasters we interviewed believed that at the economy-wide level, when one aggregates the effects over time and across economic participants, these effects cancel each other out because of substitution between activities and goods. For example, the money that was not spent by furloughed federal workers at restaurants near their workplaces may have been spent at restaurants around their residences or at grocery stores. In addition, two of the four economic forecasters told us that furloughed federal workers may have delayed some purchases rather than canceling them altogether because they were paid retroactively. Though the economic forecasters we interviewed generally did not analyze all the indirect or multiplier effects of the shutdown on the national economy, they told us that they expect the indirect effects to have been minimal due to several factors:

Duration and timing of the shutdown. Given that the shutdown lasted for 11 working days, two of the forecasters we interviewed believed it had a modest effect on the national economy. For example, according to the analysis of one economic forecaster, defense gross investment funding is usually set contractually about a year before expected delivery, so a shutdown was unlikely to delay private production or construction of defense-related capital from one quarter to the next.
Also, if contractors know that the funding is secure and will be forthcoming eventually, this economic forecaster said, a brief shutdown would be unlikely to have an impact on defense gross investment. Similarly, research projects are more likely to have been delayed, rather than canceled, given the temporary nature of the shutdown. We were also told that the timing of the shutdown somewhat constrained its visible impact on the economy because it occurred in the earlier part of the fourth quarter. Specifically, delays in federal spending that occur within a quarter have no impact on the growth rate of federal spending across quarters, according to the economic forecasters we interviewed.

Expectations of federal workers regarding retroactive pay. Two of the economic forecasters we interviewed said the furloughed federal workers expected to be compensated for the time of the shutdown, which prevented major changes in the economic behavior of these employees. Federal employees have historically received retroactive compensation after previous government shutdowns, and the day before the shutdown, Congress authorized retroactive compensation for members of the armed forces and the civilian employees who support them. Both of these factors increased federal employees’ expectations that they would receive retroactive compensation once the shutdown ended. This expectation caused them to delay rather than reduce consumer expenditures.

The October 2013 government shutdown was disruptive for the selected departments’ and components’ operations and services. According to selected department officials, the time spent preparing to furlough federal employees and determining which programs could continue took resources away from the departments’ daily operations. The shutdown also disrupted federal grants and contracts delivery at our selected departments and components as a result of employee furloughs and payment or work disruptions.
Among other factors, the selected departments were aided in managing the uncertainties of the shutdown by their experience with preparing for prior potential shutdowns. The three selected departments reported that their contingency plans were important in directing their shutdown activities and that these plans generally represented how the shutdown was implemented. According to departmental officials, the departments did not update their contingency plans after the shutdown ended to reflect how they managed the shutdown. OMB requires agencies to submit contingency plans and to update them when changes in funding occur and every 2 years starting August 1, 2015. However, OMB does not direct agencies to formally document lessons learned from planning for and implementing a shutdown, as well as resuming activities following a longer shutdown, such as the October 2013 shutdown, which lasted more than five days. Federal standards indicate that agencies should identify and record pertinent information to enable officials to carry out their duties. Without documentation of how the shutdown was implemented and lessons learned from a longer shutdown, agencies may not have timely, appropriate information that could help inform planning and implementation efforts in the event of a future government shutdown. We recommend that the Director of OMB, in its annual update of Circular No. A-11, instruct agencies to document lessons learned in planning for and implementing a shutdown, as well as resuming activities following a shutdown, should a funding gap longer than five days occur in the future. We provided a draft of this report to the Secretaries of DOE, DOT, and HHS and the Directors of BEA, OMB, and OPM. We received technical comments from DOE, DOT, HHS, BEA, and OPM, which we incorporated into the final report where appropriate. In oral comments received on September 11, 2014, staff from OMB discussed our findings, conclusions, and recommendation.
In response to this discussion, we made minor revisions to the recommendation language to reflect the importance of agencies having timely and appropriate information after a longer shutdown. In a follow-up discussion on September 19, 2014, OMB staff did not state whether they agreed or disagreed with the recommendation. However, OMB staff said they agreed on the value of documenting lessons learned and will take the recommendation into consideration as OMB develops next year’s annual update of Circular No. A-11. OMB staff also provided technical comments, which are incorporated into the report where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the report date. At that time, we will send copies to the appropriate congressional committees; BEA, DOE, DOT, HHS, OMB, and OPM; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or jonesy@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. You asked us to describe the effects of the federal government shutdown. This report describes (1) how the shutdown of the federal government affected selected agencies’ operations and services, including the immediate and potential longer-term effects; (2) what is known about how the shutdown affected federal contracting and grants, including the immediate and potential longer-term effects, as reported by the selected agencies and associations with expertise in grants and contracts; and (3) what economic studies or reports state about the effect of the government shutdown on national economic activity.
For this review, we selected three departments and three departmental components to serve as nongeneralizable case studies: the Department of Energy (DOE) and its Office of Environmental Management (EM); the Department of Health and Human Services (HHS) and its National Institutes of Health (NIH); and the Department of Transportation (DOT) and its Federal Transit Administration (FTA). These departments and components were selected based on the following criteria: (1) having grants or contracts valued over $1 billion in fiscal year 2013, as reported on USASpending.gov; (2) the percentage of federal employees expected to be furloughed, as reflected in the departments’ contingency plans; and (3) the potential for longer-term effects from the shutdown on operations, grants, or contracts, based on our determination using department contingency plans, the dollar value of grants and contracts programs in fiscal year 2013, and background research (including the Office of Management and Budget’s (OMB) report on the shutdown). We selected a component within each department in order to obtain detailed examples of how components were affected by the shutdown in terms of operations and services as well as grants and contract activity. We assessed the reliability of OMB’s furlough data in its report, Impacts and Costs of the October 2013 Federal Government Shutdown, for the limited purpose of background information. We checked OMB’s data by comparing them to the planned furlough data agencies reported in their shutdown contingency plans, and we checked the data for reasonableness and for any obvious or potential errors in accuracy and completeness. We also interviewed selected OMB staff knowledgeable about the data and clarified the process by which the data were collected and verified.
Additionally, we assessed the reliability of the federal grants and contracts data as reported on USASpending.gov for the limited purposes of case study agency selection and background information. We assessed the USASpending.gov data for accuracy and completeness based on our recent report on the reliability of USASpending.gov data. While that report noted inconsistencies in USASpending.gov reporting, we determined that the individual data elements we used for this report had sufficiently low percentages of unverified information and therefore could be used for the limited purposes of case study selection and background information. We believe these data to be sufficiently reliable for the purposes of this report.

To describe the effects of the shutdown on the three departments’ and components’ operations and services, we reviewed documentation from the departments (including contingency plans and guidance to employees) and components (including details on programs affected by the shutdown) and interviewed relevant departmental and component-level officials regarding the steps they took to plan for and implement the shutdown and the effects of the shutdown on operations and services. We also reviewed OMB documentation, including OMB’s November 2013 report on the effects of the shutdown and its guidance to agencies on preparing contingency plans and operating during a shutdown (OMB Circular No. A-11 and OMB Memorandum M-13-22); reviewed Office of Personnel Management (OPM) guidance to agencies; and interviewed OMB staff and OPM officials. The information obtained from the selected agencies is not generalizable to the rest of the government but is descriptive of the preparation for and implementation of the shutdown and its effects at the selected agencies. We did not assess whether departments correctly implemented the shutdown or whether decisions to except certain programs or employees were in accordance with the law.
To describe what is known about how the shutdown affected contracts and grants at the selected departments and components and at contractors and grant recipients, as reported by associations with expertise in grants and contracts, we reviewed documentation from the selected departments and components (including guidance to the recipients of grants from the components and to contractors employed by the components) and interviewed officials managing grants and contracts within the selected departments and components. We targeted our review of grants to HHS and NIH and to DOT and FTA. We targeted our review of contracts to HHS and NIH and to DOE and EM. We also reviewed documentation on how federal grant recipients in general were affected by the shutdown, as identified through background research from national associations as well as the Congressional Research Service.

We interviewed relevant officials from national associations that either represent contractors and grant recipients or otherwise have relevant expertise, including associations representing recipients of grants from our selected departments and components. In selecting associations with expertise in grants, we applied the following criteria: geographic dispersal of membership, longevity of operations, recommendations from the case study departments and components, and our judgment. For grants, these included the American Association of State Highway and Transportation Officials; Association of American Medical Colleges; Association of American Universities; Community Transportation Association of America; Federal Funds Information for States; Federation of American Societies for Experimental Biology; National Association of State Auditors, Comptrollers, and Treasurers; and National Association of State Budget Officers. For contracts, these included the Professional Services Council and the American Small Business Chamber of Commerce.
For information concerning the effect the lapse in appropriations had on federally funded grant programs to the states, we also reviewed documents from various state agencies, governors’ offices, and state press releases. We did not independently validate the information with state officials.

To describe what economic studies or reports state about the effect of the government shutdown on national economic activity, we conducted a literature review to identify relevant studies or reports of economic forecasts and analysis and economists who have researched the issue. We found that no formal studies had been undertaken at the time of our review; however, we identified the work of several economic forecasters from financial services firms who had analyzed and written about the effects of the shutdown on the national economy. We reviewed the identified research predicting the effects of the shutdown on the national economy and interviewed several of the economic forecasters who had conducted relevant analyses. Forecasters we interviewed included economists from Goldman Sachs, IHS Global Insight, Macroeconomic Advisers, and Moody’s Analytics. Using the results of the literature review, we also reviewed reports and analyses from federal agencies and conducted interviews with federal officials at the Bureau of Economic Analysis, Congressional Budget Office, Congressional Research Service, and the Council of Economic Advisers. At the time of our interviews, the economists we interviewed from the financial services firms and officials from the federal agencies were not aware of any planned or completed studies or reports on the topic since the shutdown ended.

We conducted this performance audit from April 2014 to October 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Janice Latimer and Thomas Gilbert (Assistant Directors), Namita Bhatia Sabharwal, Jessica Berkholtz, Amy Bowser, Diantha Garms, Thomas James, Melissa King, Thomas McCool, Susan Offutt, LaSonya Roberts, Cynthia Saunders, Shelia Thorpe, Holly Williams, and William Woods made key contributions to this report.
The federal government partially shut down for 16 days in October 2013 because of a lapse in appropriations. According to OMB, about 850,000 federal employees were furloughed for part of this time. GAO was asked to describe the effects of the shutdown. This report describes (1) how the shutdown affected selected agencies' operations and services, including immediate and potential longer-term effects; (2) what is known about how the shutdown affected federal contracting and grants, as reported by the selected agencies and associations with expertise in grants and contracts; and (3) what economic studies or reports state about the effect of the shutdown on national economic activity.

GAO selected three departments for review—DOE, HHS, and DOT—based on the value of grants and contracts, the percentage of employees expected to be furloughed, and the potential for longer-term effects. GAO reviewed department contingency plans, other documents, and economic forecasters' analyses, and interviewed officials from the selected departments and components, BEA, OMB, OPM, associations, and economic forecasters.

The 2013 shutdown affected some operations and services at the three departments that GAO reviewed: Energy (DOE), Health and Human Services (HHS), and Transportation (DOT). For example, at HHS's National Institutes of Health (NIH), the initial closure of the clinical trials registry prevented new trial registrations for patients, before NIH recalled a small number of employees to reopen the registry. Similarly, DOT's Merchant Marine Academy closed and required a change to the academic calendar to allow eligible students to graduate on time. However, officials at these departments said that longer-term effects are difficult to assess in isolation from other budgetary events, such as sequestration.
Because of employee furloughs and payment or work disruptions, the three departments, their components, grant recipients, and contractors faced delays and disruptions in grant and contract activities during the shutdown, including the following examples:

Within HHS, grants management activities at NIH effectively ceased with employee furloughs, although most current grant recipients were able to draw down funds. NIH had to reschedule the review process for over 13,700 grant applications because of the shutdown. After the shutdown, NIH completed the process to meet the next milestone in January 2014.

Grants activities at DOT's Federal Transit Administration (FTA) effectively ceased with grants management officials furloughed and no payments made on existing grants. FTA officials said that no new grant awards were processed because of the shutdown, but the effect was minimal because the grant processing system is typically unavailable in early October for fiscal year closeout activities.

At DOE's Office of Environmental Management (EM), contract activities generally continued because of the availability of multi-year funding, but more than 1,700 contractor employees who operate and maintain EM facilities were laid off or required to use leave because EM issued stop work orders. EM officials reported some programs required 4 months to return to pre-shutdown levels of contract activity.

Researchers' analyses of the economic effects of the shutdown have been limited to predicting its effect on real gross domestic product (GDP) in the fourth quarter of 2013. In January 2014, the Bureau of Economic Analysis (BEA) estimated the direct effect of the shutdown on real GDP growth to be a reduction of 0.3 percentage points. Economic forecasters GAO interviewed believed the other economic effects to be minimal at the economy-wide level.
The selected departments were aided in managing the uncertainties of the shutdown by their experience preparing for prior potential shutdowns, by funding flexibilities (such as multi-year funding), and by ongoing communications internally and with Office of Management and Budget (OMB) staff and Office of Personnel Management (OPM) officials. OMB staff addressed questions from agencies on how to communicate about the shutdown with their employees but did not direct agencies to document lessons learned from how they planned, managed, and implemented the shutdown for future reference.

GAO recommends that OMB instruct agencies to document lessons learned in planning for and implementing a shutdown, as well as in resuming activities following a shutdown, should a funding gap longer than five days occur in the future. OMB staff did not state whether they agreed or disagreed with the recommendation.
The term “flexiplace” was initially coined during the pilot as an abbreviation for “flexible workplace.” Since the completion of the flexiplace pilot, OPM has adopted the term “telecommuting” to define workplace arrangements that allow an employee to work away from the traditional work site, either at home or at another approved alternative location. Although the terms “flexiplace” and “telecommuting” are often used interchangeably, for the purposes of this report, we use the term flexiplace only when describing work arrangements that are consistent with OPM’s definition. We found this restriction necessary because some federal officials use the term telecommuting to mean something other than what OPM’s definition contemplates. That other meaning involves traditional management decentralization initiatives, such as the establishment of local offices that produce benefits (including improved customer service and satisfaction) without necessarily being more geographically convenient to the employees providing the services.

In developing this report, we obtained general information on flexiplace policies and views on flexiplace use from officials in the 17 departments and independent federal agencies with the greatest numbers of employees. Collectively, these departments and agencies employ about 95 percent of federal employees. From these 17 departments and agencies, we then judgmentally selected 5 departments and 3 independent agencies for a more detailed review, which forms the basis of this report. Our intention in selecting this sample was to include departments and independent agencies that (1) employed a large number of federal civilian personnel, (2) varied in the nature and extent of their experience with flexiplace, and (3) permitted examination of any variances in flexiplace policies and efforts to promote flexiplace.
We did not attempt to determine, however, the extent to which flexiplace arrangements could or should have been undertaken or the effectiveness of existing arrangements. Because we did not use a representative sample, the results of this review cannot be projected to the entire federal workforce. We identified and analyzed 21 policy documents from the departments and agencies selected, and we visited and interviewed agency officials in 26 locations, mostly in agencies’ headquarters and in their field offices in Denver and San Francisco. The agency officials we interviewed were either flexiplace coordinators or other personnel knowledgeable about flexiplace in their agencies, and they predominantly worked in human resources departments, although a smaller number were program or office managers. During these interviews, we gathered information on the extent of flexiplace use, agencies’ identification of barriers to implementing flexiplace, and agency officials’ views on operational difficulties attributable to flexiplace. We did not seek to question or verify either the perceptions held by agency officials or the data provided on the use of flexiplace. In addition, we interviewed nine union representatives to solicit their views on flexiplace, and we interviewed OPM, GSA, and DOT officials in Washington, D.C., to identify federal efforts to promote flexiplace. We also visited telecenters (facilities for use by the employees of many agencies as alternative work sites) in Virginia and California. Appendix I describes in detail the objectives, scope, and methodology of our review. Our review was conducted from June 1996 to May 1997 in accordance with generally accepted government auditing standards.

We provided a draft of this report to the heads of the departments and agencies discussed in this report for their review and comments. Their comments are summarized at the end of this report.

No specific statute exists that explicitly authorizes or forbids flexiplace.
OPM has administratively determined that agencies can develop and implement flexiplace programs. President Clinton also encouraged agencies to develop family-friendly programs, including flexiplace, through memorandums addressed to the heads of executive agencies in 1994 and 1996.

OPM and GSA established instructional guidelines in 1990 to assist agencies in implementing flexiplace programs. These guidelines recommended that an agency first identify its reasons for establishing a program and that program benefits accrue to both the employer and the employee. According to OPM and GSA, reasons for agencies to establish flexiplace programs include improved recruiting and retention of employees, increased productivity, and a reduced need for office space. Reasons for employees to participate in flexiplace include the opportunity to reduce commuting time; lowered personal costs in areas such as transportation, parking, food, and wardrobe; improvement in the quality of worklife and morale accruing from the opportunity to balance work and family demands; and removal of barriers for those with disabilities who want to be part of the workforce. The OPM and GSA guidelines stressed that flexiplace is not a substitute for child care because young children can frequently produce distractions that prevent the successful completion of work at home.

OPM updated the 1990 guidelines in 1993. In this update, OPM asserted that flexiplace is a management option rather than an employee benefit and that flexiplace should be voluntary and should not change the terms and conditions of employment. OPM recommended that agencies develop written policies and procedures, appoint a flexiplace coordinator, conduct training sessions for flexiplace employees and their supervisors, and establish written work agreements that schedule flexiplace episodes. Although flexiplace is a management option, OPM recognized that under 5 U.S.C.
Chapter 71, labor unions representing employees have the right to negotiate on the manner in which flexiplace programs are implemented and on the impact of programs on employees. OPM cautioned agencies that the nature of the work, together with the characteristics of both the employee and supervisor, must be suitable for flexiplace. OPM defined suitable work as tasks that can be conducted independently of the work location for at least part of the week. Work that requires extensive face-to-face contact, according to OPM, is generally unsuited for flexiplace. OPM also said that employees who participate in flexiplace programs should be well organized, highly disciplined self-starters who require little supervision and who have received at least fully successful ratings. OPM recommended that supervisors be comfortable with managing by results rather than by observation.

Since its January 1993 report on the results of the flexiplace pilot, OPM has continued to promote flexiplace to other federal departments and agencies. OPM maintains a Work and Family Program Center to promote flexiplace awareness by publishing leaflets on flexiplace resources, writing about flexiplace in newsletters, operating a computer bulletin board to disseminate and exchange flexiplace information, and offering workshops on flexiplace. OPM has also published descriptive brochures on flexiplace, continues to make available to federal agencies the results of the flexiplace pilot, and has recognized other agencies with awards for promoting work and family programs, including flexiplace. In addition, OPM has disseminated information through direct mailings to personnel directors and heads of executive departments and agencies.

Also since 1993, GSA has promoted flexiplace through the establishment, management, and marketing of facilities that provide alternative office settings for federal employees who would otherwise travel longer distances to work.
These facilities, known as telecenters, are equipped with modern workstations, telephones, computers, modems, and facsimile machines, and are generally shared by employees of multiple agencies. Initially established in Maryland and Virginia by fiscal year 1993 appropriations, federal telecenters were also established in Oklahoma City; Seattle; Chicago; Atlanta; Charles Town, West Virginia; and a number of northern and southern California communities. GSA has also established partnerships with local and municipal governments to arrange for the use of their telecenters by federal employees. A more detailed discussion of federal telecenters appears in appendix II of this report.

Flexiplace gained additional promotional emphasis in 1993 as a result of a National Performance Review recommendation that the President issue a directive requiring agencies to implement flexiplace policies. The President’s July 1994 memorandum to the heads of executive departments and agencies had a family-friendly focus and encouraged these departments and agencies to develop flexible work arrangements, including flexiplace, and to adopt appropriate policies. Through a similar memorandum in 1997, Vice President Gore also encouraged agencies to increase opportunities to telecommute.

Federal efforts to promote flexiplace were also linked to the Climate Change Action Plan issued by the President and Vice President in October 1993. The plan was, in part, a response to the threat of global warming and outlined directives aimed at decreasing U.S. greenhouse gas emissions, including transportation-associated pollution. One of these directives instructed DOT to implement a federal flexiplace pilot project with the goal of inducing 1 to 2 percent of federal employees to work at home at least 1 day per week.
Since the plan’s inception, DOT has promoted flexiplace by publishing and distributing information to the public on flexiplace and by assisting GSA and the PMC in their efforts to promote flexiplace. In response to the Climate Change Action Plan, the PMC developed the National Telecommuting Initiative Action Plan in January 1996. The plan, developed by an Interagency Telecommuting Working Group cochaired by DOT and GSA, calls for increasing the number of federal telecommuters to 60,000 by the end of fiscal year 1998. This goal represents about 3 percent of the federal civilian workforce, a percentage roughly equivalent to conservative estimates of participation in the private sector. The plan is a multiphased project that calls for estimating current telecommuting participation, assessing logistics, promoting telecommuting, and implementing programs and pilots. Other members of the Working Group are the Departments of Agriculture (USDA), Defense, Education, Energy, Health and Human Services, the Interior, State, and Veterans Affairs; and EPA, Small Business Administration, Social Security Administration (SSA), and OPM.

In June 1996, President Clinton issued a memorandum to heads of executive departments and agencies reaffirming his commitment to federal telecommuting usage. He also adopted the PMC’s national goal of achieving 60,000 federal telecommuters by the end of fiscal year 1998 and directed executive departments and agencies to review, develop, utilize, and expand opportunities for telecommuting so that the PMC’s goal would be attained.

The 21 flexiplace policies we reviewed generally applied to employees in individual departmental or independent agencies, or in specific federal regions or locations, rather than to all employees in a department.
About one-half of the employees at the 26 locations we visited were covered by flexiplace policies, but the majority of covered employees were in effect excluded from participating by some type of limitation in the policies. Some policies limited participation to employees who were medically disabled or in a specific occupation. In addition, policies generally prescribed the type of work to be done as tasks that could be performed away from the office and that were quantifiable or measurable.

The policies we reviewed varied in their coverage. Of the 21 policies we reviewed, 14 applied to personnel either within (1) headquarters, (2) a specific federal region, (3) more than one federal region, or (4) specific DOD locations. In headquarters, for example, DOL’s policy covered only selected Local Union 12 bargaining unit employees, within a flexiplace pilot, who worked in the Washington, D.C., area. Also, only EPA employees working in offices within federal regions 8 and 9 were covered by EPA’s federal region 8 and 9 policies, respectively. In contrast to EPA’s regional policies, the DOT Office of Motor Carriers’ policy covered employees in offices within all federal regions. In addition, the Naval Air Weapons Center’s policy that we reviewed applied only to employees working at the Point Mugu, California, location. These 14 policies are described in tables III.2, III.3, III.4, and III.5.

Although none of the policies were departmentwide in coverage, five agencies within three departments and two independent agencies had agencywide policies that covered all their employees in all geographic locations. These agencies were the Federal Aviation, Federal Highway, and Federal Railroad Administrations within DOT; the Natural Resources Conservation Service within USDA; the Defense Finance and Accounting Service within DOD; GSA; and SSA. These seven policies are described in table III.1.
Although about 47,000 (47 percent) of the nearly 99,100 employees at the 26 locations we visited were covered by formal flexiplace policies, about 28,000 of these employees were in effect excluded from participation because of limitations within the policies. For example, two of the policies we reviewed limited flexiplace participants mainly to medically disabled employees, which in effect excluded most employees covered by the policy from actually participating at any given point in time. To illustrate, of the estimated 4,000 employees in Denver who were covered by the Defense Finance and Accounting Service’s policy, 3 individuals who were disabled were allowed to temporarily work at home for periods during 1994 to 1996. Similarly, according to agency records, fewer than 25 of the 13,305 SSA headquarters employees participated under the flexiplace policy that limited participation to those with certain medical conditions. In addition, one policy that we reviewed limited participation to employees in a specific occupation: the memorandum of understanding between the National Treasury Employees Union (NTEU) and SSA management limited participation to attorney advisors in SSA’s Office of Hearings and Appeals.

In the five locations we visited that had no formal flexiplace policies, the majority of the employees nevertheless had the potential ability to participate in flexiplace arrangements. For example, approximately 6,000 EPA headquarters employees were not covered by a formal policy because their unions had not yet approved management’s draft policy. Agency officials told us, however, that they generally allowed flexiplace participation and that about 50 headquarters employees occasionally worked at home under guidelines from a previous pilot. In contrast, about 4,662 employees in three of the five locations that were not covered by flexiplace policies worked in offices where agency officials said they generally did not permit employees to participate in flexiplace.
In addition to containing restrictions that excluded employees from participating in flexiplace, most agency policies specified the type of work employees could perform while on flexiplace and the types of work arrangements that were permissible. Ten of the policies we reviewed specified that work done while on flexiplace must consist of tasks that could be accomplished away from the traditional office, and 6 of these 10 also specified that the work had to be quantifiable or measurable.

Nineteen of the 21 policies we reviewed also specified the nature of flexiplace arrangements permitted. GSA and EPA recognized two basic types of arrangements: regular flexiplace, in which employees are to work a certain number of regularly scheduled days each week at an alternative workplace, and episodic flexiplace, in which employees are to work away from the office on a temporary basis for short periods of time to complete discrete projects. Twelve of the policies we reviewed permitted only regular flexiplace, two policies allowed only episodic flexiplace, and five policies permitted both regular and episodic flexiplace. Despite fewer policies permitting episodic flexiplace, about an equal number of agencies reported that their personnel participated in episodic arrangements as in regular arrangements.

The PMC estimated that about 9,000 federal employees out of approximately 2 million executive branch employees, or less than 0.5 percent, telecommuted in 1996. Although this estimate may not be directly comparable with the 1993 estimate of flexiplace participants, flexiplace participation does appear to have increased from the 3,000 to 4,000 participants OPM estimated in 1993. Unrelated to the PMC’s estimate, agency estimates showed that nearly 5 percent of employees participated in flexiplace at the agency locations we visited.
Participation at these locations may have been higher than in the federal government in general because we purposely selected some locations that had active flexiplace programs. Agency officials reported that employees used flexiplace primarily for personal benefits but also to avoid office interruptions. These employees, according to agency officials, were in professional occupations, and they carried out such tasks as writing, reading, telephoning, and working on the computer while on flexiplace.

A survey completed in July 1996 by the PMC’s Interagency Telecommuting Working Group indicated that telecommuting had increased since the completion of the flexiplace pilot in 1993. This survey asked members of the PMC and a number of smaller agencies to estimate the number of their telecommuting participants. From estimates supplied by 33 agencies, the PMC estimated that, governmentwide, 9,000 federal employees were telecommuting. The PMC estimate included participants who would fit within a broader definition of telecommuters but did not include all flexiplace participants. For example, SSA used the PMC’s definition, which in some respects was broader than OPM’s. Under that definition, SSA reported a total of 1,939 telecommuters, including 800 personnel working at contact stations, which are small temporary SSA offices designed to directly serve the public, and 1,000 administrative law judges who traveled to various hearing offices. An SSA official said SSA counted administrative law judges and personnel working at contact stations as telecommuters because it considered these employees to be included in the mobile/virtual office category of the PMC’s telecommuting definition. This category consists of the activities of field representatives, mobile managers, inspectors, and traveling technical support employees—those who may work in multiple locations or environments, including customer sites, hotels, cars, or at home.
According to an SSA official, these employees contribute to decreasing air pollution and traffic congestion and to increasing customer service, all of which are among the goals of the PMC’s National Telecommuting Initiative. Conversely, DOL did not include all flexiplace participants in the estimate it supplied to the PMC. DOL’s estimate, which was used in the PMC estimate of 9,000 telecommuters, consisted entirely of 581 formal participants in 2 ongoing flexiplace pilots. Realizing that this estimate did not include a large number of field safety inspectors who were informally participating, DOL subsequently resurveyed its workforce and determined that the total number of participants was actually 3,426.

We also asked officials at the 26 locations we visited to estimate the number of their flexiplace participants, using OPM’s definition. According to the information they provided, nearly 5 percent of the approximately 99,100 employees at the 26 agency locations we visited participated in flexiplace. This information is summarized in figure 1 and presented for each of the 26 locations in appendix IV.

Agency officials told us that employees’ use of flexiplace arrangements had various benefits. They said that employees reported benefiting from increased productivity and morale and from decreased commuting time, interruptions, sick leave use, and personal costs. Some agency officials said that flexiplace resulted in a decreased need for office space, an increased ability to recruit and retain employees, lessened environmental impacts, and greater opportunities for disabled employees.

Productivity gains of professional staff, often cited by agency officials as one of the main reasons for using flexiplace, are reportedly difficult to define, much less measure. Yet some organizations and some agencies we visited were able to measure productivity gains among some of their staff who used flexiplace.
For data entry clerks, computer programmers, and word processors who produce measurable outputs, productivity gains in the neighborhood of 20 to 25 percent are attributed to telecommuting in the literature. Similarly, within SSA’s Office of Hearings and Appeals in Salt Lake City, a manager documented a 25-percent increase in the number of cases prepared by hearing assistants who worked under flexiplace arrangements. During OPM’s pilot, supervisors reported that 39 percent of their staff on flexiplace showed improved work output, and that 10 percent or fewer showed a decrease in output. Similarly, the combined results of DOL’s 2 pilots showed that 32 percent of the 238 supervisors believed that staff increased their quantity of work as a result of flexiplace, as opposed to about 14 percent who believed quantity dropped. Seventy-three percent of the 426 employees in these pilots believed their quantity of work increased under flexiplace. Agency officials we spoke with also reported reasons cited by employees for not using flexiplace. The most common reason cited was a feeling of isolation while working at home. Other reasons agency officials reported were the perception by employees that flexiplace could be career limiting, the presence of family members at home who would interrupt their work, the lack of adequate work space at home, and a lack of self-discipline. They told us that the best flexiplace participants are disciplined self-starters who need little supervision. Agency officials said that most employees using flexiplace were in professional occupations. They told us that the staff members most frequently using flexiplace were employed as engineers and engineering technicians, attorneys and paralegals, program and management analysts, computer personnel, investigators, and inspectors. 
Agency officials also said that flexiplace was used by personnel specialists, scientists, administrative personnel, technical information specialists, contract personnel, budget and financial analysts, accountants, architects, and employee development specialists. According to agency officials, employees reported that writing, reading, telephoning, and computer work were the most common tasks accomplished while on flexiplace. Other tasks that agency officials reported participants doing on flexiplace included analysis, reviewing and evaluating, preparing legal briefs and decisions, planning, and researching. Agency officials and union representatives told us that management resistance was the largest barrier to implementing flexiplace programs. They explained that some managers and supervisors resisted allowing staff to participate in flexiplace because they did not believe that employees were working unless they could see them. Almost half of the agency officials and union representatives that we interviewed cited lack of adequate equipment, such as computers and dedicated phone lines in the home, as a barrier. Fewer of them identified the nature of the job and handling of sensitive data as barriers. We did not attempt to determine the accuracy or appropriateness of these views. Management resistance has been frequently cited as an obstacle in the literature on telecommuting in the private sector, and it was recognized as a major impediment in the 1993 report on the flexiplace pilot. In their training guide for managing telecommuters, GSA and DOT pointed out that the role of management has changed from managing by observation to managing by results and that managers who resisted this change faced a major challenge in embracing flexiplace. 
Management resistance was cited as the largest barrier by 16 of the 28 agency officials and 7 of the 9 union representatives we interviewed. All but nine of the agency officials and all but two of the union representatives we interviewed said that management resistance was a problem in implementing flexiplace programs. Because OPM recommended that flexiplace participants be self-starters who need little supervision, several agency officials questioned why managers were resistant. They said that the behavior and work ethic of employees did not change when they worked at home, so managers should not worry about their ability to supervise these employees while they were on flexiplace. In the surveys of supervisors participating in DOL’s 2 flexiplace pilots, 77 percent of the 237 respondents reported that supervising an employee on flexiplace was about the same as, or compared favorably with, supervising the same employee prior to flexiplace. Several agency officials told us they had had success in overcoming management resistance by training supervisors or by exposing them to flexiplace arrangements. Supervisors in the DOL pilots mentioned earlier were both trained and exposed firsthand to flexiplace, and 73 percent of them said that they would want their staff to continue working under a flexiplace arrangement if given the opportunity. Although never cited as the largest barrier to implementing flexiplace, a lack of adequate equipment was identified as a barrier by 12 of the 28 agency officials and 4 of the 9 union representatives we interviewed. Agency officials said that budgetary constraints prevented them from buying computers and modems for flexiplace participants and from installing secondary phone lines in their homes for accessing the agency’s local area network. Some agencies solved this problem in part by lending participants surplus computers and laptops. 
Five of the 28 agency officials and 1 of the 9 union representatives believed that the nature of the job was a barrier to implementing flexiplace. They explained that some jobs, like receptionist and some clerical positions, required extensive face-to-face interaction with the public and with other employees and therefore were not amenable to flexiplace. Other jobs, such as air traffic controller and janitor, were site-dependent and could not be performed at alternative work sites. However, they said that most jobs had some tasks that could be performed away from the traditional office, and some managers suggested grouping these tasks into a single day to allow for a flexiplace arrangement. Five of the agency officials and one of the union representatives we interviewed said that the handling of sensitive data was a barrier. SSA officials said that claims representatives in the Office of Operations worked daily with databases containing financial information on applicants and that they believed the public would feel uncomfortable knowing that employees were using these data at home. These officials said that the databases could be accessed securely from employees’ homes, but that security measures would be expensive to install. Barriers less commonly cited by agency officials and union representatives included lack of a flexiplace policy, burdensome paperwork, and employee reluctance or indifference. Lack of a flexiplace policy was also cited as a barrier for some of the agencies that had no policy but nevertheless had a few flexiplace participants. Burdensome paperwork, according to agency officials, was associated with participants completing flexiplace work agreements. Employee reluctance reportedly arose from employees fearing that flexiplace participants were at a disadvantage for promotions because they were seen less in the office. 
Agency officials suggested these barriers could be overcome by establishing flexiplace policies, keeping associated paperwork to a minimum, and managing by results rather than by observation. Agency officials reported few operational difficulties as a result of flexiplace arrangements. Although agency officials told us that some managers initially feared participants would abuse flexiplace arrangements, these officials reported few instances of abuse. Of the approximately 4,700 personnel who were participating in flexiplace at the office locations we reviewed, agency officials mentioned only 6 definitive instances of abuse. Similarly, few problems with contacting employees, securing their attendance for important meetings, or coordinating employee coverage of the office at critical times were reported. Only one agency official said that productivity decreased as a result of flexiplace, whereas, as discussed previously, several officials believed that productivity increased. The majority of these agency officials were flexiplace coordinators within human resource departments and office or program managers. Due to time constraints, we did not contact individual supervisors who would have had more direct experience with supervising employees participating in flexiplace arrangements. The Departments of Agriculture, Defense, Housing and Urban Development (HUD), Labor, and Transportation, as well as EPA, GSA, OPM, and SSA, provided oral comments on a draft of this report. The agencies generally agreed with the report’s contents. GSA and SSA suggested that we point out that the PMC and OPM define telecommuting somewhat differently. We made revisions to various sections of the report to account for the different definitions. Some agencies provided comments of a technical nature, or to clarify points, which we have incorporated where appropriate. We are sending copies of this report to Representative James P. 
Moran, the original requester; the Chairman of the Subcommittee on Civil Service, House Committee on Government Reform and Oversight; other interested congressional committees and members; the Secretaries of the Departments of Agriculture, Defense, Housing and Urban Development, Labor, and Transportation; the Administrator of the General Services Administration; the Directors of the Office of Management and Budget and the Office of Personnel Management; the Commissioner of the Social Security Administration; and other interested parties. We will also make copies available to others upon request. Major contributors to this report are listed in appendix V. If you have any questions about this report, please contact me at (202) 512-8676. This report responds to a request by Representative James P. Moran, the former Ranking Minority Member of the Subcommittee on Civil Service, House Committee on Government Reform and Oversight, that we review the implementation of flexiplace since completion of the 1990 to 1993 flexiplace pilot. Specifically, we agreed to (1) describe federal efforts to promote flexiplace; (2) review federal agencies’ policies and the extent to which they permit flexiplace; (3) determine the extent to which federal employees have used flexiplace, as well as the characteristics of these employees and the work they have done under flexiplace; (4) ascertain whether agencies and federal employees’ unions have identified any barriers that inhibit flexiplace implementation; and (5) determine whether agencies believe that flexiplace has caused any operational difficulties, including abuse of flexiplace. The term “flexiplace” was coined during the pilot as an abbreviation for “flexible workplace.” Since the completion of the flexiplace pilot, OPM has adopted the term “telecommuting” to define workplace arrangements that allow an employee to work away from the traditional work site, either at home or at another approved alternative location. 
Although the terms “flexiplace” and “telecommuting” are often used interchangeably, for the purposes of this report, we use the term flexiplace only when describing work arrangements that are consistent with OPM’s definition. We found this restriction necessary because some federal officials use the term telecommuting in a sense other than the one contemplated by OPM’s definition. The other meaning attached to the term involves traditional management decentralization initiatives, such as the establishment of local offices that produce benefits (including improved customer services and satisfaction) without necessarily being more geographically convenient to the employees providing the services. To obtain general information on federal flexiplace programs within the executive branch, we contacted all cabinet-level departments and independent agencies with more than 10,000 employees as of June 1995. These 17 departments and independent agencies employed over 95 percent of the federal civilian workforce. From these departments and agencies, we obtained basic information on flexiplace policies and the extent to which their personnel used flexiplace. We also obtained estimates of flexiplace participation that were collected by the PMC from its members and from a number of smaller agencies. To describe federal efforts to promote flexiplace, we contacted and interviewed knowledgeable officials in the three agencies that we identified as having taken the lead in promoting flexiplace. We interviewed OPM, GSA, and DOT officials in Washington, D.C.; reviewed documents they provided; and scanned pertinent electronic bulletin boards and the Internet. We also visited GSA-sponsored telecenters in Virginia and California. We then judgmentally selected five departments and three independent agencies for a more detailed review. Because we did not use a representative sample, the results of this review cannot be projected to the entire federal workforce. 
The intent of our selection strategy was to obtain a mix of departments and agencies that varied in the nature and extent of their experience with flexiplace, encompassed a large number of federal civilian personnel, and permitted examination of any regional variations in flexiplace policies and efforts. We chose the Washington-Baltimore area because the headquarters of the departments and agencies we reviewed are located there and because we were told by GSA that about one-third of all flexiplace participants worked in this area. We selected San Francisco because it is the seat of federal region 9 and because of traffic and congestion problems in the city. We chose Denver because it is the seat of federal region 8 and is located in the interior of the country. The eight departments and independent agencies we selected had one or more components or offices in each of these three locations. In total, we visited 26 locations. We chose DOD because it has the largest number of civilian personnel. We chose GSA because of its lead role in promoting flexiplace through establishing telecenters, and we selected DOT because it promoted flexiplace to reduce transportation-associated pollution. We selected DOL based on the recommendation to review its program by knowledgeable officials in GSA. We chose EPA because the agency reported having varying local policies. We also selected several agencies based on their estimates of telecommuters supplied to the PMC. We chose SSA because it reported having the largest number of telecommuters, and we selected USDA and HUD because they reported having few or no telecommuters. To review federal policies and the extent to which they permitted flexiplace, we collected and examined written policies and guidelines from department and agency officials in headquarters and in field locations we visited. We did not examine any policies that were in draft form awaiting approval by agency officials. 
We reviewed flexiplace policies to determine the extent to which they addressed the types of employees allowed to participate, the types of work permitted, and the types of flexiplace arrangements allowed. When necessary, we contacted officials to clarify policy information. Because DOT and USDA delegated policy formulation to their component agencies, we requested that they each provide policies from their two largest civilian components, which excluded DOT’s Coast Guard, and from one agency recommended by department officials. In response, within DOT, we obtained policies from the Federal Aviation Administration, the Federal Highway Administration, and the Federal Railroad Administration. Likewise, within USDA, we obtained policies from the Forest Service and the Natural Resources Conservation Service. Because the Navy, one of DOD’s largest employers of civilian personnel, was recommended by DOD officials for our review, we asked agency officials also to submit policies from the two other largest departments employing civilian personnel: the Army and the Air Force. Neither the Army nor the Air Force had a final departmentwide policy in effect at the time of our review. The Navy supplied policies covering the employees at two California locations that they suggested we visit. To further describe the extent to which federal employees used flexiplace, to ascertain whether agencies identified any barriers to implementing flexiplace programs, and to determine whether agency officials believed flexiplace caused operational difficulties, we interviewed department and agency officials responsible for flexiplace oversight for each of the eight departments and independent agencies in the Washington-Baltimore area, Denver, and San Francisco. Most of these officials were flexiplace coordinators within human resource departments, but a smaller number were office or program managers. 
Due to time constraints, we did not survey or interview individual supervisors who may have had more direct experience with supervising employees participating in flexiplace arrangements. Also, we did not attempt to determine the extent to which flexiplace arrangements could or should have been undertaken or the effectiveness of existing arrangements. Further, we did not seek to question or verify perceptions held by agency officials or data provided on the use of flexiplace. Within the Washington-Baltimore area, we interviewed department and agency officials with Navy, Forest Service, EPA, GSA, DOL, HUD, SSA, and DOT. In Denver and San Francisco, we interviewed or contacted agency officials in SSA’s Office of Hearings and Appeals and its Office of Operations, and regional offices of HUD, GSA, EPA, the Forest Service, and the Federal Highway Administration. Because DOL had separate guidelines for flexiplace pilots in the field and in headquarters, we also interviewed the DOL flexiplace coordinator in Denver. Because the Navy had no large facilities in Denver, we contacted the flexiplace coordinator with the Defense Finance and Accounting Service Center, the largest DOD facility in Denver. We identified large DOD facilities in the San Francisco area as possible candidates for a site visit. However, it appeared that the nature of the work done at these sites would not be conducive to flexiplace arrangements. Therefore, at the recommendation of the Navy, we visited the Naval Surface Warfare Center in Port Hueneme, California, and the Naval Air Weapons Center in Point Mugu, California. To obtain additional information on barriers and operational difficulties, we conducted two additional interviews with knowledgeable departmental officials at DOD and USDA in Washington, D.C. We also interviewed nine union representatives with the American Federation of Government Employees and the National Federation of Federal Employees to solicit their views. 
At each of the eight departments and agencies that were included in our review, we interviewed agency officials knowledgeable about the telecommuting participation estimates provided to the PMC, to determine how they were calculated. At the 26 locations we visited, we obtained the agencies’ current estimates of flexiplace participation but did not verify their accuracy. We provided a draft of this report to the Departments of Agriculture, Defense, Housing and Urban Development, Labor, and Transportation, as well as to EPA, GSA, OPM, and SSA. Their comments are discussed in the body of this report. We did our work between June 1996 and May 1997 in accordance with generally accepted government auditing standards. The U.S. private sector and other countries began experimenting with telecenters several years before the first federal experiments. The first neighborhood telecenter opened in France in 1981, and others opened shortly thereafter in Sweden, Switzerland, Jamaica, Japan, and the United Kingdom. These early telecenters were established to slow the pace of rural-to-urban employee migration, to foster economic development, to capitalize on lower wages and operating costs in outlying areas, and to promote a less stressful environment. In 1985, Pacific Bell established the first telecenter in the United States. Federal telecenters were first established through appropriations for fiscal year 1993 when Congress designated $5 million to fund telecenters in Maryland and Virginia. Telecenter sites were selected based on GSA’s observation that 16,000 federal employees commuted at least 75 miles each way on congested roads in the Washington, D.C., metropolitan area. In the spring of 1993, GSA began working in partnership with state and local governments in the Washington area, and by December 1994, the Washington area had four telecenters—one each in Hagerstown, Maryland; Charles County, Maryland; Winchester, Virginia; and Fredericksburg, Virginia. 
These telecenters had a total of 80 workstations, 143 participants, and a 55 percent utilization rate. Twenty organizations in 10 executive branch departments and agencies used these 4 centers. Congress continued to fund telecenters through fiscal year 1996, establishing additional telecenters in the Washington area. As of February 1, 1997, there were nine GSA-funded and leased telecenters in the greater Washington, D.C., area. According to GSA, at least eight other centers are expected to be operating in the Washington area by the end of 1997. Telecenters in the Washington, D.C., pilot provide state-of-the-art equipment that may be better than equipment provided by employers for use at the office or at home. Equipment can include cubicles, open work areas, some private offices, facsimile and copy machines, high-speed personal computers and modems, printers, separate voice and data lines, local area networks, various software packages, and voice mail. Centers often have a site manager to offer technical help to users, and some centers offer video conferencing capabilities. Although none of the Washington area telecenters were affiliated with day care centers, eight of the nine telecenters were in close proximity to day care facilities. At least three of these telecenters were located within walking distance of day care centers. Other day care centers were within a 5- to 15-minute drive from the eight telecenters. According to a GSA official, GSA charged agencies participating in the Washington pilot from $25 per month for use of a single workstation 1 day per week to $100 per month for use of a single workstation 5 days per week. He said that the fee covered all operating expenses except for long distance telephone charges. He also said that memorandums of understanding (MOU) were signed by participating agencies and GSA’s Office of Workplace Initiatives, and that these MOUs were administered by telecenter managers. 
These agreements described the number and type of workstations needed by agencies, the cost and billing procedure, the hours of operation, and the equipment to be provided at the telecenter. Employee supervision was the responsibility of the employee’s immediate supervisor. A GSA official anticipated that appropriations earmarked for the Washington area telecenters will be depleted by the end of fiscal year 1999, at which time it is planned that these telecenters will be self-supporting. He said that, in the interim, the cost to participating federal agencies will rise over a 3-year period until agencies incur 100 percent of the operating costs, which are approximately $500 per workstation per month. He said the future cost to participating federal agencies will be determined by each individual telecenter, but that this cost will be less than that for private sector participants. This official further said that, when this cost increase occurs, participating agencies will need to at least offset the increased charges by reconfiguring central office space and reducing facilities costs. Plans also call for the centers to be opened to the general public. In 1996, Congress enacted legislation allowing for the opening of telecenters to nonfederal employees if the centers are not fully utilized by federal employees. User fees comparable to commercial rates are to be charged. Telecenters can be utilized by either single employers or by many employers. The single employer telecenter is used by employees of only one firm, organization, or government entity. Single employer telecenters are typically used by large organizations that wish to assume a more decentralized structure and who already have multiple facilities in which excess space is available for use as telecenters. 
Multiemployer telecenters are typically used by more than one organization and can provide the opportunity for smaller organizations to participate in telecommuting without assuming the financial burden of establishing their own centers. According to a 1994 report by the Institute of Transportation Studies, University of California, Davis, in comparison to working at home, telecenters can provide greater security for confidential information and greater assurance to supervisors that employees are being productive. A telecenter coordinator said that managers who may not be enthusiastic about home-based flexiplace may be more supportive of employees working at telecenters because the setting is similar to an office environment. The report further said employers’ liability for personal injury may be better controlled at a telecenter than at home. A GSA official said telecenters have safeguards to ensure a safe work environment. A GSA interim report on federal interagency telecommuting centers said that telecenters can provide employees an alternative office setting that is nearer their home, thereby decreasing their commuting distance. Federal employees we interviewed who favor working at telecenters over working at home cited several advantages of telecenters. These included a better separation of home and work, the ability to socially and professionally interact with other people, access to high quality telecenter equipment, and the opportunity to work in a professional atmosphere. The University of California report said that telecenters can have community and environmental benefits as well. It said that, while home-based flexiplace requires no commuting time at all, commuting time to telecenters is less than to a central office, which reduces traffic congestion, air pollution, road repairs, and fuel consumption. 
The report also suggests that telecenter users can increase their support of the local economy and have more time for community involvement as a result of working in the local community. According to GSA, as of November 1996, of the 9,000 federal employees who were telecommuting, about 500 used telecenters nationwide. Of these participants, approximately 355 were in the Washington, D.C., area. Federal agencies in the Denver area reported an absence of federal telecenters in Denver because their use would result in no appreciable reduction in commuting time; Denver’s traffic is not as heavy as that in other major metropolitan areas, such as Washington and Los Angeles. A GSA official in San Francisco said that a shortage of federal funding has limited the establishment of telecenters in that region. A DOT official said that, in addition to this reason, interest in San Francisco telecenters has declined as the interest in home-based telecommuting has increased. The University of California, Davis, report suggests that one reason for this minimal use of telecenters nationwide is that management does not want to pay rent for telecenter space and also maintain central office space for telecommuters. The report further suggests that this barrier could be partially overcome by eliminating permanent personal work space for groups of telecenter users and instead renting work space at a telecenter for their use on a reservation basis. A regional GSA official told us that agencies are reluctant to reduce central office space without the assurance that telecenters will survive when federal appropriations are discontinued. Another GSA official said that federal agencies may not see any cost savings until they eliminate at least 10 to 20 workstations in their central offices. He added that decreasing agencies’ central office space will ensure the continuation of telecenters. 
He observed that this pattern of decreasing office space has existed in the private sector and has led to significant telecommuting in some major corporations. He pointed out that the latest national figures show 9 million telecommuters. As with other flexiplace arrangements, management resistance was cited by agency officials, as well as by the University of California, Davis, report, as a common barrier to both single and multiemployer telecenters. They indicated that, because managers believed they could not effectively supervise remote employees, telecommuting opportunities were often restricted to those workers with independent and professional jobs. Some agency officials also suggested that ensuring the security of proprietary information was a barrier in considering the use of telecenters. However, the University of California, Davis, report suggests that this barrier may be overcome with advanced technology and the use of private offices or secured file cabinets. In 1994, GSA established three emergency telecenters in Los Angeles after the Northridge earthquake, using emergency federal building funds. Three telecenters in the north and west ends of the city provided 98 workstations so that federal workers could avoid commuting on badly damaged roads into Los Angeles. According to GSA’s interim report on federal telecommuting centers, two of these centers closed at the end of 1994 due to high rental costs and low utilization. In March 1995, PMC’s National Telecommuting Initiative identified 30 additional cities for telecommuting projects based on such factors as air pollution, the potential for improved customer service, the size of the local federal community, and geography. As of February 1, 1997, 20 GSA-funded telecenters existed nationwide in cities such as Atlanta, Oklahoma City, Chicago, Seattle, and San Francisco. 
GSA also developed telecenter partnerships with state agencies such as the California Department of Transportation (Caltrans) to relieve traffic congestion, conserve energy, and improve air quality in the state of California. Partners in this effort included regional transportation management authorities, local economic development offices and redevelopment agencies, state and county fairs, community colleges, and public school systems. The regional GSA office also established telecenters in vacant federal office space in the San Francisco area. (Appendix table fragment, continued — flexiplace arrangement: work at home for no more than 3 days per week, with a minimum of 2 days in the office. Locations listed: Forest Service headquarters (Washington, D.C., area); Navy headquarters (Washington, D.C., area); Naval Surface Warfare Center, Port Hueneme, CA; Naval Air Weapons Center, Point Mugu, CA; DOD Finance and Accounting Service Center, Denver, CO; EPA headquarters (Washington, D.C., area); GSA headquarters (Washington, D.C., area); DOL agencies’ headquarters (Washington, D.C., area); Federal Highway Administration headquarters (Washington, D.C., area); Federal Highway Administration, Region 8; Federal Highway Administration, Region 9; HUD headquarters (Washington, D.C., area).) Major contributor to this report (appendix V): Ronald Belak, Evaluator-in-Charge. 
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the implementation of flexiplace since the completion of the pilot, focusing on: (1) federal efforts to promote flexiplace; (2) federal agencies' policies and the extent to which they permit flexiplace; (3) the extent to which federal employees have used flexiplace, as well as the characteristics of these employees and the work they have done under flexiplace; (4) whether agencies and federal employees' unions have identified any barriers that inhibit flexiplace implementation; and (5) whether agencies believe that flexiplace has caused operational difficulties, including abuse of flexiplace. GAO noted that: (1) the Office of Personnel Management (OPM), General Services Administration (GSA), and Department of Transportation (DOT) have assumed lead roles in promoting flexiplace; (2) in addition, DOT and GSA provide leadership for an interagency working group formed as part of the National Telecommuting Initiative Action Plan in January 1996; (3) a goal of the plan is to increase the number of federal flexiplace participants by the end of fiscal year 1998 to 60,000 or about 3 percent of the federal civilian workforce, a percentage roughly equivalent to conservative estimates of telecommuting in the private sector; (4) DOT also promotes flexiplace and distributes flexiplace literature to the general public as part of its effort to decrease transportation-associated congestion and pollution; (5) the 21 policies GAO reviewed varied in their coverage, generally applying to personnel within individual departmental and independent agencies, one or more federal regions, or specific Department of Defense locations; (6) because of limitations within these policies, however, about 28,000 of the employees covered by flexiplace policies were, in effect, excluded from flexiplace participation; (7) limitations restricted participation to the medically disabled or members of a certain occupation; (8) in contrast, despite the absence 
of formal policies at five locations GAO visited, some of the managers there permitted flexiplace; (9) flexiplace use appears to have increased since OPM's 1993 estimate of 3,000 to 4,000 participants; (10) a survey completed in July 1996 by the President's Management Council estimated that there were 9,000 telecommuting participants; (11) agency officials told GAO that most flexiplace participants' occupational categories were professional in nature, such as engineer, attorney, management and program analyst, and computer specialist; (12) according to agency officials, writing, reading, telephoning, and computer work were the most common tasks performed by flexiplace participants; (13) agency officials and union representatives identified management resistance as the greatest barrier to implementing flexiplace programs; (14) they also recognized that some jobs do not lend themselves to flexiplace arrangements and cited other barriers, such as a lack of computers at alternative work sites, the handling of sensitive data, employee reluctance or indifference with regard to participation, and the lack of a formal flexiplace policy; and (15) agency officials believed that few operational difficulties arose from flexiplace.
The V-22 Osprey program was approved in 1982. The V-22 was being developed to meet joint service operational requirements that would satisfy various combat missions, including medium-lift assault for the Marine Corps, search and rescue for the Navy, and special operations for the Air Force. The program advanced into full-scale development (FSD) in 1986. In December 1989, the Department of Defense (DOD) directed the Navy to terminate all V-22 contracts because, according to DOD, the V-22 was not affordable when compared to helicopter alternatives. DOD notified Congress that in order to satisfy the joint service requirements, the aircraft would require substantial redesign and testing. Congress continued to fund the program and in August 1992, the Acting Secretary of the Navy testified that a V-22 that met the joint service operational requirements could not be built with the funds provided. In October 1992, the Navy terminated the V-22 full-scale development contract and awarded a contract to begin engineering, manufacturing, and development (EMD) of a V-22 variant. During the FSD phase, five prototype aircraft were built. We have been monitoring the V-22 program for the past several years. Our reports consistently discussed testing and development issues such as weight, vibration, avionics, flight controls, landing gear, and engine diagnostic deficiencies. The current V-22 program, which entered EMD in 1992, is scheduled to proceed with developmental testing through 1999. During the EMD phase, the contractor is required to build four production representative aircraft to Marine Corps specifications and deliver them to Patuxent River Naval Air Station, Maryland, in 1997 for developmental and operational testing. Operational testing for the Marine Corps’ V-22 is scheduled to extend into fiscal year 2000. 
After completion of operational testing to determine whether the EMD aircraft will meet Marine Corps requirements, one of the aircraft will be remanufactured and tested to determine whether it will meet SOCOM requirements. Operational testing for the SOCOM variant is scheduled to extend through fiscal year 2002. In March 1997, one EMD aircraft was delivered to Patuxent River Naval Air Station to begin developmental and operational testing. Three more aircraft are under construction and are expected to be delivered by October 1997. DOD approved the program to begin low-rate initial production (LRIP) in April 1997 and will purchase 25 V-22 aircraft in 4 LRIP lots of 5, 5, 7, and 8 through fiscal year 2000. Full-rate production is scheduled to begin in fiscal year 2001 and continue through fiscal year 2018. Initial operational capability (IOC) is scheduled for 2001 for the V-22 Marine Corps variant and for 2005 for the SOCOM version. IOC for the Navy V-22 aircraft has not yet been specified. Through fiscal year 1997, more than $6.5 billion has been provided for the program. The cost data reported in the December 31, 1996, V-22 Selected Acquisition Report (SAR) differ from the data in the program office submission to support the fiscal year 1998-99 President’s Budget. For example, the SAR indicates that average unit flyaway costs at program completion would be about $55.4 million, while the program office estimate for the President’s Budget shows that average unit flyaway cost will be about $57.5 million at program completion. Table 1 provides a comparison of the various cost estimates at different program milestones. (See app. I for a more detailed comparison.) Furthermore, the contractor is estimating that the average unit flyaway cost, in then-year dollars, for the V-22 will eventually get down to about $40.9 million. 
The contractor estimate is based on the assumption that the production quantities and cost will stabilize (commonly referred to as the production learning curve) at about the time that aircraft number 153 is produced. Thus, the contractor estimate of $40.9 million would occur at a point in time in the program when the program office estimate and the SAR indicate that the average unit flyaway cost would be about $53.9 million and about $51.8 million, respectively. These widely differing estimates indicate that the V-22 has not matured to the point that there can be reasonable confidence that the costs are stable. This is particularly true because, as discussed later, the aircraft design is not yet stable and further changes are expected as the test program continues. Resolution of performance and operational issues will likely increase V-22 program costs. In that regard, we and other organizations, such as the Congressional Budget Office and the Institute for Defense Analyses, have performed reviews of weapon systems over the years that have shown that, historically, the cost of major weapons programs increases by over 20 percent. At this point in the V-22 program, it is questionable whether the aircraft being produced will be able to meet the multi-mission requirements outlined in the current Operational Requirements Document (ORD). The following are some issues that must be resolved before a determination can be made as to whether the V-22 will satisfy the services’ stated requirements. The current Marine Corps medium-lift helicopter fleet, consisting of CH-46E and CH-53D helicopters, is aging and now has an average age of 24 to 27 or more years. Navy and Marine Corps documents indicate that this fleet is deficient in payload, range, and speed. In addition, the fleet is incapable of providing the operational performance needed by the Marine Corps. And, according to Marine Corps officials, the medium-lift aircraft inventory is well below what is required. 
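The "production learning curve" invoked in the contractor's estimate above can be illustrated with Wright's classic learning-curve model, in which each doubling of cumulative output cuts unit cost by a fixed percentage. The sketch below is purely illustrative: the $90 million first-unit cost and the 90-percent curve are hypothetical assumptions chosen for demonstration, not V-22 program data.

```python
import math

def wright_unit_cost(first_unit_cost, n, learning_pct):
    """Cost of the n-th unit under Wright's learning-curve model:
    every doubling of cumulative output multiplies unit cost by learning_pct."""
    b = math.log2(learning_pct)  # slope exponent; negative when learning_pct < 1
    return first_unit_cost * n ** b

def average_unit_cost(first_unit_cost, quantity, learning_pct):
    """Cumulative-average unit cost over the first `quantity` units."""
    total = sum(wright_unit_cost(first_unit_cost, n, learning_pct)
                for n in range(1, quantity + 1))
    return total / quantity

# Hypothetical inputs: a $90M first unit on a 90-percent curve, evaluated
# at unit 153, roughly where the contractor assumed costs would stabilize.
print(round(wright_unit_cost(90.0, 153, 0.90), 1))   # marginal cost of unit 153
print(round(average_unit_cost(90.0, 153, 0.90), 1))  # average over units 1-153
```

The gap between the two printed figures shows why a marginal-unit estimate taken late on the curve, like the contractor's $40.9 million, will always sit well below the cumulative-average flyaway costs that the SAR and the program office report.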
While the V-22 is to replace the Marine Corps’ CH-46E and CH-53D helicopters, its payload capabilities have yet to be demonstrated. The ORD stipulates that the V-22 must be able to lift external loads up to 10,000 pounds. By comparison, the CH-46E and CH-53D are able to lift 8,000 to 12,000 pounds. Testing to evaluate the V-22’s lift capability, to measure structural loads, stresses, and strains in flight, and to demonstrate the operational capability to carry external cargo is planned to take place in fiscal year 1998. Moreover, it has yet to be determined if the high-speed capability of the V-22 will enhance the Marine Corps’ external lift capabilities, since the airborne behavior of operational equipment such as multi-purpose vehicles, heavy weapons, and cargo vehicles carried at speeds at or in excess of 200 knots has yet to be tested. If the V-22 cannot rapidly move operational equipment, then its utility as an external cargo carrier to replace current Marine Corps medium-lift assets will have to be reevaluated. The V-22 ORD requires that, at a minimum, the CV-22 have the capability to fly at 300 feet using terrain following/terrain avoidance, in all weather conditions during both daylight and night-time environments. Testing done with the FSD prototype V-22 aircraft has shown that the AN/APQ 174 multi-functional radar, which would provide this capability, interferes with the V-22’s radar jamming system. Further EMD aircraft testing with the AN/APQ 174 radar system is necessary to resolve this issue. That testing is not scheduled to be completed until the middle of fiscal year 2001. According to the ORD, the V-22 must have an aerial refueling receiver capability compatible with current Marine Corps and SOCOM tanker assets. SOCOM personnel told us that it was vital for both the pilot and the co-pilot to be able to see the probe during aerial refueling. 
However, the current V-22 design prevents the pilot in the left seat of the aircraft from being able to see the refueling probe. Testing to date with the full-scale development version of the aircraft shows that the pilot in the left seat must either raise the seat or lean forward in the seat to clearly see the refueling probe. According to SOCOM officials, being able to readily see the refueling probe from both pilot seats without the pilot having to make these physical adjustments is essential to safe flight operations. From a mission and training point of view, these officials claim that it is critical that both pilots be able to see the entire refueling operation in the event that the pilot in the left seat has to take over the operation. While SOCOM pilots perform significantly more missions requiring refueling, Marine Corps officials told us that they believe that as long as the pilot in the right seat can clearly see the probe, the pilot in the left seat could make necessary adjustments to safely conduct the refueling mission should the need arise. V-22 program officials have agreed that if future testing shows that the current design of the refueling probe is a problem, necessary steps will be taken to correct the baseline aircraft. However, if a redesign is necessary, it could have an impact on aircraft performance (weight, range, and speed) or other aircraft systems, such as the radar. The downward force from the V-22 proprotor blades while in the hover mode (referred to as downwash) continues to be an area of concern. Downwash is a concern for both the Marine Corps and SOCOM in areas such as personnel insertions/extractions, external load hookups, fast rope exercises, and rope ladder operations. According to DOD documentation, the extremely intense rotor downwash under the aircraft makes it a challenge to stand under the aircraft, let alone perform useful tasks. 
According to the DOD Director, Operational Test and Evaluation report issued in March 1997, resolution of this issue will require further testing. Program officials told us that downwash is a common concern with rotary aircraft and V-22 users will have to adjust mission tactics while under the aircraft to compensate for downwash. Survivability is a critical concern as the services seek to perform their missions, particularly in hostile environments. The V-22 ORD defines the necessary capabilities that must be available on each configuration of the aircraft. However, our review showed that in order for the aircraft to meet key performance parameters, such as range, trade-offs are being considered. Critical subsystems may be delayed or deleted, while others may require future upgrades or modifications that may affect the program’s cost and schedule. One such subsystem is the AN/AVR-2A laser-warning receiver. By giving the pilot advance warning, this subsystem would reduce the susceptibility of the aircraft to laser illumination and attacks. The ORD requires that consideration be given to protecting crew and electro-optical sensors from low- to medium-powered lasers. While the Marine Corps V-22 aircraft will have this capability, the SOCOM V-22 aircraft will only have space and wiring provisions. Currently, the SOCOM variant will not have the laser-warning receiver because, according to SOCOM officials, it would prevent the aircraft from meeting its range requirements. In that regard, the V-22 ORD states that a key performance parameter for the SOCOM variant is the requirement for a mission radius of 500 nautical miles; that is, the aircraft must have the ability to fly from a base station out to 500 nautical miles, hover for 5 minutes, and return. According to SOCOM officials, the V-22 will not meet this range requirement with the laser-warning subsystem installed. 
SOCOM officials contend that the lack of the laser warning receiver is a concern relative to successful mission accomplishment and survivability of aircraft and crew. Another survivability concern is the lack of a defensive weapon on the V-22. The requirement document states that the V-22 must have an air-to-ground and air-to-air weapon system compatible with night vision devices. This is a required capability for the Marine Corps variant and a desired capability for the SOCOM variant. Originally, the V-22 was to be equipped with a .50-caliber machine gun; however, for affordability reasons, it will now be produced without a defensive weapon system. Finally, the ORD requires that the V-22 include a ground collision avoidance and warning system with voice warning. Currently, the Navy claims that this requirement was added to the ORD after the V-22 had validated its design and, therefore, was not included in the planned production. Instead, the system is a potential limitation to the Marine Corps’ V-22 configuration and will be included as a preplanned product improvement to be evaluated through the course of the test program. The Navy intends to correct this deficiency, most likely through a retrofit process, and pay for it within program baseline funding. The V-22 program was approved to proceed with LRIP in April 1997. One of the primary criteria that the program was required to meet was the completion of an operational assessment endorsing potential operational effectiveness and suitability of the V-22’s EMD design. Three series of early operational assessments were used to support DOD’s LRIP decision. Due to the significant limitations of these early operational assessments, their reliability as the basis for deciding to proceed into LRIP is questionable and future production decisions should be based on more realistic tests. The three operational assessments that have been conducted used aircraft produced under the earlier full-scale development program. 
Previously, DOD had determined these aircraft to be incapable of meeting V-22 mission requirements and, at one point, the Secretary of Defense sought to cancel the full-scale development program. V-22 program officials believe that even though the full-scale development aircraft did not meet mission requirements, the lessons learned from having produced them reduced the risk associated with developing the current EMD aircraft. The first of the three early operational assessments was conducted between May and July 1994; the second assessment between June and October 1995. These assessments were conducted jointly by the Navy’s Operational Test and Evaluation and the Air Force’s Operational Test and Evaluation Center. In both assessments, the joint test teams concluded that the development aircraft demonstrated the potential to be operationally effective and suitable. Although the third assessment was not completed at the time of the decision to proceed with LRIP, an interim report was prepared for this milestone. This report highlighted limitations and risks remaining from previous assessments and cited additional areas of concern, but still projected that the V-22 will be potentially operationally effective and suitable. In March 1997, DOD’s Director for Operational Test and Evaluation issued the Fiscal Year 1996 Annual Report. In that report, the Director, Operational Test and Evaluation, concluded that V-22 testing had concentrated on system integration and flight envelope expansion, but had “not extensively investigated mission applications of tiltrotor technology and potential operational effectiveness and suitability of the EMD V-22.” The report also highlighted the following operational test and evaluation limitations relative to the operational assessments of the V-22. The aircraft was not cleared to hover over unprepared landing zones, could not hook up to or carry any external loads, could not carry any passengers, and was not cleared to hover over water. 
The Director, Operational Test and Evaluation report also stated that the aircraft configuration was not representative of any mission configuration. The Director, Operational Test and Evaluation said this combination of limitations to clearance and configuration results in an “extremely artificial” test environment for early operational test and evaluation. The Director, Operational Test and Evaluation also reported serious concerns regarding the effects of downwash previously mentioned in this report and recommended further evaluation in this area. The initial flight of the first of four EMD aircraft, originally scheduled for December 1996, was delayed until February 1997. As a result, the required ferry to Patuxent River was delayed until March 1997. The aircraft arrived at the test facility needing several changes before the test program could continue as planned. In order to meet the ferry date and thus obtain approval to proceed with LRIP, component changes and modifications were not completed at the contractor’s facility. Instead, they were to be completed at Patuxent River after the required ferry flight. During a visit to the Naval Air Station test facilities in April 1997, we observed the aircraft undergoing modifications by contractor personnel. According to test officials with whom we spoke, the modifications were originally only expected to take about 2 weeks. However, as of June 16, 1997, the modifications were still ongoing, nearly 2 months after they began. The next major milestone decision for the V-22 is the LRIP lot 2 production decision. That decision is scheduled for early 1998 and will represent DOD’s approval to procure the next five V-22 aircraft. The criteria that must be met for LRIP lot 2 approval are: delivery of two additional EMD aircraft and completion of certain static tests to determine the structural strength of the aircraft. 
Congressional committees have expressed concern that the planned V-22 production schedule (4 LRIP lots of 5, 5, 7, and 8 aircraft with eventual full-rate production of as many as 31 aircraft per year through 2018) is inefficient. (See app. I for complete V-22 program schedule and cost estimates.) In August 1996, the contractors submitted an unsolicited cost estimate to the Under Secretary of Defense for Acquisition and Technology that suggested that accelerated production rates, combined with a multi-year procurement strategy, could result in savings of nearly 25 percent over the life of the V-22 program. The contractor proposed accelerating the production schedule to a rate of 24 aircraft by fiscal year 1999, instead of the 7 aircraft currently planned in fiscal year 1999. DOD responded that while this strategy had the potential for significant savings, it was inappropriate to consider such an alternative until the aircraft design was more stable. DOD indicated that to do otherwise would unnecessarily increase technical risk to the program. In addition, DOD stated that such an increase in annual procurement quantities would not be affordable within the overall defense budget. Further, the May 1997 Quadrennial Defense Review recommended lowering the number of V-22 aircraft to be procured from 523 to 458 and increasing the planned production rate after the program enters full-rate production. The recommendation retains the limited LRIP rates currently planned by DOD. According to V-22 program test personnel, accelerating the production schedule and increasing the rate would add risk to the program in the event the test program finds problems that require a significant amount of time and resources to fix, and result in a larger number of aircraft to retrofit or modify. 
These views are consistent with the conclusions in our February 13, 1997, report that described the effects of increased production during LRIP of 28 weapon systems and the cost and schedule impact to these programs. This report showed that when DOD inappropriately placed priority on funding production of unnecessary quantities during LRIP, the result was a large number of untested weapons that subsequently had to be modified. Moreover, it points out that because of overall budgetary constraints, decisions to buy unnecessary quantities of unproven systems under LRIP forced DOD to lower the annual full production rates of proven weapons thereby stretching out full-rate production for years and increasing unit production costs by billions of dollars. There is no consensus on the acquisition strategy for acquiring the V-22 Osprey. Congress has been attempting to increase the annual production rates to achieve more efficient production and DOD has been attempting to keep the annual production rates at a more limited quantity. The key to efficient production and the efficient use of the funds Congress has provided for the V-22 is program stability. However, after 15 years of development effort, the V-22 design has not been stabilized. To begin the process of achieving consensus on the acquisition strategy for the V-22, we believe that DOD needs to present Congress with a strategy for overcoming the production inefficiencies that Congress views as present in the current acquisition strategy. As part of that strategy, we believe that DOD needs to introduce more realistic testing into the program to achieve aircraft design stability. Ideally, this testing should be done as early as possible in the program schedule and should be directed at ensuring that the required capabilities of the V-22 are adequately demonstrated before a significant number of aircraft are procured. 
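The rate-versus-unit-cost tension running through this debate can be sketched with a toy overhead-amortization model: fixed annual plant and tooling costs are spread over however many aircraft are built that year, so higher annual rates lower the unit cost. All figures below are hypothetical assumptions for illustration, not V-22 contract data.

```python
def annual_unit_cost(variable_cost, fixed_overhead, annual_rate):
    """Unit cost for a production year: per-aircraft variable cost plus the
    year's fixed overhead amortized across that year's output."""
    return variable_cost + fixed_overhead / annual_rate

# Hypothetical inputs: $35M of variable cost per aircraft and $120M/year
# of fixed plant and tooling overhead.
low_rate = annual_unit_cost(35.0, 120.0, 7)    # an LRIP-style year of 7 aircraft
high_rate = annual_unit_cost(35.0, 120.0, 24)  # an accelerated 24-aircraft year
savings = (low_rate - high_rate) / low_rate
print(round(low_rate, 1), round(high_rate, 1), round(100 * savings, 1))
```

The model also shows the flip side of acceleration: every aircraft built before the design stabilizes is a retrofit candidate if testing uncovers problems, which is the risk that program test personnel and our LRIP report both cite.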
In that regard, the next scheduled major program milestone is the LRIP lot 2 production decision scheduled for early 1998. Accordingly, we recommend that the Secretary of Defense provide in the Department’s next request for V-22 funds an explanation of how it plans to (1) introduce more realistic testing earlier into the V-22 program schedule and (2) achieve the production efficiencies desired by Congress. An agreement between Congress and DOD in this regard would be a significant step toward reaching consensus on the acquisition strategy for the V-22 program. DOD reviewed and partially concurred with a draft of this report. In its comments, DOD agreed to continually assess and correct operational deficiencies found during V-22 testing. However, DOD did not concur with our recommendation to provide Congress an explanation of how it plans to introduce more realistic testing earlier into the V-22 program schedule and achieve production efficiencies. DOD stated that it considers test results, production efficiencies, and other factors in developing its budget and does not consider additional explanatory materials necessary. DOD also stated that the Defense Acquisition Board, in April 1997, determined that the V-22 test program was adequate and properly sequenced. We continue to believe that the V-22 test program and the criteria for proceeding with the low-rate production program should be made more realistic. Given the artificial nature of the prior operational testing that was used to justify LRIP lot 1 production and the fact that earlier tests were conducted using nonproduction representative aircraft developed under the earlier V-22 full-scale development program, we believe that DOD should expand the LRIP lot 2 criteria to introduce more realistic testing into the program, using aircraft produced under the EMD phase of the program. 
We believe that at a minimum, the limitations of the prior tests, which were disclosed by the Director, Operational Test and Evaluation in its March 1997 report, should be addressed before a decision is made to proceed into the next LRIP lot. This would allow the test program to validate the projected capabilities of the EMD-configured aircraft without injecting unnecessary risk into the program. DOD also emphasized in its comments on our draft report that the Quadrennial Defense Review (QDR) resulted in an accelerated production profile that addresses many of the production efficiencies desired by Congress. The QDR recommends an overall reduction in aircraft for the Marine Corps, from 425 aircraft to 360 with an increase in the rate of production during the full production phase of the program. The four low-rate production lots of 5, 5, 7, and 8 aircraft planned during the period 1997-2000 are retained. It is during this LRIP phase of the program that we believe more realistic testing is needed and should be included as criteria for procuring the next LRIP lots. Therefore, we believe our position is consistent with the intent of the QDR recommendation, which would not take effect until the full-rate production phase of the V-22 program. DOD’s comments and our evaluation of them are presented in their entirety in appendix II. We reviewed the status of the V-22 aircraft development and readiness of the program to proceed into production. We reviewed and analyzed test plans and reports, including the Test and Evaluation Master Plan and results of three V-22 Operational Assessments; cost and budget estimates, including the SAR and President’s Budget Estimates for fiscal years 1997-99; and other program documentation, including the ORD and the EMD and LRIP contracts. We also obtained information on Marine Corps medium-lift requirements and capabilities of existing assets. 
In addition, we met with officials in the office of the Secretary of Defense and conducted interviews with program officials from the following locations: U.S. Navy Headquarters, Washington, D.C.; U.S. Marine Corps Headquarters, Arlington, Virginia; Office of the Chief of Naval Operations, Washington, D.C.; U.S. Special Operations Command, Tampa, Florida; V-22 Program Office, Crystal City, Virginia; and Naval Air Warfare Station, Patuxent River, Maryland. Finally, we visited contractor facilities at Boeing Defense and Space Group-Helicopters Division, Philadelphia, Pennsylvania, and Bell Helicopter Textron, Fort Worth, Texas. We performed our review from March 1996 through June 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of the Navy; the Secretary of the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to others on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report were Steven F. Kuhta, Assistant Director; Samuel N. Cox, Evaluator-in-Charge; and Brian Mullins, Senior Evaluator. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated August 27, 1997. 1. We recalculated the cost data obtained from the V-22 Selected Acquisition Report, using DOD inflation indices, to reflect then-year dollars for comparison to program office budget estimates. The recalculated cost data are reflected in the final report. 2. We agree that the Operational Requirements Document (ORD) validated by the Joint Requirements Oversight Council in June 1995 does not specify an airspeed requirement for carrying external loads. However, the V-22 program was justified on the basis that it would overcome the shortcomings of the Marine Corps’ current medium-lift helicopters. 
In that regard, the ORD is specific in identifying inadequate payload, range, speed and survivability in the current medium-lift force that severely limit the Marine Corps’ ability to accomplish the assault support missions in current and future threat environments. We also agree that the ORD does not identify the specific equipment that the V-22 must have to protect the aircraft and crew from laser threats. However, the ORD does require that the aircraft be designed for operations in a hostile environment with features that increase aircraft, crew, and passenger survivability. Specifically, it requires that consideration be given to protecting crew and electro-optical sensors from low- to medium-powered lasers. While the MV-22 will be equipped with an AN/AVR-2A laser-warning receiver, the CV-22 will not be so equipped. Instead, the aircraft will be produced with available space and wiring for installation of laser protection capabilities. 3. We note that the approved CV-22 exit criteria is as follows: For lot 1 advanced procurement funding, flight testing of the first of two CV-22 flight test aircraft must have started. For lot 1 full funding and advanced procurement for lot 2, flight testing with the second CV-22 aircraft must have started and the terrain following/terrain avoidance testing must have started using the first CV-22 aircraft. We question the value of “flight test started” as sufficient criteria for making an informed decision to proceed with production of the CV-22 model aircraft. 4. This comment is consistent with the discussion in the report.
GAO reported on the V-22 Osprey Program, which is intended to provide the armed services with 523 new tilt-rotor aircraft, focusing on: (1) the status of the program and areas of potential cost increases or performance challenges; and (2) whether the aircraft being developed will meet the stated requirements of each of the services. GAO noted that: (1) the V-22 has been in development for almost 15 years; (2) although Congress has provided significant funding and support to the Department of Defense (DOD), the system has not yet achieved program stability in terms of cost or aircraft design; (3) there are large disparities among the cost estimates from the program office, the contractors, and the Office of the Secretary of Defense; (4) these estimates range from about $40 million to $58 million for each aircraft; (5) the design of the aircraft will not be stabilized until further testing is completed and several important performance and operational issues, such as payload capability, aerial refueling, and downwash are resolved; (6) resolution of these issues, which could also require mission trade-offs or changes to planned operational concepts, will likely escalate program costs and extend the program schedule; (7) the April 1997 low-rate initial production (LRIP) decision was based, in large part, on the results of early operational testing using aircraft produced under an earlier full-scale development program; (8) however, those aircraft are not representative of the aircraft currently being developed during the engineering and manufacturing development phase of the V-22 program; (9) furthermore, the DOD Director, Operational Test and Evaluation (DOT&E), has characterized the tests on which the LRIP decision was based as extremely artificial because of significant test limitations; and (10) future production decisions for the V-22 should be based on more realistic testing.
Social Security provides retirement, disability, and survivor benefits to insured workers and their dependents. Insured workers are eligible for reduced benefits at age 62 and full retirement benefits between ages 65 and 67, depending on their year of birth. Social Security retirement benefits are based on the worker’s age and career earnings, are fully indexed for inflation after retirement, and replace a relatively higher proportion of wages for career low-wage earners. Social Security’s primary source of revenue is the Old Age, Survivors, and Disability Insurance (OASDI) portion of the payroll tax paid by employers and employees. The OASDI payroll tax is 6.2 percent of earnings each for employers and employees, up to an established maximum. One of Social Security’s most fundamental principles is that benefits reflect the earnings on which workers have paid taxes. Social Security provides benefits that workers have earned to some degree because of their contributions and those of their employers. At the same time, Social Security helps ensure that its beneficiaries have adequate incomes and do not have to depend on welfare. Toward this end, Social Security’s benefit provisions redistribute income in a variety of ways—from those with higher lifetime earnings to those with lower ones, from those without dependents to those with dependents, from single earners and two-earner couples to one-earner couples, and from those who don’t live very long to those who do. These effects result from the program’s focus on helping ensure adequate incomes. Such effects depend to a great degree on the universal and compulsory nature of the program. According to the Social Security Trustees’ 2005 intermediate, or best-estimate, assumptions, Social Security’s cash flow is expected to turn negative in 2017. In addition, all of the accumulated Treasury obligations held by the trust funds are expected to be exhausted by 2041. 
Social Security’s long-term financing shortfall stems primarily from the fact that people are living longer and having fewer children. As a result, the number of workers paying into the system for each beneficiary has been falling and is projected to decline from 3.3 today to about 2 by 2030. Reductions in promised benefits and/or increases in program revenues will be needed to restore the long-term solvency and sustainability of the program. About one-fourth of public employees do not pay Social Security taxes on the earnings from their government jobs. Historically, Social Security did not require coverage of government employment because there was concern over whether the federal government had the right to impose a tax on state governments, and because some of those governments already had their own retirement systems. However, virtually all other workers are now covered, including the remaining three-fourths of public employees. The 1935 Social Security Act mandated coverage for most workers in commerce and industry, which at that time comprised about 60 percent of the workforce. Subsequently, the Congress extended mandatory Social Security coverage to most of the excluded groups, including state and local employees not covered by a public pension plan. The Congress also extended voluntary coverage to state and local employees covered by public pension plans. Since 1983, however, public employers have not been permitted to withdraw from the program once they are covered. Also in 1983, amendments to the Social Security Act extended mandatory coverage to newly hired federal workers and to all members of the Congress. SSA estimates that in 2004 nearly 5 million state and local government employees, excluding students and election workers, were not covered by Social Security. In addition, about three-quarters of a million federal employees hired before 1984 are also not covered. 
Seven states—California, Colorado, Illinois, Louisiana, Massachusetts, Ohio, and Texas—account for 71 percent of the noncovered payroll. Most full-time public employees participate in defined benefit pension plans. Minimum retirement ages for full benefits vary. However, many state and local employees can retire with full benefits at age 55 with 30 years of service. Retirement benefits also vary, but they are usually based on a specified benefit rate for each year of service and the member’s final average salary over a specified time period, usually 3 years. For example, plans with a 2 percent rate replace 60 percent of a member’s final average salary after 30 years of service. In addition to retirement benefits, members generally have a survivor annuity option and disability benefits, and many receive some cost-of-living increases after retirement. In addition, in recent years, the number of defined contribution plans, such as 401(k) plans and the Thrift Savings Plan for federal employees, has been growing, and such plans are becoming a relatively more common way for employers to offer pension plans; public employers are no exception to this trend. Even though noncovered employees may have many years of earnings on which they do not pay Social Security taxes, they can still be eligible for Social Security benefits based on their spouses’ or their own earnings in covered employment. SSA estimates that nearly all noncovered state and local employees become entitled to Social Security as workers, spouses, or dependents. However, their noncovered status complicates the program’s ability to target benefits in the ways it is intended to do. To address the fairness issues that arise with noncovered public employees, Social Security has two provisions—the Government Pension Offset, to address spouse and survivor benefits, and the Windfall Elimination Provision, to address retired worker benefits. 
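The defined benefit computation described above (a benefit rate applied for each year of service against final average salary) can be sketched in a few lines. The 2 percent rate and 30-year career come from the report's own example; any dollar figure supplied to the function is illustrative only:

```python
def annual_db_pension(final_avg_salary: float, years_of_service: int,
                      benefit_rate: float = 0.02) -> float:
    """Typical public defined benefit: a rate per year of service
    applied to the member's final average salary."""
    return benefit_rate * years_of_service * final_avg_salary

# With a 2 percent rate and 30 years of service, the plan replaces
# 60 percent of final average salary (salary normalized to 1.0 here).
replacement_share = annual_db_pension(1.0, 30)
print(f"{replacement_share:.0%}")  # prints 60%
```

A plan with a different rate simply scales the result, e.g. a 1.5 percent rate over 25 years replaces 37.5 percent of final average salary.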
Both provisions depend on having complete and accurate information that has proven difficult to get. Also, both provisions are a source of confusion and frustration for public employees and retirees. Under the GPO provision, enacted in 1977, SSA must reduce Social Security benefits for those receiving noncovered government pensions when their entitlement to Social Security is based on another person’s (usually a spouse’s) Social Security coverage. Their Social Security benefits are to be reduced by two-thirds of the amount of their government pension. Under the WEP, enacted in 1983, SSA must use a modified formula to calculate the Social Security benefits people earn when they have had a limited career in covered employment. This formula reduces the amount of payable benefits. Regarding the GPO, spouse and survivor benefits were intended to provide some Social Security protection to spouses with limited working careers. The GPO provision reduces spouse and survivor benefits to persons who do not meet this limited working career criterion because they worked long enough in noncovered employment to earn their own pension. Regarding the WEP, the Congress was concerned that the design of the Social Security benefit formula provided unintended windfall benefits to workers who had spent most of their careers in noncovered employment. The formula replaces a higher portion of preretirement Social Security covered earnings when people have low average lifetime earnings than it does when people have higher average lifetime earnings. People who work exclusively, or have lengthy careers, in noncovered employment appear on SSA’s earnings records as having no covered earnings or a low average of covered lifetime earnings. 
As a result, people with this type of earnings history benefit from the advantage given to people with low average lifetime earnings when in fact their total (covered plus noncovered) lifetime earnings were higher than they appear to be for purposes of calculating Social Security benefits. Both the GPO and the WEP apply only to those beneficiaries who receive pensions from noncovered employment. To administer these provisions, SSA needs to know whether beneficiaries receive such noncovered pensions. However, SSA cannot apply these provisions effectively and fairly because it lacks this information, according to our past work. In response to our recommendation, SSA performed additional computer matches with the Office of Personnel Management to get noncovered pension data for federal retirees. These computer matches detected payment errors; we estimate that correcting these errors will generate hundreds of millions of dollars in savings. However, SSA still lacks the information it needs for state and local governments and therefore it cannot apply the GPO and the WEP for state and local government employees to the same degree that it does for federal employees. The resulting disparity in the application of these two provisions is yet another source of unfairness in the final outcome. In our testimony before this committee in May 2003, we recommended that the Congress consider giving the Internal Revenue Service (IRS) the authority to collect the information that SSA needs on government pension income, which could perhaps be accomplished through a simple modification to a single form. Earlier versions of the Social Security Protection Act of 2004 contained such a provision, but this provision was not included when the final version of the bill was approved and signed into law. In recent years, various Social Security reform proposals that would affect public employees have been offered. 
Some proposals specifically address the GPO and the WEP and would either revise or eliminate them. Still other proposals would make coverage mandatory for all state and local government employees. The GPO and the WEP have been a source of confusion and frustration for the more than 6 million workers and 1.1 million beneficiaries they affect. Critics of the measures contend that they are basically inaccurate and often unfair. For example, some opponents of the WEP argue that the formula adjustment is an arbitrary and inaccurate way to estimate the value of the windfall and causes a relatively larger benefit reduction for lower-paid workers. In the case of the GPO, critics contend that the two-thirds reduction is imprecise and could be based on a more rigorous formula. A variety of proposals have been offered to either revise or eliminate the GPO or the WEP. While we have not studied these proposals in detail, I would like to offer a few observations to keep in mind as you consider them. First, repealing these provisions would be costly in an environment where the Social Security trust funds already face long-term solvency issues. According to the most recent estimates from SSA, eliminating the GPO entirely would cost $32 billion over 10 years, or 0.06 percent of taxable payroll, and would increase the long-range deficit by about 3 percent. Similarly, eliminating the WEP would cost nearly $30 billion and increase Social Security’s long-range deficit by 3 percent. Second, in thinking about the fairness of the provisions and whether or not to repeal them, it is important to consider both the affected public employees and all other workers and beneficiaries who pay Social Security taxes. For example, SSA has described the GPO as a way to treat spouses with noncovered pensions in a fashion similar to how it treats dually entitled spouses, who qualify for Social Security benefits on both their own work records and their spouses’. 
In such cases, spouses may not receive both the benefits earned as a worker and the full spousal benefit; rather they receive the higher amount of the two. If the GPO were eliminated or reduced for spouses who had paid little or no Social Security taxes on their lifetime earnings, it might be reasonable to ask whether the same should be done for dually entitled spouses who have paid Social Security on all their earnings. Otherwise, such couples would be worse off than couples that were no longer subject to the GPO. And far more spouses are subject to the dual entitlement offset than to the GPO; as a result, the costs of eliminating the dual entitlement offset would be commensurately greater. Making coverage mandatory for all state and local government employees has been proposed to help address the program’s financing problems. According to Social Security actuaries, doing so for all newly hired state and local government employees would reduce the 75-year actuarial deficit by about 11 percent. Covering all the remaining workers increases revenues relatively quickly and improves solvency for some time, since most of the newly covered workers would not receive benefits for many years. In the long run, however, benefit payments would increase as the newly covered workers started to collect benefits. Overall, this change would still represent a net gain for solvency, although it would be small. In addition to considering solvency effects, the inclusion of mandatory coverage in a comprehensive reform package would need to be grounded in other considerations. In recommending that mandatory coverage be included in the reform proposals, the 1994-1996 Social Security Advisory Council stated that mandatory coverage is basically “an issue of fairness.” Its report noted that “an effective Social Security program helps to reduce public costs for relief and assistance, which, in turn, means lower general taxes. 
There is an element of unfairness in a situation where practically all contribute to Social Security, while a few benefit both directly and indirectly but are excused from contributing to the program.” Moreover, mandatory coverage could improve benefits for the affected beneficiaries, but it could also increase pension costs for the state and local governments that would sponsor the plans. The effects on public employees and employers would depend on how states and localities changed their noncovered pension plans to conform with mandatory coverage. For example, Social Security offers automatic inflation protection, full benefit portability, and dependent benefits, which are not available in many public pension plans. Creating new pension plans that kept all the existing benefit provisions but added these new ones would increase the cost of the total package. Under this scenario, costs could increase by as much as 11 percent of payroll, depending on the benefit packages of the new plans. Alternatively, states and localities that wanted to maintain level spending for retirement would likely need to reduce some pension benefits. Additionally, states and localities could require several years to design, legislate, and implement changes to current pension plans. Mandating Social Security coverage for state and local employees could also elicit a constitutional challenge. Finally, mandatory coverage would not immediately address the issues and concerns regarding the GPO and the WEP. If left unchanged, these provisions would continue to apply for many years to come for existing employees and beneficiaries. Still, in the long run, mandatory coverage would make these provisions obsolete. In conclusion, there are no easy answers to the difficulties of equalizing Social Security’s treatment of covered and noncovered workers. Any reductions in the GPO or the WEP would ultimately come at the expense of other Social Security beneficiaries and taxpayers. 
Mandating universal coverage would promise eventual elimination of the GPO and the WEP but at potentially significant cost to affected state and local governments, and even so the GPO and the WEP would continue to apply for some years to come, unless they were repealed. Whatever the decision, it will be important to administer the program effectively and equitably. The GPO and the WEP have proven difficult to administer because they depend on complete and accurate reporting of government pension income, which is not currently achieved. The resulting disparity in the application of these two provisions is yet another source of unfairness in the final outcome. We therefore take this opportunity to bring the matter back to your attention for further consideration. To facilitate complete and accurate reporting of government pension income, the Congress should consider giving IRS the authority to collect this information, which could perhaps be accomplished through a simple modification to a single form. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions you or other members of the subcommittee may have. For information regarding this testimony, please contact Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues, on (202) 512-7215. Individuals who made key contributions to this testimony include Daniel Bertoni, Ken Stockbridge, and Michael Collins. Social Security Reform: Answers to Key Questions. GAO-05-193SP. Washington, D.C.: May 2005. Social Security: Issues Relating to Noncoverage of Public Employees. GAO-03-710T. Washington, D.C.: May 1, 2003. Social Security: Congress Should Consider Revising the Government Pension Offset “Loophole.” GAO-03-498T. Washington, D.C.: Feb. 27, 2003. Social Security Administration: Revision to the Government Pension Offset Exemption Should Be Considered. GAO-02-950. Washington, D.C.: Aug. 15, 2002. Social Security Reform: Experience of the Alternate Plans in Texas. 
GAO/HEHS-99-31. Washington, D.C.: Feb. 26, 1999. Social Security: Implications of Extending Mandatory Coverage to State and Local Employees. GAO/HEHS-98-196. Washington, D.C.: Aug. 18, 1998. Social Security: Better Payment Controls for Benefit Reduction Provisions Could Save Millions. GAO/HEHS-98-76. Washington, D.C.: Apr. 30, 1998. Federal Workforce: Effects of Public Pension Offset on Social Security Benefits of Federal Retirees. GAO/GGD-88-73. Washington, D.C.: Apr. 27, 1988. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Social Security covers about 96 percent of all U.S. workers; the vast majority of the rest are state, local, and federal government employees. While these noncovered workers do not pay Social Security taxes on their government earnings, they may still be eligible for Social Security benefits. This poses difficult issues of fairness, and Social Security has provisions that attempt to address those issues, but critics contend these provisions are themselves often unfair. The Subcommittee asked GAO to discuss Social Security's effects on public employees as well as the implications of reform proposals. Social Security's provisions regarding public employees are rooted in the fact that about one-fourth of them do not pay Social Security taxes on the earnings from their government jobs, for various historical reasons. Even though noncovered employees may have many years of earnings on which they do not pay Social Security taxes, they can still be eligible for Social Security benefits based on their spouses' or their own earnings in covered employment. To address the issues that arise with noncovered public employees, Social Security has two provisions--the Government Pension Offset (GPO), which affects spouse and survivor benefits, and the Windfall Elimination Provision (WEP), which affects retired worker benefits. Both provisions reduce Social Security benefits for those who receive noncovered pension benefits. Both provisions also depend on having complete and accurate information on receipt of such noncovered pension benefits. However, such information is not available for many state and local pension plans, even though it is for federal pension benefits. As a result, the GPO and the WEP are not applied consistently for all noncovered pension recipients. In addition to the administrative challenges, these provisions are viewed by some as confusing and unfair. 
In recent years, various Social Security reform proposals that would affect public employees have been offered. Some proposals specifically address the GPO and the WEP and would either revise or eliminate them. Such actions, while they may reduce confusion among affected workers, would increase the long-range Social Security trust fund deficit and could create fairness issues for workers who have contributed to Social Security throughout their working lifetimes. Other proposals would make coverage mandatory for all state and local government employees. According to Social Security actuaries, mandatory coverage would reduce the 75-year actuarial deficit by 11 percent. It could also enhance inflation protection, pension portability, and dependent benefits for the affected beneficiaries, in many cases. However, to maintain the same level of spending for retirement, mandating coverage would increase costs for the state and local governments that sponsor the plans, and would likely reduce some pension benefits. Moreover, the GPO and the WEP would still be needed for many years to come even though they would become obsolete in the long run.
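The two benefit-reduction rules discussed in this testimony, the GPO's two-thirds offset and the dual-entitlement rule that pays the higher of two benefits, can be contrasted in a short sketch. The monthly dollar amounts below are hypothetical, chosen only to illustrate the mechanics:

```python
def gpo_spousal_benefit(spousal_benefit: float, govt_pension: float) -> float:
    """GPO: reduce the Social Security spouse or survivor benefit by
    two-thirds of the noncovered government pension, never below zero."""
    return max(0.0, spousal_benefit - 2.0 * govt_pension / 3.0)

def dual_entitlement(own_benefit: float, spousal_benefit: float) -> float:
    """Dually entitled spouses receive the higher of the two amounts,
    not their sum."""
    return max(own_benefit, spousal_benefit)

# Hypothetical monthly amounts, for illustration only:
print(gpo_spousal_benefit(spousal_benefit=900.0, govt_pension=900.0))  # 300.0
print(dual_entitlement(own_benefit=700.0, spousal_benefit=900.0))      # 900.0
```

In the first case a $900 noncovered pension offsets $600 of the spousal benefit; in the second, a dually entitled spouse receives the $900 spousal amount rather than $700 plus $900.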
Since its founding in 1718, the city of New Orleans and its surrounding areas have been subject to numerous floods from the Mississippi River and hurricanes. The greater New Orleans metropolitan area, composed of Orleans, Jefferson, St. Charles, St. Bernard, and St. Tammany parishes, sits in the tidal lowlands of Lake Pontchartrain and is bordered generally on its southern side by the Mississippi River and the Gulf of Mexico. Lake Pontchartrain is a tidal basin about 640 square miles in area that connects with the Gulf of Mexico through Lake Borgne and the Mississippi Sound. Many hurricanes have struck the area over the years, including Hurricane Betsy in 1965, Hurricane Camille in 1969, Hurricane Lili in 2002, and Hurricane Katrina in 2005. The hurricane surge that can inundate coastal lowlands is the most destructive characteristic of hurricanes and accounts for most hurricane-related deaths. Because of such threats, a series of flood control structures, including concrete floodwalls and levees, has been constructed in and around the New Orleans metropolitan area (see fig. 1). On August 29, 2005, Hurricane Katrina came ashore near Buras, Louisiana, about 60 miles southeast of New Orleans, with wind speeds of up to 127 miles per hour and a storm-driven wave surge of up to 30 feet. The size and strength of the storm and subsequent flooding resulted in one of the largest natural disasters in U.S. history. Storm waters overtopped floodwalls and levees in Louisiana’s Orleans and neighboring parishes, causing widespread flooding, many billions of dollars of property damage, and more than 1,300 deaths. The Corps estimates that more than one-half of the 269 miles of federally constructed levees and floodwalls in these parishes were damaged by the storm’s winds and floodwaters. 
Through a combination of permanent and temporary measures, the Corps planned to restore the level of hurricane protection to the New Orleans area that existed prior to Hurricane Katrina by June 1, 2006. To restore the pre-Katrina level of protection in a period of about 9 months, the Corps had to work quickly and, in some instances, engineer temporary solutions because not all of the repairs could be completed in time. One such temporary solution was needed along the Orleans East Bank, located south of Lake Pontchartrain, from the 17th Street Canal to the Inner Harbor Navigation Canal, and along the western bank of the Inner Harbor Navigation Canal to the Mississippi River. About 19 miles of levees and floodwalls are located along the Orleans Lakefront, the Inner Harbor Navigation Canal, and three drainage canals—17th Street, London Avenue, and Orleans Avenue—which drain rainwater from New Orleans into Lake Pontchartrain. A total of about 1 mile of levees and floodwalls was damaged along the 17th Street Canal and two sides of the London Avenue Canal, resulting in flooding of New Orleans (see fig. 2). The city’s three drainage canals are critical to avoid flooding in New Orleans from a rain storm. During rain events, the city’s Sewerage and Water Board pumps rainwater from the city into three drainage canals at 17th Street, London Avenue, and Orleans Avenue, which then flows unrestricted into Lake Pontchartrain. According to the Corps, the maximum amount of water that the Sewerage and Water Board can pump into these drainage canals is 10,500 cubic feet per second (cfs) at the 17th Street Canal, 7,980 cfs at the London Avenue Canal, and 2,690 cfs at the Orleans Avenue Canal. Because permanent structures and repairs could not be completed on the three drainage canals by June 1, 2006, the Corps decided to install temporary pumping systems to provide protection to the area for 3 to 5 years until permanent structures can be constructed (see fig. 3). 
The Corps chose to install three gates and temporary pumping systems near the points where the 17th Street, London Avenue, and Orleans Avenue drainage canals meet Lake Pontchartrain. These gates are intended to stop hurricane-induced storm surge from Lake Pontchartrain from entering the canals and possibly overtopping or breaching the canal floodwalls, which would flood the city. However, because the gates prevent the drainage canals from draining water from the city into the lake when the gates are closed during a hurricane event, temporary pumping systems are needed to pump water out of the canals and into the lake. Due to the hurricane damage sustained by the floodwalls bordering the canals, the Corps established the following safe water levels for each of the drainage canals—6 feet for the 17th Street Canal, 5 feet for the London Avenue Canal, and 8 feet for the Orleans Avenue Canal. The water level in each of these canals must be maintained at or below the safe water level in order to ensure that the already weakened canal floodwalls are not breached. Further, the total capacity of the temporary pumping systems at the interim gated closure structures that is necessary to accommodate a 10-year rainfall event without exceeding the safe water levels is 7,700 cfs at the 17th Street Canal, 5,000 cfs at the London Avenue Canal, and 1,900 cfs at the Orleans Avenue Canal. The hydraulic pumping systems installed by the Corps at the Orleans Avenue Canal were sufficient to maintain the safe water levels during a 10-year rainfall event. However, the hydraulic pumping systems installed at the 17th Street and London Avenue drainage canals could provide about 4,000 cfs and 2,700 cfs, respectively. 
In order to ensure that each pumping station had the needed capacity to pump enough water during a 10-year rainfall event, the Corps used a separate contract to acquire and install an additional 11 direct drive pumps and 14 portable hydraulic pumps at the 17th Street Canal, increasing the capacity from about 4,000 cfs to about 9,200 cfs. The Corps also installed 8 additional direct drive pumps at the London Avenue Canal, increasing the capacity from about 2,700 cfs to about 5,200 cfs. Table 1 provides the total number of pumps and pumping capacity at the 17th Street, London Avenue, and Orleans Avenue Outfall Canals. Although these additional pumps give the three canals the capacity needed to pump water out during a 10-year rainfall event, that capacity is still not sufficient to match the maximum pumping capacity of the Sewerage and Water Board’s pumps. As a result, during a hurricane event, some flooding might occur in some parts of the city from rainfall, although it is likely that this flooding would be significantly less than that which occurred from the overtopping and breaches of the canal walls during Hurricane Katrina. Appendix II provides the pumping capacity trends for the 17th Street, London Avenue, and Orleans Avenue drainage canals from June 1, 2006, through November 30, 2007. The Corps’ efforts to develop the specifications for the pumping systems were driven by its commitment to have as much pumping capacity as possible in place at the drainage canals by June 1, 2006—the start of the first Atlantic hurricane season after Hurricane Katrina. Due to the compressed schedule and the limited space available for installation, and based on the limited market research conducted by the Corps’ consultants, the Corps decided to use 60-inch hydraulic pumping systems rather than alternatives that would have involved longer delivery schedules or required more space. 
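The capacity arithmetic above can be tallied in a brief sketch. The cubic-feet-per-second figures are the report's own, except that the Orleans Avenue installed capacity is shown as its 10-year requirement, because the report states only that the original systems there were sufficient:

```python
# Capacities in cubic feet per second (cfs), taken from the report:
# (required for a 10-year rainfall event, installed after added pumps,
#  Sewerage and Water Board maximum inflow into the canal)
canals = {
    "17th Street":    (7_700, 9_200, 10_500),
    "London Avenue":  (5_000, 5_200, 7_980),
    "Orleans Avenue": (1_900, 1_900, 2_690),  # original systems sufficient
}

for name, (required, installed, swb_max) in canals.items():
    print(f"{name}: meets 10-year event: {installed >= required}; "
          f"gap below S&WB maximum inflow: {swb_max - installed:,} cfs")
```

The gap in the last column is why some rainfall flooding could still occur during a hurricane even though the 10-year requirement is met at every canal.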
The Corps’ consultants drafted contract specifications that closely matched those of one supplier, which, along with the 60-inch pumping system requirement, resulted in that supplier being in the strongest position to compete for the contract. Further, the contract itself was not written as precisely as it should have been. Specifically, the original factory test requirements were ambiguous, there were limited provisions for on-site testing, and there were no criteria for acceptance of the pumping systems by the government. The decisions made by the Corps during the procurement of pumping systems at three New Orleans drainage canals were driven largely by space and schedule considerations. The Corps began the acquisition process by contracting with two architectural and engineering consultant firms (consultants) to determine available technical options that could meet the Corps’ schedule, space, and pumping capacity needs; conduct the associated market research; and survey pump equipment suppliers. On the basis of their technical analysis, the consultants concluded that the use of hydraulic-driven pumps was the best alternative for the Corps because electric-driven direct drive pumps would need auxiliary equipment that would require more space for installation and would have a longer delivery time. They also determined that using hydraulic pumps less than 60 inches in diameter would require more pumps to be installed and require added space to provide the same amount of pumping capacity. Corps consultants drafted contract specifications that closely matched those of one supplier. The consultants conducted limited market research and found that at least two suppliers had specifications for a 60-inch hydraulic pump. One of those suppliers was MWI, a company that the consultants had spoken with as they were developing the design for the gates and pump stations along the drainage canals. 
The consultants met with MWI and also contacted at least two other pump manufacturers regarding their pumps. Of these suppliers, the consultants identified MWI as the only supplier who had actually manufactured a 60-inch hydraulic pump with a 60-inch impeller, the mechanism that drives water through the system. Another pump manufacturer had a design for a 60-inch pump, but it included only a 54-inch impeller. The consultants believed that MWI could deliver the 34 60-inch pumping systems that the Corps needed on schedule. The Corps did not have an existing technical specification for a 60-inch hydraulic pump. The consultants drafted a specification for the Request for Proposals (RFP) based on technical specifications and descriptions of the pumps contained in catalogs published by MWI and another manufacturer. The consultants told us that they had provided the Corps with a generic specification because any reference to a specific supplier had been removed. However, our analysis of the RFP’s equipment specifications indicates that they more closely matched MWI’s than the other manufacturer’s catalog descriptions. In fact, the testing specifications used for the RFP were nearly identical to those published by MWI, which included an open sump test requirement. After the other manufacturer complained that the open sump test requirement was restrictive because only MWI had an open sump, the Corps amended the RFP to delete this requirement. This open sump test requirement was incorporated into the contract at the time of award, however, because it was offered by MWI as part of its proposal. Other contractual testing and acceptance criteria were ambiguous, inadequate, or missing altogether. Specifically, the contract did not clearly state whether factory flow and head testing was required of each pump, the on-site testing requirement merely stated that there should be no leaks, and there were no final acceptance criteria in the contract. 
Terms and conditions in contracts should be clear and complete so that the parties fully understand their obligations and potential disputes can be avoided. To date, the Corps and MWI have been able to address identified deficiencies in the contract, which were largely caused by the perceived need to move forward expeditiously. However, the extent to which these or other contract issues may lead to disputes between the parties will not be known until the time of contract closeout, currently scheduled for early 2008. Given the need to procure and install the temporary pumping systems before the June 1 start of the 2006 hurricane season, the Corps decided to use a streamlined process to contract for the pumping systems. Like most other federal agencies, the Corps has statutory authority to use other than full and open competition procedures when the agency's needs are of an unusual and compelling urgency. Using this authority, the Corps streamlined parts of the acquisition process. The RFP was issued on January 13, 2006, and required that the contractors' proposals be submitted by January 18, just 5 days later. Normally the solicitation would allow for a response period of at least 30 days. The Corps received three proposals in response to its RFP. Suppliers submitted pricing information and technical proposals and made oral presentations to the Corps. The Source Selection Evaluation Board, whose voting members consisted of three Corps officials, evaluated offers using four technical evaluation factors identified in the RFP in descending order of importance: (1) technical approach, (2) project management, (3) past performance, and (4) small business or small disadvantaged business participation. The solicitation also provided that, when combined, these technical evaluation factors were weighted approximately equal to price. The Source Selection Evaluation Board rated MWI's proposal significantly higher than the other two proposals.
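The evaluation scheme described above, four technical factors in descending order of importance, together weighted approximately equal to price, can be sketched as a simple weighted score. The specific factor weights below are hypothetical; the RFP stated only the ordering of the factors and the roughly equal technical-versus-price split:

```python
def overall_score(technical_scores, price_score,
                  factor_weights=(0.4, 0.3, 0.2, 0.1)):
    """Combine four technical factor scores (descending importance:
    technical approach, project management, past performance, small
    business participation) with a price score, weighting the combined
    technical score equal to price. Weights are hypothetical."""
    technical = sum(w * s for w, s in zip(factor_weights, technical_scores))
    return 0.5 * technical + 0.5 * price_score

# Hypothetical offeror scored 0-100 on each factor and on price.
score = overall_score([90, 85, 80, 70], 75)
print(score)
```

Under any such scheme, a proposal rated significantly higher on the most heavily weighted factors, as MWI's was, would dominate the overall ranking.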
MWI’s proposal included commitments from suppliers and subcontractors to deliver the pump components needed by MWI to assemble the pumps. The Corps believed MWI represented the best chance of meeting the Corps’ critical deadline of June 1, 2006. MWI offered a price of $26.6 million, which was within 2.8 percent of the government estimate of $25.6 million. The contracting officer determined that MWI’s price was fair and reasonable and awarded a firm, fixed-price contract to MWI on January 27, 2006. The contract also contained an incentive of up to $5 million that MWI could earn for early delivery. To date, the Corps has increased the contract price by about $6 million for required pump modifications and for six additional pumping systems, bringing the total number of hydraulic pumping systems acquired to 40. Figure 4 shows a diagram of the hydraulic pumping system. The Corps and its contractors have addressed and corrected the pumping system testing and performance issues identified by both our May 2007 report and the ITR. Factory testing, which occurred from March 2006 through May 2006, revealed several issues with some components of the pumping systems, and concerns were raised that the pumping systems would not perform as intended. On May 2, 2006, the Corps modified the original contract, replacing the original testing requirements with new procedures because of schedule and performance concerns. Beginning in June 2006, however, even though all of the problems identified during factory testing had not been resolved, the systems were installed as planned because the Corps believed it was better to have some pumping capacity along the drainage canals during the 2006 hurricane season rather than none. The Corps also thought that most of the issues identified during factory testing could be resolved after installation. 
The Corps and the contractors took several steps to correct the known performance issues after installation and, as of September 2007, all of the pumping systems have been reinstalled and all of the outstanding repairs have been completed. According to Corps officials, the results of on-site testing now show that the 40 hydraulic pumping systems are fully operational and final acceptance of the pumping systems is scheduled for the beginning of calendar year 2008. On May 2, 2006, the Corps issued modification No. 4, “revised test procedures,” to the contract. According to the contracting officer responsible for oversight of the pumping system contract, these revised testing procedures replaced the original factory testing requirements with new testing requirements. The contract initially required each pump and hydraulic power transmission system to be factory pressure tested statically and dynamically. In addition, full-size flow and head testing was to be witnessed by the government prior to shipment of the pumping systems. The pump flow and head testing was to be conducted in an open sump at the manufacturer’s testing facility in accordance with Hydraulic Institute (HI) standards and in the presence of a registered professional engineer. According to the contracting officer, modification No. 4 replaced these testing procedures. The modification required, among other things, testing the hydraulic drive units for a minimum of 3 hours and utilizing previous model tests of the pump design to predict the pump capacity. Further, the modification required all pumps to be pressure tested for 90 minutes. According to New Orleans District Corps officials responsible for oversight of the contract, the original testing requirements were interpreted by a Corps inspector and the ITR to include full-size flow and head testing of each of the pumps. Furthermore, the ITR concluded that modification No. 
4 did not specifically delete the original testing requirements and, therefore, assumed the testing that had been conducted did not meet the contract requirements regarding full-size flow and head testing for each pump in accordance with HI standards. Based on this assumption, the ITR concluded that the contractor owed the government a refund because it had not completed the testing required in the contract. The ITR's reading of the modification may have overlooked the modification's purpose, however, which was to adjust the required testing to focus on those elements of the pumps in need of further refinement, given the limited time available. We believe the ITR may have reached this conclusion because it did not discuss, with either the contracting officer or the Corps' technical officials, the intent of the "revised test procedures" modification to the original testing requirements. Officials from the New Orleans District told us that it was never the intention of the Corps to interpret the testing requirements as requiring every pump to be full-size flow and head tested in accordance with HI standards; only static and dynamic tests were originally required of each pump. Corps technical and contracting officials said the revised testing procedures contained in the modification were developed to focus on the mechanical issues that had been identified and, therefore, replaced the original testing requirements; among other things, the revised procedures allowed the use of model test results in lieu of HI tests. The Corps and the contractors have addressed and corrected the concerns raised about some components of the pumping systems during factory and on-site testing. As noted in our May 2007 report and in the ITR, the primary concerns identified during testing included undersized gear oil circulation motors, hydraulic motor vibrations, the design of the hydraulic intake line, suspect pipe welds, and lower than expected pumping capacity.
As a result of the concerns identified during testing, the Corps had no assurance that the pumping systems would operate to capacity if needed during the 2006 hurricane season. Nevertheless, the pumping systems were installed as planned because the Corps believed it was better to have some pumping capacity along the drainage canals during the 2006 hurricane season rather than none. By June 1, 2006, the Corps had installed 11 pumping systems, and by July 2006 it had installed 34, although it is uncertain how much of the theoretical capacity of these pumping systems would have been available, and for how long, if needed during the 2006 hurricane season. The Corps also believed that many of the issues identified during factory testing could be resolved after installation. After installation, the Corps and its contractors took several steps to correct known performance issues with the pumping systems. The main performance issues, and the ways the Corps and the contractors addressed each of them, are described in more detail below. During factory testing, the Corps observed that the gear oil circulation pump motors were overheating, which resulted in the failure of some of the motors. MWI determined that the pump motors were too small. All of the motors were eventually replaced with larger gear oil circulation motors, resolving the problem. During on-site testing in August 2006, the hydraulic motors exhibited greater than normal vibrations. According to the ITR, this condition could have led to the failure of the equipment. Initial analysis of the problem indicated that there may have been a hydraulic short in the Rineer hydraulic motor that drives the main pump impellers. The motor manufacturer made modifications to the motor, and preliminary testing of the motors in late August 2006 appeared to confirm that these modifications eliminated the vibrations. However, upon further testing, vibrations were still present to varying degrees.
Additional on-site testing was performed, and in late November 2006, it was determined that the vibrations were due to undersized springs in the Rineer hydraulic motors. The motor manufacturer replaced the undersized springs with heavier springs. According to Corps officials, on-site tests witnessed by the government after the installation of the new springs and measurements conducted by a third-party contractor document that the pumping systems now operate with no apparent vibration issues. Because of concerns that the hydraulic intake lines could adversely affect pumping performance, the Corps requested that MWI redesign and reinstall the hydraulic intake lines on all of the pumping systems. During factory testing, the Corps observed a high rate of failure of the Denison hydraulic pumps on the drive units. The Denison motors pump the hydraulic fluid from a reservoir to the Rineer motor, which then turns the pump impeller. A preliminary assessment revealed that the majority of the issues identified in the factory were caused by air entrainment (or dry run condition) in the hydraulic pumps. The dry run condition was attributed to air getting into the hydraulic system upon initial start-up of the drive unit. To eliminate the dry run issue, two interim changes were made to the system until a more permanent fix could be implemented: (1) a check valve was installed on all of the hydraulic intake lines, and (2) the pump start-up procedure was modified so that the system was started at a lower speed and gradually increased to the normal operating speed. The ITR concluded that the pumping systems would probably not have performed as designed because the inclusion of a check valve would require priming the pump prior to start-up and the original intent of the design was to allow for unmanned operation of the equipment. Both Corps and MWI officials stated that the ITR was incorrect in assuming that the pumps would have to be primed using the check valve at every start-up.
Instead, these officials stated that the pumping systems would have operated as intended because using this valve to prime the Denison pumps is only necessary immediately after maintenance is performed on the system. Additionally, according to a Lake Borgne Levee District official, this pump design has been successfully used for about 20 years without having to prime the pumps prior to start-up. Nevertheless, in order to ensure that air would not be pulled into the hydraulic pumps, causing failure of the system, the Corps requested that MWI redesign the hydraulic intake system to provide for a flooded suction without a siphon. Figure 5 shows a drawing of the original hydraulic pump design with siphon. Corps officials from the New Orleans District emphasized to us that the redesign was requested to more adequately meet their needs, not because of concerns about the pumping systems operating as intended. MWI subsequently agreed to modify the design of the hydraulic intake line at the request of the Corps. According to Corps officials, by the end of July 2007 and at its own expense, MWI had redesigned and reinstalled the new flooded suction design on all 40 pumping systems (see fig. 6). Because of questionable welds identified on the pump housing, the Corps decided to replace certain welds to ensure they would not fail during pump operations. Upon inspection of the pump housing, the Corps determined that some of the welds on the pump housing may not be sufficient. While MWI provided the Corps with a “fit for service letter” for all of the welds on the pump housing and an extended warranty, the Corps decided that it was prudent to replace the welds on the pump housing below the base plate (the segment of the pump that is below the water level) in order to ensure that the welds would not fail during pumping operations. All of the necessary welds have been corrected, and the Corps plans to negotiate this additional cost during contract closeout. 
Additionally, issues were raised about the adequacy of the welds on the hydraulic piping, which carries high pressure hydraulic fluid from the Denison pump to the Rineer motor. The hydraulic piping was subsequently visually inspected and pressure tested to 1.5 times its operating pressure as part of the quality control process. The testing results indicated that the piping was adequate for transmitting power from the diesel engine to the water pump. Initial pumping capacity testing indicated that the pumping systems may not have been performing at the design capacity level. In April 2006, MWI conducted full-size factory flow and head tests on the hydraulic pumps. A representative from the Corps’ Engineering Research and Development Center (ERDC) reviewed these test results and concluded that the test results showed that the pumps would operate at about 96 percent of the specified capacity. However, according to the ITR, these tests were not conducted in accordance with HI standards and, therefore, were invalid. In August 2006, on-site flow and head testing was conducted at the canals. In order to test the pumping systems, the interim gates were closed and water was pumped into the canal by the city’s Sewerage and Water Board to raise the water level in the canal to the elevation necessary for the pumping systems to be tested. However, because adequate water levels in the canal could not be achieved to replicate design conditions, the pumps could not reach a fully primed condition. The Corps decided to invert the discharge pipes in order to enable the pumps to reach a primed condition with less than design water conditions in the lake. This facilitated testing of the pumping systems and allowed measurements to be recorded and analyzed. In September 2006, a representative from ERDC was consulted and performed on-site flow and head tests of pumping systems at the London Avenue canal. A month later the Corps and ERDC performed the same tests at the 17th Street canal. 
Data collected from these on-site tests revealed that the pumping systems were working near the appropriate capacity. Based upon the on-site testing results and at the suggestion of the ERDC representative, all of the discharge pipes at all of the canals were inverted and cut at a 30-degree angle, which allows the pumps to prime at lower canal water elevations and enhances the flow rates (see fig. 7). In November 2006, another full-size factory flow and head test was conducted by MWI and ERDC. However, due to constraints at the testing facility, the full-size factory test, which was done in consultation with the ERDC representative, was completed with deviations from the HI standards. This test revealed that pumping capacity ranged from 93.6 to 97.6 percent of the design specification, and the pumps performed without problems during the 2 days of testing. According to Corps officials, MWI further agreed to construct a model test to confirm the pumping systems would perform within HI standards. In September 2007, a Corps ERDC official witnessed a model test conducted by MWI and prepared a report, which concluded that the pumping systems would operate at 98.6 percent of the design capacity. According to Corps officials, these results are within acceptable limits and any issues remaining with the final pumping capacity will be negotiated at contract closeout. Corps officials also stated that the Corps plans to make final acceptance of the pumping systems at the beginning of calendar year 2008. The original pumping system contract lacked clearly defined on-site testing procedures, requiring only that the pumps and hydraulic equipment be tested for leaks. In light of the various issues surrounding the pumping systems, the Corps and MWI agreed that it was necessary to show that all of the pumping systems could operate at a steady state after installation.
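The percent-of-design figures cited in this discussion (93.6 to 98.6 percent) are simply the ratio of measured flow to the design specification. A minimal sketch; the flow values used below are hypothetical, chosen only to illustrate the calculation:

```python
def percent_of_design(measured_flow: float, design_flow: float) -> float:
    """Measured pumping capacity as a percentage of the design
    specification, rounded to one decimal place as reported."""
    return round(100.0 * measured_flow / design_flow, 1)

# Hypothetical flows: a pump specified at 1,000 cfs that moves 986 cfs
# in a model test would be reported at 98.6 percent of design capacity.
print(percent_of_design(986.0, 1000.0))  # 98.6
print(percent_of_design(936.0, 1000.0))  # 93.6
```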
According to Corps and MWI officials, a major challenge with on-site testing of the pumping systems is simulating the amount of water that would be present in the canals and the lake during a storm event. Under normal conditions, when there are low water levels in the canals, it is not possible to test each pump system for an extended period of time, and any tests conducted cannot approach the design capacity of the pumping systems. Due to this limitation, the Corps subsequently developed specific pumping system acceptance testing procedures that, among other things, include running each pumping system continuously for 2 hours. Corps officials told us that because most of the issues associated with the pumping systems occurred within the first 45 minutes of operation, the 2-hour testing period for each pumping system was sufficient. In its June 2007 report, the ITR team concluded that at the time of its review in September 2006, the pumping systems would not perform as intended because of issues encountered in factory testing in early 2006. Since September 2006, there have been a number of analyses, changes, and additional tests of the pumping systems to address these earlier concerns. For example, between November 2006 and September 2007, the Corps completed all of the repairs that were outstanding at the end of the 2006 hurricane season and noted in the ITR, and reinstalled all 40 pumping systems. In addition, as of September 2007, each pumping system had been successfully tested on site for at least 2 hours, providing greater assurance that the systems will perform as designed during future hurricane seasons. On September 27, 2007, GAO officials witnessed the pumping systems performing at both the 17th Street and London Avenue Canals (see fig. 8).
According to Corps officials, because all of the outstanding repairs have been completed and on-site testing indicates that the system is now fully operational, final acceptance of the pumping systems and the contract closeout is expected to be completed early in calendar year 2008. Contract files for the pumping systems, although incomplete at the time of the ITR review, now contain the required documentation for the type of contract and the value of the associated modifications. In a number of cases, however, Corps officials inserted required documentation in the contract files several months after modifications were issued and only after the ITR reported its findings. While the ITR correctly noted the absence of some forms of required documentation, we found that much of the documentation specifically cited—including requests for proposals, independent government estimates, certified cost or pricing data, technical analyses, and price negotiation memorandums—was not required for the modifications in question. In addition, while the ITR found that it appeared as though the contractor developed the scope of work and pricing for some of the modifications without a subsequent analysis by the Corps, we found no instance of this occurring. Rather, our review found that, for most of the contract modifications there was extensive back and forth discussion, usually by e-mail, between officials from the Corps and MWI. The ITR team reviewed 18 of the first 30 contract modification files and reported that many lacked significant documentation. Specifically, the ITR identified 13 modification files with deficiencies—most pertaining to documentation of the Corps’ determination of fair and reasonable pricing. Our review confirmed that significant documentation was added to the files only after the ITR team issued its report. 
We reviewed the files for the 32 post-award modifications, focusing in depth on the files related to the 13 modifications found by the ITR team to contain deficiencies, as well as 2 additional modifications that were issued after our May 2007 report and the ITR review. Of the modifications we reviewed in depth, 10 contained internal memorandums, prepared by the contracting officer after the fact, to document price reasonableness or the events supporting the modification. Another 2 modifications contained undated memorandums of price reasonableness signed by the contracting officer. Finally, of the eight purchase request and commitment forms on file, five were prepared on the same date to retroactively document the availability of funds for modifications that were issued 9 to 17 months earlier. Documentation in some of the files, however, suggests that the availability of funds was determined through other means at the time the modifications were signed. In response to the ITR, the Corps’ contracting officer acknowledged that the contract files could have been better managed but stated the Corps felt it was more important to get the pumps installed in a timely manner. In order to do this, the Corps issued the modifications with the intention of settling all outstanding issues with the contractor before closing out the contract. The Corps agreed with the ITR, however, that certain documentation was missing and took corrective actions to complete the files. The contracting officer, whom the ITR team did not meet with for their review, noted that because many of the people working on the pumping systems procurement were rotating through the District Office, they may not have completed or submitted all of the necessary paperwork before leaving. 
Even though the documentation is now complete, preparing documentation months after an event occurs increases the likelihood that it will contain inaccuracies or ambiguities, which make it difficult to resolve any disputes that may arise. As of October 2007, the contract modification files appeared up to date and consistent with Federal Acquisition Regulation (FAR) requirements. While the ITR correctly noted the absence of some forms of required documentation, much of the documentation specifically cited by the ITR (including requests for proposals, independent government estimates, certified cost or pricing data, technical analyses, and price negotiation memorandums) was not required for the modifications in question. In some respects, it appears the ITR treated the pumping systems contract as if it were for construction rather than supplies. Different documentation requirements apply to these types of contracts. Ten of the modifications we reviewed in depth increased contract costs and, therefore, required documentation of fair and reasonable pricing. While independent government estimates are one technique that can be used to analyze price and are required for construction contracts, they are not specifically required for supply contracts, such as the contract for the pumping systems. Nonetheless, the Corps obtained independent government estimates for six of the modifications and included them in the files after the ITR review. None of the 10 modifications with additional costs that we reviewed in depth required the contractor to provide certified cost or pricing data. Specifically, we found that 7 of the modifications fell under the threshold requiring cost or pricing data. The contracting officer determined that cost or pricing data was not required for another modification because it combined separately priced changes from 2 previous modifications that were each below the threshold.
Finally, for 2 modifications related to the purchase of six additional pumps, the contracting officer concluded that adequate price competition existed from the base contract and, therefore, additional pricing data was not required. At least some information on pricing provided by the contractor was included in the files for all 10 of the modifications that involved additional costs. According to the FAR, when contractor certified cost and pricing data are not required, price analysis shall be used to determine a fair and reasonable price. While the FAR provides numerous analysis techniques, including the use of independent government estimates, it does not require the use of any one method. For 8 of the modifications we reviewed, the Corps' contracting officer documented price analysis and negotiations with the contractor through signed internal memorandums for the files, and for 2 modifications, used price negotiation memorandums. In addition, while not required, the Corps obtained internal technical analyses for 3 of the modifications we reviewed in depth to determine the reasonableness of MWI's proposals. Table 2 summarizes GAO's analysis of the ITR's findings regarding missing documentation in the contract files. In addition to contract documentation issues, the ITR also reported that it appeared, in some circumstances, as though the contractor developed the scope of work and pricing for the modifications without a subsequent analysis by the Corps. We found no instance of this occurring. Rather, our review of the files indicates that, for most of the contract modifications, there was extensive back-and-forth discussion, usually by e-mail, between personnel from the Corps and MWI. These discussions focused on the causes of and solutions to technical issues, as well as the costs of corrective actions. While each of the modifications was unique, modification No. 2 is illustrative of many of the contract modifications we reviewed.
Specifically, shortly after award of the contract, the Corps determined that it needed the capability to control the pumps from a remote location, since in the event of a hurricane the operator would be required to seek shelter in a control booth. The Chief of Engineering from the Corps prepared a request to modify the contract to require master pump control panels. The request contained detailed specifications of what was required and estimated that the additional cost would be $150,000. The contracting officer sent the request to MWI and asked for a cost proposal. MWI replied through an e-mail that repeated the specifications provided to it by the Corps and offered a price of $188,699. The Corps requested additional support for the price, and MWI responded with a copy of the quote it had received from its supplier, pricing for MWI's markup, and the additional work MWI would perform. A Corps engineer reviewed this information and informed the contracting office that MWI's proposed price was reasonable. The Corps issued a contract modification with the specifications it developed at the price quoted by MWI. As was the case for a number of modifications, there was no contemporaneous price reasonableness document signed by the contracting officer; rather, the file was later supplemented with an undated "after the fact" memorandum concluding that MWI's price for the modification was reasonable. As of October 31, 2007, the Corps had paid the contractor about $30.5 million of the $33 million contract for the 40 hydraulic pumping systems and has plans for reconciling mistaken payments it made. The Corps made payments to the contractor only after receiving invoices from the contractor for delivered items and services. In most cases, the Corps paid only 80 percent of each invoice and held the other 20 percent as retained funds to ensure that the contractor was not overpaid and that any performance issues were addressed.
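The retainage practice described above can be expressed as a small calculation. A sketch under the assumption that retainage was a flat 20 percent of each invoice, as the 80/20 split described in this report suggests; the invoice amount shown is hypothetical:

```python
def split_invoice(invoice_amount: float, retainage_rate: float = 0.20):
    """Split an invoice into the amount paid now and the amount withheld
    as retained funds. Assumes a flat-rate retainage, per the 80/20
    practice described in the report."""
    withheld = invoice_amount * retainage_rate
    return invoice_amount - withheld, withheld

# Hypothetical $100,000 invoice: $80,000 paid, $20,000 retained.
paid, withheld = split_invoice(100_000.0)
print(paid, withheld)
```

Retained funds accumulated this way gave the Corps leverage to offset later overpayments and unresolved performance issues, as described below in connection with the duplicate payments.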
The ITR identified a few instances where the contractor had received payment more than once for the same item. Our review confirmed that this did occur. We found, however, that these payments were made in error by the Corps and did not indicate any improper behavior on the contractor's part. Specifically, on December 6, 2006, the Corps received one invoice requesting payment for three drive units and three pumps valued at about $2.2 million because they were complete, and MWI believed that they could be delivered if the Corps wanted them at that time. On the same day, the Corps notified MWI that it could not pay for the pumps and drive units until they were actually delivered. MWI then e-mailed the Corps, requesting that the Corps ignore the original invoice and stating that MWI would send new invoices for the drive units and pumps upon shipment. The Corps subsequently received three separate invoices, each requesting payment for one drive unit and one pump. However, the Corps paid all of the invoices, including the one that the contractor had told it to ignore. As a result, the Corps paid twice for the same three pumps and three drive units. According to the Corps' contracting officer, the duplicate payments will be corrected by deducting the balance from withheld funds and not paying some outstanding invoices. Our review found no other instances where duplicate payments were made to the contractor. We also found 14 instances in which the contractor sent the Corps invoices for completed work that have not been paid. The net effect is that the contractor has not been overpaid under the contract. On June 8, 2007, the Corps sent a letter to MWI providing an explanation as to why the Corps had not paid these outstanding invoices, and describing how the Corps planned to reconcile the duplicate payments made in January and February 2007 by subtracting the amount of the outstanding invoices from any additional invoices it received.
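The duplicate payment described above is essentially a matching failure: the combined invoice and the per-unit replacement invoices covered the same equipment under different invoice numbers, so checking invoice numbers alone would not have caught it. A minimal sketch of reconciliation keyed to the items delivered rather than the invoice numbers; the identifiers and ledger below are hypothetical, not from the contract files:

```python
from collections import Counter

def duplicate_item_payments(payments):
    """Flag items paid for more than once. `payments` is a list of
    (invoice_id, item) pairs; matching is done on the item delivered,
    not the invoice number. Identifiers are hypothetical."""
    counts = Counter(item for _, item in payments)
    return sorted(item for item, n in counts.items() if n > 1)

# Hypothetical ledger: the combined invoice for pump/drive-unit 1 was
# paid, and so was the per-unit replacement invoice for the same items.
payments = [
    ("combined-invoice", "pump-1"),
    ("combined-invoice", "drive-unit-1"),
    ("replacement-invoice-1", "pump-1"),
    ("replacement-invoice-1", "drive-unit-1"),
]
print(duplicate_item_payments(payments))  # ['drive-unit-1', 'pump-1']
```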
From July through October 2007, the Corps made four additional payments to the contractor from the payments it had withheld, totaling about $1.8 million. The Corps has still not made final payment for the outstanding amount remaining on the contract. In addition, the Corps has withheld payments related to an early delivery incentive of approximately $5 million until final acceptance of the pumping systems. According to Corps officials, the final payment and reconciliation of the contract, including any incentive payments or penalties, will be settled with the contractor after final acceptance of the pumping systems. The Corps expects this to take place in the early part of calendar year 2008.

The Corps' actions in awarding and administering the pumping system contract were generally in accordance with federal requirements. However, in its haste to award the contract and acquire and install the pumps, the Corps did not develop a contract that was clear and precise with respect to testing and acceptance criteria and did not always promptly prepare required contract-related documents. In some cases, this has led to uncertainties about exactly what was required of the contractor to comply with the contract's terms and conditions. This also creates the potential for contract disputes, which can be difficult, expensive, and time-consuming to resolve. In addition, in those cases where required documents were prepared "after the fact," there is an increased likelihood that documents prepared months after events have occurred may contain inaccuracies, as memories fade and key personnel may have moved on to other positions. While we recognize that this procurement was conducted under exigent circumstances, we believe that the procedures used by the Corps could be improved for future procurements. For this reason we recommend that the Secretary of Defense direct the Commanding General and Chief of Engineers of the U.S. 
Army Corps of Engineers to (1) take steps, through additional guidance or otherwise, to reinforce the importance of adherence to sound acquisition practices, even during expedited procurements, including ensuring that important contract provisions, such as any required testing, are clear so that the contractor and the government understand what conditions or criteria must be met for successful completion of the contract; and (2) develop procedures to ensure that any required contract-related documentation, including that related to contract pricing, is completed and filed within a reasonable period of time.

The Department of Defense provided written comments on a draft of this report, which are reprinted in appendix III. The Department of Defense concurred with our recommendations and provided information on what actions it would take to address them. Concerning our recommendation to adhere to sound acquisition practices, the Department of Defense said the Secretary of Defense will direct the Corps to send guidance to all Corps offices emphasizing the need for clearer technical specifications so that the contractor and government understand what conditions or criteria must be met for successful contract completion. To address our recommendation to ensure more timely completion of required contract file documentation, the Department of Defense said the Secretary of Defense will direct the Corps to review and revise as necessary current policies and regulations. The Department of Defense also provided us with technical comments, which we have incorporated throughout the report, as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees; the Secretary of Defense; and the Commanding General and Chief of Engineers of the U.S. Army Corps of Engineers. 
We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or any of your staff have any questions about this report, please contact one of us at (202) 512-3841 or mittala@gao.gov, (202) 512-4841 or woodsw@gao.gov, or (202) 512-6923 or dornt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To evaluate efforts by the U.S. Army Corps of Engineers (Corps) to solicit, award, and administer the pumping system contract, we reviewed the Corps’ plans for the interim gates and temporary pumping systems consisting of the 40 hydraulic pumps installed at the three New Orleans drainage canals. We also reviewed applicable Federal Acquisition Regulation criteria, especially pertaining to contract pricing; the contract and specifications; e-mails, correspondence, and other supporting documentation related to the solicitation and award of the contract; factory and on-site test results; performance requirements; the 32 contract modifications and supporting documentation; the Mississippi Valley Division (MVD) Independent Team Report (ITR); the Corps project delivery team’s response to the ITR; contractor invoices and payment records; and the Corps’ plans for increasing pumping capacity through 2007. We also visited the 17th Street, London Avenue, and Orleans Avenue Canals and observed the operation of the pumping systems. 
We interviewed contracting and program officials from (1) Corps Headquarters, New Orleans District, other Corps districts, and members of the MVD's technical review team related to the contract and pump performance; (2) Moving Water Industries Corporation and two other pump suppliers that bid on the solicitation; and (3) the architectural and engineering consulting firms under contract with the Corps that researched available pumping system alternatives, including qualified pump manufacturers, pump delivery timelines, and costs, and that helped design the canal gates and pumping stations. We conducted our work from September through December 2007 in accordance with generally accepted government auditing standards.

Appendix II: Pumping Drainage Canals (canal pumping capacities in cubic feet per second (cfs))

In addition to the contacts named above, Ed Zadjura, Assistant Director; Matthew Reinhart; Katherine Trimble; Christine Frye; James Dishmon; Rich Johnson; Marie P. Ahearn; and Kenneth E. Patton made significant contributions to this report.
Hurricane Katrina caused several breaches in the floodwalls along three drainage canals in New Orleans, contributing to catastrophic flooding. To restore the pre-Katrina level of hurricane-related flood protection, the Army Corps of Engineers (Corps) decided to acquire several large-capacity pumping systems. During the process of acquiring, testing, and installing the pumping systems, issues with the pump contract and operation of the pumping systems came to light, including several identified in a Corps Independent Team Report (ITR). GAO was asked to evaluate the Corps' efforts to (1) develop contract specifications and award the contract, (2) address pumping system performance issues, (3) document contract modifications, and (4) reconcile contract payments. GAO reviewed contract and testing documents, observed the operation of the pumping system, and interviewed officials from the Corps, its consultants and contractors, and the ITR team. Schedule concerns drove the Corps' decisions in developing specifications for the pumping systems and awarding the contract, but the rush to award the contract resulted in deficiencies in key contract provisions. Specifically, the original factory test requirements were ambiguous, there were only limited provisions for on-site testing, and there were no criteria for acceptance of the pumping systems by the government. The Corps conducted an expedited competition to contract for the pumping systems and selected a supplier for contract award based largely on its ability to deliver the pumping systems by the June 1 start of the 2006 Atlantic hurricane season. The Corps and the contractors have addressed and corrected known performance issues with the pumping systems. Concerns included hydraulic motor vibrations, the design of the hydraulic intake line, suspect pipe welds, and lower than expected pumping capacity. 
The pumping systems were installed prior to correcting these issues because the Corps believed it was better to have some pumping capacity along the drainage canals during the 2006 hurricane season rather than none, despite uncertainty over how much of the pumping system capacity would be available, and for how long, if needed. Between November 2006 and September 2007, the Corps and the contractors completed all of the repairs and reinstalled the pumping systems. Documents that GAO reviewed indicate that, as of September 2007, each pumping system had been successfully tested on site for at least 2 hours, thus providing greater assurance that they will perform as designed. The contract files for the pumping systems contained the required documentation for the type and value of the contract and associated modifications, though, in a number of cases, documentation was inserted in the contract files several months after modifications were issued and only after the ITR reported its findings. While the ITR correctly noted the absence of some required documentation, GAO found that much of the specific documentation cited as missing was not required for the modifications in question because of the nature and value of these modifications. In addition, while the ITR found that it appeared as though the contractor developed the scope of work and pricing for some of the modifications without a subsequent analysis by the Corps, GAO found no instance of this occurring. As of October 31, 2007, the Corps had paid the contractor about $30.5 million of the $33 million contract amount. In a few instances, the Corps made duplicate payments to the contractor. GAO found that these payments were due to Corps mistakes, not inappropriate billing by the contractor. GAO found no other cases of duplicate payments. The Corps plans to adjust for the duplicate payments by deducting the balance from remaining funds, including any incentive payments, owed to the contractor. 
According to Corps officials, final payment and reconciliation of the contract is expected by early 2008; however, it is unknown to what extent contract or pump performance issues will affect the final amount paid for the contract during the close-out process.
There are no legal definitions for a virtual economy or currency, but generally, a virtual economy comprises the economic activities among a community of entities that interact within a virtual setting, such as an online, multi-user game. Virtual economies can be closed, meaning the economic activities and units of exchange used within the community do not interact with the real economy outside of the virtual setting, or they can be open, with some economic activity occurring in both the virtual setting and the real economy. A virtual currency is, generally, a digital unit of exchange that is not backed by government-issued legal tender. Virtual currencies can be used entirely within a virtual economy, or can be used in lieu of a government-issued currency to purchase goods and services in the real economy. Some virtual economies may function similarly to barter exchanges, where bartering is the exchange of goods or services in lieu of monetary payments. For example, a carpenter may build a desk for a dentist in exchange for dental work. Barter transactions are taxable transactions, and taxpayers must report the fair market value of the good or service received on their tax returns. Some of the variations in virtual currencies and their interaction with the real economy are shown in figure 1.

2 The 2007 Taxpayer Assistance Blueprint—a 5-year strategic plan for improving service to taxpayers—is a collaborative effort of IRS, the National Taxpayer Advocate, and the IRS Oversight Board. Congress has received annual update reports on the implementation of the blueprint.
3 The term "barter exchange" means any organization of members providing property or services who jointly contract to trade or barter such property or services. 26 U.S.C. § 6045.
4 Mining rewards halve every time the network reaches 210,000 blocks, or approximately every four years. From inception through November 2012, rewards were 50 bitcoins; they then halved to 25 bitcoins, and in 2016 rewards are expected to halve again to 12.5 bitcoins.
5 Given these limitations, we did not test the reliability of data, such as the data generated from the bitcoin network, but are providing some figures to provide context for the possible size of these markets.
6 http://blockchain.info. (Date accessed May 1, 2013.)
7 Due to data limitations, it is difficult to calculate the velocity, or the rate at which bitcoins are spent, and the number of transactions between unique users in a given time period.
8 https://mtgox.com. (Date accessed May 1, 2013.) https://mtgox.com operates the largest bitcoin exchange. The site has daily and monthly limits on how many bitcoins may be exchanged back to U.S. dollars or other virtual or government-issued currencies. These limits may be raised if users provide additional documentation confirming their identity.

According to Linden Lab, creators of Second Life, residents exchanged more than US$150 million worth of Linden dollars within Second Life's economy in the third quarter of 2010. IRS is responsible for ensuring taxpayer compliance for all economic areas, including virtual economies and currencies. One mechanism that assists IRS in enforcing tax laws is information reporting, through which third parties report to IRS and taxpayers on certain taxpayer transactions. For example, subject to certain thresholds, third-party settlement organizations are required to report on Form 1099-K payments in settlement of third-party network transactions. A common example of a third-party settlement organization is an online auction-payment facilitator, which operates as an intermediary between buyers and sellers by transferring funds between their accounts in settlement of purchases. Another type of third-party information reporting is performed by barter exchanges, which, generally, are organizations that facilitate barter transactions among exchange members. 
Such barter exchanges are required to report on Form 1099-B each member's barter transaction proceeds. Third-party information reporting is widely acknowledged to increase voluntary tax compliance, in part because taxpayers know that IRS is aware of their income. Likewise, in its role in administering the tax code, IRS must implement the laws Congress enacts through detailed guidance. To accomplish this responsibility, IRS publishes several forms of guidance, such as regulations, revenue rulings and procedures, and notices. IRS also provides more informal guidance on its website based on factors such as perceived need, media coverage, or IRS staff identifying an emerging tax compliance issue. As outlined in IRS's Taxpayer Assistance Blueprint and related reports, a key part of IRS's strategy for preventing and minimizing noncompliance is outreach to taxpayers to help them understand and meet their tax responsibilities. One of the guiding principles of this approach is to enhance IRS's website so that it becomes the first choice of taxpayers for obtaining the information they need to comply.

9 Third-party settlement organizations must file Form 1099-K if gross payments to a payee exceed $20,000 and there are more than 200 transactions with the payee in a given tax year.
10 For federal tax purposes, all income is taxable, although the tax code excludes some items from income, such as gifts or inheritances, subject to exceptions, while it allows other items to be deducted to reduce taxable income, subject to limitations and restrictions, such as trade or business expenses.

David plays an online game and earns virtual money that can be used only within the game; his winnings have no value outside the game and David cannot exchange his online money for U.S. dollars. David has not engaged in a taxable transaction. Ann plays an online game and amasses virtual tools that are valuable to her avatar. The online game does not allow users to directly exchange their virtual tools for U.S. dollars, but rather they can do so using a third party, making this a hybrid system. 
Ann uses a third-party exchange not affiliated with the online game to coordinate the transfer of her virtual tools to another player in exchange for U.S. dollars. The transfer is conducted by the third-party exchange and payment is mediated by a third-party payment network. Ann may have earned taxable income from the sale of these virtual tools. John is a resident of Second Life. He rents virtual property to other residents who pay him in Linden dollars. At the end of the year, John exchanges his Linden dollars for U.S. dollars and realizes a profit. John may have earned taxable income from his activities in Second Life. Bill is a bitcoin miner. He successfully mines 25 bitcoins. Bill may have earned taxable income from his mining activities. Carol makes t-shirts and sells them over the Internet. She sells a t-shirt to Bill, who pays her with bitcoins. Carol may have earned taxable income from the sale of the t-shirt.

IRS, tax experts, academics, and others have identified various tax compliance risks associated with virtual economies and currencies, including underreporting, mischaracterization, and evasion. These risks are not unique to virtual economies and currencies, as they also exist for other types of transactions, such as cash transactions, where there are not always clear records or third-party tracking and reporting of transactions. The tax compliance risks we identified for virtual economies and currencies are described below.

Taxpayer lack of knowledge of tax requirements. Income is generally defined as any undeniable accessions to wealth, clearly realized, and over which the taxpayers have complete dominion. The unsophisticated taxpayer may not properly identify income earned through virtual economies or currencies, such as virtual online game assets exchanged for real-world currency, as taxable income. If taxpayers using virtual currencies turn to the Internet for tax help, they may find misinformation in the absence of clear guidance from IRS. 
For example, when we performed a simple Internet search for information on taxation of bitcoin transactions, we found a number of websites, wikis, and blogs that provided differing opinions on the tax treatment of bitcoins, including some that could lead taxpayers to believe that transacting in virtual currencies relieves them of their responsibilities to report and pay taxes.

Uncertainty over how to characterize income. Even if taxpayers are aware that they may have a tax liability, they may be uncertain about the proper tax treatment of virtual transactions, according to tax experts, including academics and tax practitioners with whom we spoke. For example, characterization depends on whether the virtual economy activity or virtual currency unit is to be treated as property, barter, foreign currency, or a financial instrument. According to some experts with whom we spoke, some virtual currency transactions could be considered barter transactions, which may not be an obvious characterization to unsophisticated taxpayers. This characterization could result in noncompliance with requirements for reporting and paying tax on barter income.

Uncertainty over how to calculate basis for gains. Income earned from virtual economy or currency transactions may not be taxable if it is equivalent to that from an occasional online garage sale, meaning occasional income from selling goods for less than their original purchase price. It may be difficult for individuals receiving income from virtual economies to determine their basis for calculating gains. For example, some online games require players to pay a monthly fee in exchange for use of the game and a monthly allowance of virtual currency. If a player then sells a virtual tool gained in the game for real money, calculating the basis for any taxable gain may be difficult for the unsophisticated taxpayer.

Challenges with third-party reporting. 
Third-party information reporting requirements do not apply specifically to transactions using virtual economies or currencies. Virtual economy or currency transactions may be subject to third-party information reporting to the extent that these transactions involve the use of a third-party payment network to mediate the transaction and the taxpayer meets reporting threshold requirements. Because virtual economy and currency transactions are inherently difficult to track, including identifying the true identities of the parties to the transaction, third-party information reporting may be difficult or prohibitively burdensome for some virtual economy and currency issuers to administer.

Evasion. Some taxpayers may use virtual economies and currencies as a way to evade taxes. Because transactions can be difficult to trace and many virtual economies and currencies offer some level of anonymity, taxpayers may use them to hide taxable income.

Because of the limited reliable data available on their size, it is difficult to determine how significant virtual economy and currency markets may be or how much tax revenue is at risk through their usage. Some experts with whom we spoke indicated that there is potential for growth in the use of virtual currencies. Additionally, the European Central Bank recently issued a report on virtual currencies, acknowledging their potential for future growth and interaction with the real economy. If the use of virtual economies and currencies expands, it can be expected that associated revenue at risk of tax noncompliance will grow.

13 26 U.S.C. § 6050W and applicable regulations define third-party payment networks.
14 European Central Bank, Virtual Currency Schemes (Frankfurt am Main, Germany: October 2012).

IRS has assessed the tax compliance risks from virtual economies and virtual currencies used within those economies, and developed a plan to address them in a manner consistent with internal control standards. 
Beginning in 2007, IRS's Electronic Business and Emerging Issues (EBEI) policy group identified and surveyed internal and external information sources, gathered data on the industry, and collected trend information, among other efforts. EBEI determined that virtual economies presented opportunities for income underreporting and developed (1) a potential compliance strategy, including initiating a compliance improvement project to gather research data and analyze compliance trends, and (2) a potential action plan for specific compliance activities. According to IRS compliance officials, IRS ultimately decided not to pursue these actions in light of available IRS resources and other higher priority needs. Also, IRS did not find strong evidence of the potential for tax noncompliance related to virtual economies, such as the number of U.S. taxpayers involved in such activity or the amount of federal tax revenue at risk. However, in November 2009, based on EBEI having determined the need, IRS posted information on its website on the tax consequences of virtual economy transactions. The web page points out that, in general, taxpayers can receive income in the form of money, property, or services from a virtual economy, and that if taxpayers receive more income than they spend, they may be required to report their gains as taxable income. The page further states that IRS has provided guidance on the tax treatment of issues similar to online gaming activities, including bartering, gambling, business, and hobby income, and provides links to IRS publications on those topics. IRS officials who were involved in issuing this guidance reported it cost less to make an online statement pointing taxpayers to existing guidance than it would have cost to develop and publish new guidance specific to virtual economies.

IRS has not assessed the tax compliance risks of open-flow virtual currencies developed and used outside of virtual economies. 
These types of currencies, generally, were introduced after IRS's last review of compliance related to virtual economy transactions. According to IRS compliance officials, IRS would learn about tax compliance issues related to virtual currencies as it would any other tax compliance issue, such as IRS examiners identifying compliance problems during examinations or taxpayers requesting guidance on how to comply with certain tax requirements. To date, these processes have not resulted in IRS identifying virtual currencies used outside of virtual economies as a compliance risk that warrants specific attention. Likewise, IRS has not issued guidance specific to virtual currencies used outside of virtual economies due to competing priorities and resource constraints, and because the use of virtual currencies is a relatively recent development that requires further consideration before guidance can be issued, according to IRS's Office of Chief Counsel and compliance officials. As previously discussed, taxpayers may be unaware that income from transactions using this type of virtual currency may be taxable, or, if they are aware, uncertain how to characterize it. By not issuing guidance, IRS may be missing an opportunity to address these compliance risks and minimize their impact and the potential for noncompliance. Given the uncertain extent of noncompliance related to virtual currency transactions, formal guidance, such as regulations, revenue rulings, or revenue notices, may not be warranted at this time. According to officials from IRS's Office of Chief Counsel, these types of guidance require extensive review within IRS and the Department of the Treasury and, in some cases, public comment, which add to the time and cost of development. However, IRS may be able to develop informal guidance, which, according to Chief Counsel officials, requires less extensive agency review and can be based on other existing guidance. 
As such, IRS can develop informal guidance in a more timely and less costly manner than formal guidance, according to the officials. An example of such informal guidance is the information IRS provides to taxpayers on its website on the tax consequences of virtual economy transactions. Posting such information to its website would be consistent with IRS's strategy for preventing and minimizing taxpayers' noncompliance by helping them understand and meet their tax responsibilities, as outlined in IRS's Taxpayer Assistance Blueprint.

IRS has already taken action in providing taxpayers with information on the tax consequences of virtual economy transactions, a low-cost step to potentially mitigate some of the noncompliance risk associated with such transactions. The uncertainty about the extent virtual currencies are used in taxable transactions and any associated tax noncompliance means that costly compliance activities are not merited at this time. However, the fact that misinformation is circulating and the possibility of growth in the use of virtual currencies outside virtual economies suggest that it would be prudent to take low-cost steps, if available, to mitigate potential compliance risks. The type of information IRS provided about virtual economy transactions is one model.

To mitigate the risk of noncompliance from virtual currencies, the Commissioner of Internal Revenue should find relatively low-cost ways to provide information to taxpayers, such as the web statement IRS developed on virtual economies, on the basic tax reporting requirements for transactions using virtual currencies developed and used outside virtual economies.

We sent a draft of this report to the Acting Commissioner of Internal Revenue for comment. In written comments, reproduced in appendix I, IRS agreed with our recommendation and stated it would provide information to taxpayers on the basic tax reporting requirements for transactions involving virtual currencies by linking to existing relevant guidance. 
IRS noted that it was aware of the tax compliance risks associated with virtual currencies and was taking other steps, such as developing training resources for agents, to address them. IRS also provided technical comments on our draft report, which we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretary of the Treasury, the Acting Commissioner of Internal Revenue, and other interested parties. In addition, the report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Jeff Arkin (Assistant Director), David Dornisch, Lois Hanshaw, Richard Hung, Ronald W. Jones, Donna Miller, Ed Nannenhorn, Danielle N. Novak, and Sabrina Streagle made key contributions to this report.
Recent years have seen the development of virtual economies, such as those within online role-playing games, through which individual participants can own and exchange virtual goods and services. Within some virtual economies, virtual currencies have been created as a medium of exchange for goods and services. Virtual property and currency can be exchanged for real goods, services, and currency, and virtual currencies have been developed outside of virtual economies as alternatives to government-issued currencies, such as dollars. These innovations raise questions about related tax requirements and potential challenges for IRS compliance efforts. This report (1) describes the tax reporting requirements for virtual economies and currencies, (2) identifies the potential tax compliance risks of virtual economies and currencies, and (3) assesses how IRS has addressed the tax compliance risks of virtual economies and currencies. To accomplish these objectives, GAO reviewed tax laws, IRS guidance and program documents, and federal program internal control guidance, and interviewed IRS officials and knowledgeable experts on the topics.

Transactions within virtual economies or using virtual currencies could produce taxable income in various ways, depending on the facts and circumstances of each transaction. For example, transactions within a "closed-flow" virtual currency system do not produce taxable income because a virtual currency can be used only to purchase virtual goods or services. An example of a closed-flow transaction is the purchase of items to use within an online game. In an "open-flow" system, a taxpayer who receives virtual currency as payment for real goods or services may have earned taxable income since the virtual currency can be exchanged for real goods or services or readily exchanged for government-issued currency, such as U.S. dollars. 
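The closed-flow versus open-flow distinction described above amounts to a simple decision rule: if the virtual currency can be exchanged, directly or through a third party, for real goods, services, or government-issued currency, a transaction in it may produce taxable income. A minimal sketch of that rule in Python, using hypothetical scenario data rather than actual IRS criteria, might look like:

```python
# Hedged sketch of the closed-flow vs. open-flow classification in the text.
# The scenario names and boolean attributes are illustrative, not IRS rules.
from dataclasses import dataclass

@dataclass
class VirtualCurrencyUse:
    name: str
    buys_real_goods: bool  # currency can purchase real goods or services
    cashes_out: bool       # currency can be exchanged for dollars, even via a third party

def may_produce_taxable_income(use):
    """Open-flow (or hybrid) uses may produce taxable income; closed-flow uses do not."""
    return use.buys_real_goods or use.cashes_out

scenarios = [
    VirtualCurrencyUse("closed: in-game purchases only", False, False),
    VirtualCurrencyUse("hybrid: third-party cash-out", False, True),
    VirtualCurrencyUse("open: payment for real goods", True, True),
]

for s in scenarios:
    label = "may be taxable" if may_produce_taxable_income(s) else "not taxable"
    print(f"{s.name}: {label}")
```

The three scenarios loosely mirror the report's examples of a closed-flow game purchase, a hybrid third-party exchange, and an open-flow payment for real goods.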
Virtual economies and currencies pose various tax compliance risks, but the extent of actual tax noncompliance is unknown. Some identified risks include taxpayers not being aware that income earned through virtual economies or currencies is taxable or not knowing how to calculate such income. Because of the limited reliable data available on their size, it is difficult to determine how significant virtual economy and currency markets may be or how much tax revenue is at risk through their usage. Some experts with whom we spoke indicated a potential for growth in the use of virtual currencies. Beginning in 2007, IRS assessed the tax compliance risks from virtual economies, and in 2009 posted information on its website on the tax consequences of virtual economy transactions. However, IRS has not provided taxpayers with information specific to virtual currencies because of other priorities, resource constraints, and the need to consider the use of these recently developed currencies, according to IRS officials. By not issuing guidance, IRS may be missing an opportunity to address virtual currency tax compliance risks. Given the uncertain extent of noncompliance with virtual currency transactions, formal guidance, such as regulations, may not be warranted. According to IRS officials, formal guidance requires extensive review, which adds to development time and cost. However, IRS may be able to develop more timely and less costly informal guidance, which, according to IRS officials, requires less extensive review and can be based on other existing guidance. An example is the information IRS provides to taxpayers on its website on the tax consequences of virtual economy transactions. Posting such information would be consistent with IRS's strategy for preventing and minimizing taxpayers' noncompliance by helping them understand and meet their tax responsibilities. 
GAO recommends that IRS find relatively low-cost ways to provide information to taxpayers, such as on its website, on the basic tax reporting requirements for virtual currencies. In commenting on a draft of this report, IRS agreed with our recommendation.
VA operates nursing homes in 132 locations throughout its 21 health care networks. Almost all of these nursing homes are attached or in close proximity to a VA medical center. According to VA policy, VA staff at these facilities determine whether a veteran has a clinical need for nursing home care based on a comprehensive interdisciplinary clinical assessment. The interdisciplinary teams determining clinical need for nursing home care could include personnel such as the nursing home director, a social worker, a nurse, a physical therapist, and a gerontologist. The care provided to veterans at a VA nursing home ranges from short-term postacute care needed to recover from a condition such as a stroke to longer-term care required by veterans who cannot be cared for at home because of severe, chronic physical or mental limitations. VA may also refer patients to receive nursing home care under contract from non-VA nursing homes located in the community—referred to as community nursing homes. In fiscal year 2003, VA purchased care from community nursing homes in one of two ways. VA contracted with most nursing homes through the local VA medical center. In addition, VA contracted with some community nursing homes under its Regional Community Nursing Home initiative, in which nursing home chains in single or multiple states contract directly with VA headquarters for services at their nursing homes. In fiscal year 2003, VA contracted with 1,723 nursing homes through its medical centers and with 508 more nursing homes under its Regional Community Nursing Home initiative. Veterans may also choose to seek care in state veterans’ nursing homes. In fiscal year 2003, 109 state veterans’ nursing homes located in 44 states and Puerto Rico received VA payment to provide care. VA may refer patients to these nursing homes for care, but does not control the admission process. 
Veterans are admitted based on eligibility criteria established by the states. For state veterans’ nursing homes to participate in VA’s program, however, VA requires that at least 75 percent of the residents be veterans in most cases. State veterans’ nursing homes may also provide nursing home care to certain nonveterans, such as spouses of residents who are veterans. VA is authorized to pay for about two-thirds of the costs of construction of state veterans’ nursing homes and pays about one-third of the cost per day of providing care to veterans in these homes. In fiscal year 2003, VA paid $56.24 per day for veterans in these state veterans’ nursing homes and awarded $174 million in grants to 16 states for renovations of existing facilities or construction of new state veterans’ homes. Veterans can also receive nursing home care financed by sources other than VA, including Medicaid, Medicare, private health or long-term care insurance, or their own funds. States design and administer Medicaid programs that include coverage for long-term nursing home care to assist with daily activities such as eating and bathing. Medicare primarily covers acute care health costs and therefore limits its nursing home coverage to short stays requiring skilled nursing care following hospitalization. Aside from patients financing their own care, state Medicaid programs are the principal funders of nursing home care. Private health insurance pays for about 11 percent of nursing home and home health care expenditures. VA nursing homes accounted for almost three-quarters of VA’s overall nursing home expenditures, or about $1.7 billion, in fiscal year 2003. Care in state veterans’ nursing homes accounted for 15 percent of nursing home expenditures, or about $352 million. Care in community nursing homes accounted for the lowest percentage of overall nursing home expenditures at 12 percent, or about $272 million. 
Overall, VA spent approximately $2.3 billion to provide or pay for nursing home care in VA nursing homes, community nursing homes, and state veterans’ nursing homes in fiscal year 2003. In contrast to fiscal year 1998, in fiscal year 2003 the percentage of expenditures from community nursing homes declined, whereas the percentage of expenditures for care in VA nursing homes and state veterans’ nursing homes increased. (See fig. 1.) For example, 70 percent of nursing home expenditures were accounted for by VA nursing homes in fiscal year 1998 as compared to 73 percent in 2003. Moreover, the percentage of community nursing home expenditures was 17 percent in 1998 as compared to 12 percent in 2003. During the same years, VA’s overall nursing home expenditures increased by about a third, growing from about $1.7 billion to approximately $2.3 billion. The percentage of nursing home expenditures for care in each nursing home setting varied widely by network in fiscal year 2003. (See fig. 2.) All networks spent the largest percentage of their resources on VA nursing homes. The percentage of expenditures for VA nursing homes ranged from a low of 47 percent in Network 19 (Denver) to a high of 88 percent in Network 6 (Durham). Further, the percentage of overall nursing home expenditures accounted for by community and state veterans’ nursing homes also varied widely across the networks. For example, the percentage of expenditures for community nursing homes ranged from a low of 2 percent in Network 3 (Bronx) to a high of 28 percent in Network 20 (Portland). A comparison of how networks’ percentage of expenditures on each nursing home setting changed in fiscal year 2003 as compared to fiscal year 1998 showed that networks’ changes were consistent with the VA- wide changes. In fiscal year 2003, the percentage of expenditures for VA nursing homes increased in 15 of the 21 health care networks as compared to fiscal year 1998. 
Similar to the overall trend, the percentage of expenditures for state veterans’ nursing homes increased in 17 of 21 networks, whereas the percentage of expenditures for community nursing homes decreased in 17 of 21 networks. The largest shift in the percentage of expenditures for the three settings occurred in Network 19 (Denver). In this network, the percentage of expenditures for VA nursing homes declined from 75 to 47 percent because of a nursing home closure during this period. For more detailed information on the percent change in nursing home expenditures for each setting and network in fiscal years 1998 and 2003, see appendix II. State veterans’ nursing homes accounted for half of VA’s overall nursing home workload—measured by average daily census—in fiscal year 2003, even though they accounted for only 15 percent of expenditures. In large part this is because VA pays a per-diem rate for care in state veterans’ nursing homes that, on average, accounts for about one-third of the cost to provide veterans nursing home care in this setting. The remaining payments made to state veterans’ nursing homes come from a number of other sources including Medicaid, Medicare, private health insurance, and patients self-financing their care. VA nursing homes provided the next largest percentage of nursing home workload, 37 percent in fiscal year 2003. Community nursing homes provided 13 percent of overall nursing home workload. Overall, VA provided or paid for 33,214 patients to receive nursing home care daily in VA nursing homes, community nursing homes, and state veterans’ nursing homes in fiscal year 2003. Since fiscal year 1998, VA has increased its use of state veterans’ nursing homes and decreased the use of VA nursing homes and community nursing homes. Overall, workload in VA’s nursing home program was 33,214 in fiscal year 2003, about 1 percent below its fiscal year 1998 workload. 
The percentage of nursing home workload provided in state veterans’ nursing homes increased from 43 to 50 percent. In contrast, the percentage of workload provided in VA nursing homes and community nursing homes declined. (See fig. 3.) The increase in the percentage of nursing home workload provided in state veterans’ nursing homes resulted from a number of factors. States, with the assistance of construction grants from VA, built 17 new state veterans’ nursing homes, increasing the number of beds available during this period. The increasing percentage of state veterans’ nursing home workload also occurred as a result of declines in workload in VA nursing homes and community nursing homes due to changes in VA’s use of these settings. In VA nursing homes, VA officials attributed some of the decreases in nursing home workload to an increased emphasis on postacute patients with short lengths of stay. Moreover, VA officials told us that they are providing contract community nursing home care to fewer veterans and paying for shorter contracts than in the past. The number of patients VA served in this setting declined from 28,893 to 14,032 during this period. Network officials also told us that contracts for community nursing home care are often now 30 days or less and are used primarily to transition veterans to nursing home care, which is paid for by other payers such as Medicaid. Although state veterans’ nursing homes predominate overall, networks vary widely in the percentage of workload met in different nursing home settings. For example, networks varied in their use of state veterans’ nursing homes ranging from a low of 22 percent in Network 8 (Bay Pines) to a high of 71 percent in Network 15 (Kansas City). (See fig. 4.) This variation is due, in part, to the available bed capacity of state veterans’ nursing homes in these networks. In 2003, Network 15 (Kansas City) had 1,509 state veterans’ nursing home beds compared to 420 beds in Network 8 (Bay Pines). 
However, wide network variation also existed in the percentage of networks’ workloads accounted for by VA nursing homes and community nursing homes. Changes in networks’ delivery of nursing home care among the three nursing home settings were consistent with VA-wide changes between fiscal years 1998 and 2003. The percentage of workload provided in state veterans’ nursing homes increased in 19 of VA’s 21 health care networks. Similar to the overall trend, the percentage of workload met in community nursing homes declined in 17 networks, and the percentage met in VA nursing homes declined in 13 networks. The largest shift in the percentage of workload for the three settings occurred in Network 17 (Dallas). In this network, the percentage of workload for state veterans’ nursing homes increased from 0 to 30 percent because Texas opened four state veterans’ nursing homes during this period. For more detailed information on the percent change in nursing home workload for each setting and network in fiscal years 1998 and 2003, see appendix III. About one-third of the care VA provided in VA nursing homes was long stay in fiscal year 2003. Long-stay nursing home care (90 days or more) includes services needed by a person who cannot be cared for at home because of a physical or mental disability. For example, veterans needing long-stay care may have difficulty performing some activities of daily living without assistance, such as bathing, dressing, toileting, eating, and moving from one location to another. They may have mental impairments, such as Alzheimer’s disease or dementia, that necessitate supervision to avoid harm to themselves or others or require assistance with tasks such as taking medications. The remaining two-thirds of VA nursing home care in this setting was short-stay care (less than 90 days). VA’s use of short-stay care includes nursing home services such as postacute care required for recuperation from a stroke or hip replacement. 
VA officials also told us that this care could include other services, such as complex medical treatments like chemotherapy, treatment of wounds such as pressure ulcers, and end-of-life care. VA’s use of short-stay care is similar to services covered by Medicare, which provides short-term coverage, whereas VA’s use of long-stay care is similar to services covered by Medicaid, which provides long-term coverage for nursing home care. Since fiscal year 1998, VA has decreased its use of long-stay care and increased its use of short-stay nursing home care. Specifically, the percentage of nursing home care that was long stay declined from 43 to 34 percent between fiscal years 1998 and 2003. (See fig. 5.) In contrast, the percentage of short stays provided in this setting increased from 57 to 66 percent during the same period. This shift toward short-stay care is consistent with VA’s policy on nursing home eligibility, which sets a higher priority on serving veterans who require short-stay postacute care. Networks vary widely, however, in the percentage of VA nursing home care that is long stay. The percentage of long stays in VA nursing homes ranged from a low of 17 percent in Network 20 (Portland) to a high of 55 percent in Network 7 (Atlanta). (See fig. 6.) Network 20 (Portland) officials told us that the focus of their VA nursing homes has changed from long-stay care to short-stay transitional and rehabilitative care and that as a result they are serving more veterans with shorter lengths of stay. By contrast, Network 7 (Atlanta) officials told us that several of their nursing homes provide services consistent with long-stay nursing home care, such as assisting veterans who have difficulty performing some activities of daily living, such as eating independently. 
VA lacks information on the amount of long- and short-stay nursing home care veterans receive in community and state veterans’ nursing homes, preventing it from strategically planning how best to use these nursing home settings at the national and network levels to enhance access to nursing home services. VA officials told us that while some of these data may be available at certain facilities because the facilities collect them for their own purposes, VA does not require state veterans’ nursing homes and community nursing homes to provide billing or other information that identifies individual veterans, from which length of stay could be calculated. VA collects information on the payments made to community nursing homes and state veterans’ nursing homes, but does not collect the days of care a veteran receives or other individual information. VA officials told us that they receive and pay individual claims for some veterans in community nursing homes, but that in other cases VA pays for care provided by community nursing homes based on invoices, which aggregate information on the number of patients being treated by a nursing home. VA officials told us that they are in the initial planning stages of redesigning a payment system to collect information by individual veteran in community nursing homes, but that the implementation of such a system could take several years. Once completed, VA officials expect the new system to collect and report data on the total number of days of care individual veterans receive in community nursing homes. VA does not currently have plans to collect such data for state veterans’ nursing homes, but is exploring doing so. In fiscal year 2003, about 26 percent of veterans who received care in VA nursing homes were required to be served under the Millennium Act or VA’s policy on nursing home eligibility. 
Of these veterans, about 21 percent were treated under the Millennium Act because they have a service-connected disability rating of 70 percent or greater. The act also required that VA continue to treat veterans who had been receiving nursing home care in VA facilities at the time the law was enacted; about 4 percent of the veterans receiving care in fiscal year 2003 fell into this category. Further, 1 percent of veterans in VA nursing homes were required to be served based solely on VA’s policy on nursing home eligibility, which extended required coverage to veterans with a 60 percent service-connected disability rating who also met other criteria. However, the vast majority of veterans—about 74 percent in fiscal year 2003—received VA nursing home care as a discretionary benefit based on available budgetary resources. VA’s policy on nursing home eligibility directs that for these veterans VA nursing homes admit, as a priority, patients who meet certain clinical and programmatic criteria: patients requiring nursing home care after a hospital episode, patients who VA determines cannot be adequately cared for in community nursing homes or through home- and community-based care, and patients who can be cared for more efficiently in VA nursing homes. The percentage of veterans receiving VA nursing home care as required by the Millennium Act or VA’s policy on nursing home eligibility varied widely across networks in fiscal year 2003. The percentage of veterans receiving this care ranged from a low of 20 percent in Network 15 (Kansas City) and Network 11 (Ann Arbor) to a high of 39 percent in Network 1 (Boston). (See fig. 7.) However, most networks were grouped closer to the lower end of the range. Fifteen of VA’s 21 health care networks had percentages of 26 percent or less. 
According to VA officials, the percentage of veterans required to be treated may be lower in some networks because those networks may choose to pay for these veterans to receive care in community nursing homes. In contrast, some networks may prefer to treat these patients in VA nursing homes. For example, officials from Network 3 (Bronx), the network with the second highest percentage at 37 percent, told us that they prefer to treat these veterans in VA nursing homes because they have sufficient bed capacity. VA lacks comparable information for community nursing homes and state veterans’ nursing homes on the percentage of veterans required to be served under the Millennium Act or VA’s policy on nursing home eligibility, even though these settings combined accounted for 63 percent of VA’s overall nursing home workload. The lack of such data prevents VA from strategically planning how best to use these nursing home settings at the national and network levels to enhance access to nursing home services. VA officials told us that while some of these data on eligibility status may be available at certain facilities because the facilities collect them for their own purposes, VA does not require that this information be collected and reported to headquarters. VA does not collect information by individual veteran on all payments made to community nursing homes and state veterans’ nursing homes. As a result, VA cannot match individual veterans’ data from its payment system with the data it currently collects on eligibility to determine the eligibility status of all veterans receiving contract care in community nursing homes and state veterans’ nursing homes. VA officials told us this type of analysis could be done if a new information system for collecting contract payments is designed and implemented to collect and report such information. 
Gaps in nursing home data impede VA’s ability to monitor and strategically plan for the nursing home care VA pays for nationally and at the network level. The workload in state veterans’ nursing homes and community nursing homes has grown to 63 percent of VA’s overall nursing home workload. However, VA does not have data on length of stay and the eligibility status of veterans receiving care in these settings as it has for VA nursing homes. As a result, VA cannot strategically plan how best to serve veterans it is required to serve, including those who have a 70 percent or greater service-connected disability rating, or other veterans receiving care on a discretionary basis; nor can VA strategically plan how best to use the nursing home settings to provide long- and short-stay nursing home care nationally or in individual networks. Equally important, the lack of such data and assessments hampers congressional oversight of strategic options available to VA in its nursing home care planning and its progress in meeting veterans’ needs. To help ensure that VA can provide adequate program monitoring and planning for nursing home care and to improve the completeness of data needed for congressional oversight, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take two actions: For community nursing homes and state veterans’ nursing homes, collect and report data on the number of veterans who have long and short stays, comparable to data VA currently collects on VA nursing homes. For community nursing homes and state veterans’ nursing homes, collect and report data on the number of veterans in these homes that VA is required to serve based on the requirements of the Millennium Act or VA’s policy on nursing home eligibility, comparable to data VA currently collects on VA nursing homes. We provided a draft of this report to VA for comment. In commenting on the draft, VA stated that it concurred in principle with our recommendations. 
VA stated that it will continue its efforts to reduce data gaps in the community nursing home and state veterans home programs, but VA did not indicate specific plans to collect data on length of stay and eligibility for its long-term care planning process. Moreover, VA stated that data other than eligibility and length of stay, such as age and disability, are most crucial for its long-term care strategic planning and program oversight. We disagree with VA’s position that eligibility and length-of-stay data are not considered most crucial and are concerned about VA’s lack of specificity regarding its intent to utilize these data. While factors such as age and disability are generally recognized as important in projecting need for nursing home care, VA needs data on veterans’ eligibility status and length of stay to determine what portion of the overall veteran need for nursing home care VA will meet nationally and in individual communities. Because VA is required to serve veterans that meet the requirements of the Millennium Act or VA policy, VA needs to project the number of these veterans seeking nursing home care from VA and determine the number of other veterans it will also serve on a discretionary basis after meeting this need. To strategically plan and provide the type of service needed in the future, VA must also project what proportion of veterans with different eligibility statuses will need short-stay or long-stay nursing home care. VA needs to use this information to determine if the nursing home care it currently pays for in VA nursing homes, contract community nursing homes, and state veterans’ nursing homes is appropriately located and provides the type of nursing home care needed by veterans. VA also noted that it is narrowing information gaps on both veterans’ eligibility status and length of stay for veterans in its community and state veterans’ nursing home programs by using data extracted from various sources to estimate these numbers. 
However, VA did not provide these data for our review. Given that the combined workload in these settings accounted for 63 percent of VA’s overall nursing home workload in fiscal year 2003, we believe that complete information on veterans’ eligibility status and length of stay for veterans in these settings is crucial for both strategic planning and program oversight. VA noted that one of our statements—that about one-fourth of veterans receiving nursing home care are entitled to such care under the requirements of the Millennium Act—could be misinterpreted to imply that some of these “mandatory” veterans are being displaced by veterans receiving discretionary care. We did not imply this relationship, nor did our work examine this particular issue. We are sending copies of this report to the Secretary of Veterans Affairs and appropriate congressional committees. The report is available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-7101. Another contact and key contributors are listed in appendix V. We reviewed the Department of Veterans Affairs’ (VA) nursing home program for fiscal year 2003 for VA nursing homes, community nursing homes, and state veterans’ nursing homes to determine (1) VA spending to provide or pay for nursing home care, (2) VA workload provided or paid for, (3) the percentage of nursing home care that is long and short stay, and (4) the percentage of veterans receiving care that are required to be served by the Millennium Act or VA policy. To place this information in context, you asked us to supplement our findings with information for fiscal year 1998. To address the first two objectives, we obtained data on nursing home workload and expenditures at the network level for fiscal years 1998 and 2003 from several VA headquarters offices. 
VA’s Geriatrics and Extended Care Strategic Healthcare Group provided us workload data for VA nursing homes and community nursing homes, as reported in VA’s Automated Management Information System. This group also gave us workload data from monthly reports completed by state veterans’ nursing homes that were maintained at the VA medical centers. These data are used by the Geriatrics and Extended Care office to provide per diem grants to state veterans’ homes. The Office of the Chief Financial Officer for the Veterans Health Administration (VHA) provided us expenditure data from VA’s Cost Distribution Report for the nursing home care provided or paid for by VA. To do our analysis, we used average daily census as a measure of workload. Average daily census is the total number of days of nursing home care provided in a year divided by the number of days in the year. For VA nursing home expenditures, we included the direct costs used to provide nursing home care plus other facility costs associated with operating the nursing home. VA nursing home expenditures excluded depreciation as well as VA headquarters and network administrative costs. To calculate community nursing home expenditures, we included all contract payments made to community nursing homes plus additional facility expenditures required to directly support the program at the local VA medical center. To calculate state veterans’ home expenditures, we included per diem payments made to state veterans’ nursing homes plus additional facility expenditures required to directly support the program at local VA medical centers. Expenditures for state veterans’ homes did not include construction grants. To determine the percentage of long and short stays in VA nursing homes in fiscal years 1998 and 2003, we obtained data on length of stay from VHA’s Extended Care Patient Treatment Files. 
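As a hypothetical illustration of the workload measure described above, average daily census is simply total patient days divided by the number of days in the year (the figures below are illustrative, not VA data):

```python
def average_daily_census(total_patient_days: int, days_in_year: int = 365) -> float:
    # Average daily census: total days of nursing home care provided in a
    # year divided by the number of days in the year.
    return total_patient_days / days_in_year

# Illustrative only: about 12.1 million patient days across all settings in
# a 365-day year corresponds to an average daily census of 33,214.
print(average_daily_census(12_123_110))  # 33214.0
```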
The Patient Treatment Files include nursing home discharges for veterans who were discharged from a VA nursing home during a fiscal year, and current resident files for veterans who were not discharged by the end of a fiscal year. Using length of nursing home stay, we classified stays of 90 days or more as long stays and stays of less than 90 days as short stays. Length of stay is calculated as the number of days in a nursing home between the admission and discharge days and was given a minimum value of 1. The number of days absent from the nursing home, such as for a hospital stay, was subtracted from the length of stay. Because current residents were not discharged within the fiscal year, we calculated their lengths of stay by looking ahead into the next fiscal year. That is, we matched current residents with discharges in the next fiscal year to determine whether their stays were short or long. A current resident who was admitted on the last day of the fiscal year, for example, but was discharged after 90 days into the next fiscal year, was classified as having a long stay. If the same resident was discharged within 90 days of the next fiscal year, then the stay was classified as short. We classified nursing home stays as long for current residents who were not discharged in the next fiscal year. Our analysis for long- and short-stay care was based on nursing home stays rather than individual veterans because some veterans had multiple nursing home stays. To determine the percentage of veterans in VA nursing homes receiving care that are required to be served by the Millennium Act or VA policy, we obtained individual data on eligibility for veterans enrolled in VA’s health care system. VHA’s Office of Policy and Planning provided us these data in an enrollment file for fiscal year 2003. 
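The stay-classification rules described above can be sketched as follows; this is a minimal illustration of the method as we describe it, not VA's actual code:

```python
from datetime import date

LONG_STAY_THRESHOLD = 90  # stays of 90 days or more are classified as long

def length_of_stay(admitted: date, discharged: date, days_absent: int = 0) -> int:
    # Days between admission and discharge, minus days absent from the
    # nursing home (e.g., for a hospital stay), with a minimum value of 1.
    return max(1, (discharged - admitted).days - days_absent)

def classify_stay(admitted: date, discharged: date, days_absent: int = 0) -> str:
    los = length_of_stay(admitted, discharged, days_absent)
    return "long" if los >= LONG_STAY_THRESHOLD else "short"

# A 120-day stay interrupted by a 10-day hospital absence nets 110 days: long.
print(classify_stay(date(2003, 1, 1), date(2003, 5, 1), days_absent=10))  # long
```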
We merged these data with the discharge and current resident files from VHA’s Extended Care Patient Treatment Files in order to calculate the percentage of veterans receiving nursing home care that were required to be served in fiscal year 2003. Our analyses on eligibility are based on individual veterans rather than nursing home stays; because some veterans had multiple nursing home stays in a given year, we retained each veteran’s first nursing home stay and eliminated other stays in that year. We used a variable from VA’s enrollment file that measures service-connected disability rating. In addition, we used variables from the file that measure whether the veteran is unemployable and whether the veteran is considered permanently and totally disabled, based on disabilities not related to military service. We included the following categories of veterans in our calculation to determine the percentage of veterans receiving nursing home care required to be served by the Millennium Act or VA’s policy on nursing home eligibility: (1) veterans who had a service-connected disability rating of 70 percent or more; (2) veterans who were admitted to a VA nursing home on or before November 30, 1999; and (3) veterans who had a service-connected disability rating of 60 percent and who were also unemployable or permanently and totally disabled. We did not include in our estimate veterans VA is required to serve who need nursing home care because of a service-connected disability but who do not have a service-connected disability rating of 70 percent or more. VA did not have data on these veterans, but a VA official estimated that this group is very small based on conversations with facility staff. To supplement our knowledge of the type of nursing home care provided in VA networks, we visited two networks and five nursing homes. In Network 5 (Baltimore) we visited Washington, D.C.; Martinsburg, West Virginia; and Baltimore, Maryland. In Network 23 (Minneapolis) we visited St. 
Cloud, Minnesota; and Minneapolis, Minnesota. We selected these two networks because they were in different geographic regions and had variation in the types of care offered in their facilities. Within each network, we chose one nursing home that provided more long-stay nursing home care and another that provided more short-stay care. We assessed the reliability of workload and expenditure data in VA’s nursing home program, VHA’s enrollment data file, and VHA’s Extended Care Patient Treatment Files in several ways. First, we performed tests of data elements. For example, we examined the range of values for length of stay to determine whether these data were complete and reasonable. Second, we reviewed existing information about the data elements. For example, we obtained and reviewed information from VHA on data elements we used from VHA’s Extended Care Patient Treatment Files. Third, we interviewed agency officials knowledgeable about the data in our analyses and knowledgeable about VA’s nursing home program. For example, we sent network-specific nursing home workload and expenditure data provided to us by VA headquarters to each of VA’s 21 health care networks through electronic mail in December 2003. Network officials reported whether these data were accurate and indicated where they found discrepancies. Through discussions with VA headquarters and network officials we resolved the discrepancies. We determined that the data we used in our analyses were sufficiently reliable for the purposes of this report. We performed our review from January 2003 to November 2004 in accordance with generally accepted government auditing standards. In addition to the contact named above, Cheryl A. Brand, Pamela A. Dooley, and Thomas A. Walke made key contributions to this report. 
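The three eligibility categories used in the analysis above can be sketched as a simple predicate; the function and field names below are hypothetical, not taken from VA's enrollment file:

```python
from datetime import date

MILLENNIUM_ACT_CUTOFF = date(1999, 11, 30)  # admitted on or before this date

def required_to_serve(sc_rating: int, admitted: date,
                      unemployable: bool, permanently_totally_disabled: bool) -> bool:
    # Category 1: service-connected disability rating of 70 percent or more.
    if sc_rating >= 70:
        return True
    # Category 2: admitted to a VA nursing home on or before November 30, 1999.
    if admitted <= MILLENNIUM_ACT_CUTOFF:
        return True
    # Category 3: a 60 percent rating plus unemployable or permanently and
    # totally disabled status.
    if sc_rating == 60 and (unemployable or permanently_totally_disabled):
        return True
    return False

# A veteran rated 60 percent and unemployable, admitted in 2003, is covered.
print(required_to_serve(60, date(2003, 3, 1), True, False))  # True
```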
VA Long-Term Care: More Accurate Measure of Home-Based Primary Care Workload Is Needed. GAO-04-913. Washington, D.C.: September 8, 2004. VA Long-Term Care: Changes in Service Delivery Raise Important Questions. GAO-04-425T. Washington, D.C.: January 28, 2004. VA Long-Term Care: Veterans’ Access to Noninstitutional Care Is Limited by Service Gaps and Facility Restrictions. GAO-03-815T. Washington, D.C.: May 22, 2003. VA Long-Term Care: Service Gaps and Facility Restrictions Limit Veterans’ Access to Noninstitutional Care. GAO-03-487. Washington, D.C.: May 9, 2003. Department of Veterans Affairs: Key Management Challenges in Health and Disability Programs. GAO-03-756T. Washington, D.C.: May 8, 2003. VA Long-Term Care: The Availability of Noninstitutional Services Is Uneven. GAO-02-652T. Washington, D.C.: April 25, 2002. VA Long-Term Care: Implementation of Certain Millennium Act Provisions Is Incomplete, and Availability of Noninstitutional Services Is Uneven. GAO-02-510R. Washington, D.C.: March 29, 2002. VA Long-Term Care: Oversight of Community Nursing Homes Needs Strengthening. GAO-01-768. Washington, D.C.: July 27, 2001.
The Department of Veterans Affairs (VA) operates a $2.3 billion nursing home program that provides or pays for veterans' care in three settings: VA nursing homes, community nursing homes, and state veterans' nursing homes. The Veterans Millennium Health Care and Benefits Act (Millennium Act) of 1999 and VA policy require that VA provide nursing home care to veterans who meet certain eligibility criteria. Congress has expressed a need for additional data to conduct oversight of VA's nursing home program. Specifically, for all VA nursing home settings in fiscal year 2003, GAO was asked to report on (1) VA spending to provide or pay for nursing home care, (2) VA workload provided or paid for, (3) the percentage of nursing home care that is long and short stay, and (4) the percentage of veterans receiving care required by the Millennium Act or VA policy. In fiscal year 2003, VA spent 73 percent of its nursing home resources on VA nursing homes--almost $1.7 billion of about $2.3 billion--and the remaining 27 percent on community and state veterans' nursing homes. Half of VA's average daily nursing home workload of 33,214 in fiscal year 2003 was for state veterans' nursing homes, even though this setting accounted for 15 percent of VA's overall nursing home expenditures. In large part, this is because VA pays about one-third of the cost of care in state veterans' nursing homes. Community nursing homes and VA nursing homes accounted for 13 and 37 percent of the workload, respectively. About one-third of nursing home care in VA nursing homes in fiscal year 2003 was long-stay care (90 days or more). Long-stay services include those needed by veterans who cannot be cared for at home because of severe, chronic physical or mental impairments such as the inability to independently eat or the need for supervision because of dementia. The other two-thirds was short-stay care (less than 90 days), which includes services such as postacute care needed for recuperation from a stroke.
VA lacks similar data for community and state veterans' nursing homes. About one-fourth of veterans who received care in VA nursing homes in fiscal year 2003 were served because the Millennium Act or VA policy requires that VA provide or pay for nursing home care for veterans who meet certain eligibility criteria. All other veterans received care at VA's discretion. VA lacks data on comparable eligibility status for community and state veterans' nursing homes even though these settings combined accounted for 63 percent of VA's overall workload. Gaps in data on length of stay and eligibility in these two settings impede program oversight.
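The eligibility screen described in the scope and methodology discussion above can be sketched in Python. This is a hypothetical illustration only: the record field names and helper functions are assumptions made for the sketch, not VA's actual enrollment-file layout; the three categories and the November 30, 1999, cutoff come from the text.

```python
from datetime import date

# Admission cutoff cited in the text (Millennium Act / VA policy)
CUTOFF = date(1999, 11, 30)

def required_to_serve(veteran: dict) -> bool:
    """True if the veteran falls into one of the three categories the
    report counts as required to be served (hypothetical field names)."""
    rating = veteran["service_connected_rating"]  # 0-100 disability rating
    # (1) service-connected disability rating of 70 percent or more
    if rating >= 70:
        return True
    # (2) admitted to a VA nursing home on or before November 30, 1999
    if veteran["first_admission"] <= CUTOFF:
        return True
    # (3) rating of 60 percent and unemployable or permanently and
    #     totally disabled for reasons unrelated to military service
    if rating == 60 and (veteran["unemployable"] or veteran["permanent_total"]):
        return True
    return False

def first_stays(stays):
    """Keep each veteran's first stay in the year, as the report does
    when counting individual veterans rather than stays."""
    first = {}
    for stay in sorted(stays, key=lambda s: s["admission"]):
        first.setdefault(stay["veteran_id"], stay)
    return list(first.values())
```

A veteran would then count toward the Millennium Act/VA policy percentage if `required_to_serve` returns True for his or her retained first stay.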
The public faces a risk that critical services could be severely disrupted by the Year 2000 computing crisis. Financial transactions could be delayed, airline flights grounded, and national defense affected. The many interdependencies that exist among governments and within key economic sectors could cause a single failure to have adverse repercussions. While managers in the government and the private sector are taking many actions to mitigate these risks, a significant amount of work remains, and time frames are unrelenting. The federal government is extremely vulnerable to the Year 2000 issue due to its widespread dependence on computer systems to process financial transactions, deliver vital public services, and carry out its operations. This challenge is made more difficult by the age and poor documentation of the government’s existing systems and its lackluster track record in modernizing systems to deliver expected improvements and meet promised deadlines. Unless this issue is successfully addressed, serious consequences could ensue. For example: Unless the Federal Aviation Administration (FAA) takes much more decisive action, there could be grounded or delayed flights, degraded safety, customer inconvenience, and increased airline costs. Payments to veterans with service-connected disabilities could be severely delayed if the system that issues them either halts or produces checks so erroneous that it must be shut down and checks processed manually. The military services could find it extremely difficult to efficiently and effectively equip and sustain their forces around the world. Federal systems used to track student loans could produce erroneous information on loan status, such as indicating that a paid loan was in default. Internal Revenue Service tax systems could be unable to process returns, thereby jeopardizing revenue collection and delaying refunds.
The Social Security Administration process that provides benefits to disabled persons could be disrupted if interfaces with state systems fail. The year 2000 could also cause problems for the many facilities used by the federal government that were built or renovated within the last 20 years and contain embedded computer systems to control, monitor, or assist in operations. For example, heating and air conditioning units could stop functioning properly and card-entry security systems could cease to operate. Year 2000-related problems have already been identified. For example, an automated Defense Logistics Agency system erroneously deactivated 90,000 inventoried items as the result of an incorrect date calculation. According to the agency, if the problem had not been corrected (which took 400 work hours), the impact would have seriously hampered its mission to deliver materiel in a timely manner. In another case, the Department of Defense’s Global Command Control System, which is used to generate a common operating picture of the battlefield for planning, executing, and managing military operations, failed testing when the date was rolled over to the year 2000. Our reviews of federal agency Year 2000 programs have found uneven progress. Some agencies are significantly behind schedule and are at high risk that they will not fix their systems in time. Other agencies have made progress, although risks remain and a great deal more work is needed. Our reports contained numerous recommendations, which the agencies have almost universally agreed to implement. Among them were the need to complete inventories of systems, document data exchange agreements, and develop contingency plans. Audit offices of some states also have identified significant Year 2000 concerns.
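The failures above trace to the same mechanism: years stored and compared as two digits. A minimal sketch of how that breaks interval arithmetic at the rollover (hypothetical code, not drawn from any system named in this statement):

```python
def years_elapsed_2digit(start_yy: int, end_yy: int) -> int:
    """Naive subtraction on two-digit years, as many legacy systems
    stored them (98 for 1998, 0 for 2000)."""
    return end_yy - start_yy

def years_elapsed_4digit(start: int, end: int) -> int:
    """The repair: carry the full four-digit year."""
    return end - start

# 1998 to 1999 looks fine with two digits...
assert years_elapsed_2digit(98, 99) == 1
# ...but 1998 to 2000 goes negative, so a record can look expired,
# 98 years overdue, or eligible for automatic deactivation.
assert years_elapsed_2digit(98, 0) == -98
assert years_elapsed_4digit(1998, 2000) == 2
```

The same inversion is what makes a system fail to recognize and process dates correctly once any stored or computed date crosses January 1, 2000.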
Risks include the potential that systems supporting benefits programs, motor vehicle records, and criminal records (i.e., prisoner release or parole eligibility determinations) may be adversely affected. These audit offices have made recommendations including the need for increased oversight, Year 2000 project plans, contingency plans, and personnel recruitment and retention strategies. Data exchanges between the federal government and the states are also critical to ensuring that billions of dollars in benefits payments are made to millions of recipients. Consequently, in October 1997 the Commonwealth of Pennsylvania hosted the first State/Federal Chief Information Officer (CIO) Summit. Participants agreed to (1) use a contiguous 4-digit-year standard for data exchanges, (2) establish a national policy group, and (3) create a joint state/federal technical group. America’s infrastructures are a complex array of public and private enterprises with many interdependencies at all levels. Key economic sectors that could be seriously impacted if their systems are not Year 2000 compliant are information and telecommunications; banking and finance; health, safety, and emergency services; transportation; utilities; and manufacturing and small business. The information and telecommunications infrastructure is especially important because it (1) enables the electronic transfer of funds, (2) is essential to the service economy, manufacturing, and efficient delivery of raw materials and finished goods, and (3) is basic to responsive emergency services. Illustrations of Year 2000 risks follow. According to the Basle Committee on Banking Supervision—an international committee of banking supervisory authorities—failure to address the Year 2000 issue would cause banking institutions to experience operational problems or even bankruptcy. Moreover, the Chair of the Federal Financial Institutions Examination Council, a U.S.
interagency council composed of federal bank, credit union, and thrift institution regulators, stated that banking is one of America’s most information-intensive businesses and that any malfunctions caused by the century date change could affect a bank’s ability to meet its obligations. He also stated that of equal concern are problems that customers may experience that could prevent them from meeting their obligations to banks and that these problems, if not addressed, could have repercussions throughout the nation’s economy. According to the International Organization of Securities Commissions, the year 2000 presents a serious challenge to the world’s financial markets. Because they are highly interconnected, a disruption in one segment can spread quickly to others. FAA recently met with representatives of airlines, aircraft manufacturers, airports, fuel suppliers, telecommunications providers, and industry associations to discuss the Year 2000 issue. Participants raised the concern that their own Year 2000 compliance would be irrelevant if FAA were not compliant because of the many system interdependencies. Representatives went on to say that unless FAA were substantially Year 2000 compliant on January 1, 2000, flights would not get off the ground and that extended delays would be an economic disaster. Another risk associated with the transportation sector was described by the Federal Highway Administration, which stated that highway safety could be severely compromised because of potential Year 2000 problems in operational transportation systems. For example, date-dependent signal timing patterns could be incorrectly implemented at highway intersections if traffic signal systems run by state and local governments do not process four-digit years correctly. One risk associated with the utility sector is the potential loss of electrical power. 
For example, Nuclear Regulatory Commission staff believe that safety-related safe shutdown systems will function but that a worst-case scenario could occur in which Year 2000 failures in several nonsafety-related systems could cause a plant to shut down, resulting in the loss of off-site power and complications in tracking post-shutdown plant status and recovery. With respect to the health, safety, and emergency services sector, according to the Department of Health and Human Services, the Year 2000 issue holds serious implications for the nation’s health care providers and researchers. Medical devices and scientific laboratory equipment may experience problems beginning January 1, 2000, if the computer systems, software applications, or embedded chips used in these devices contain two-digit fields for year representation. In addition, according to the Gartner Group, health care is substantially behind other industries in Year 2000 compliance, and it predicts that at least 10 percent of mission-critical systems in this industry will fail because of noncompliance. One of the largest, and largely unknown, risks relates to the global nature of the problem. With the advent of electronic communication and international commerce, the United States and the rest of the world have become critically dependent on computers. However, there are indications of Year 2000 readiness problems in the international arena. In September 1997, the Gartner Group, a private research firm acknowledged for its expertise in Year 2000 issues, surveyed 2,400 companies in 17 countries and concluded that “Thirty percent of all companies have not started dealing with the year 2000 problem.” Based on its survey, the Gartner Group also ranked certain countries and areas of the world.
According to the Gartner Group, countries/areas at level I on its scale of compliance—just getting started—include Eastern Europe, many African countries, many South American countries, and several Asian countries, including China. Those at level II—completed the inventory process and have begun the assessment process—include Japan, Brazil, South Africa, Taiwan, and Western Europe. Finally, some companies in the United States, the United Kingdom, Canada, and Australia are at level II while others are at level III. Level III indicates that a program plan has been completed and dedicated resources are committed and in place. Although there are many national and international risks related to the year 2000, our limited review of these key sectors found a number of private-sector organizations that have raised awareness and provided advice. For example: The Securities Industry Association established a Year 2000 committee in 1995 to promote awareness and since then has established other committees to address key issues, such as testing. The Electric Power Research Institute sponsored a conference in 1997 with utility professionals to explore the Year 2000 issue in embedded systems. Representatives of several oil and gas companies formed a Year 2000 energy industry group, which meets regularly to discuss the problem. The International Air Transport Association organized seminars and briefings for many segments of the airline industry. In addition, information technology industry associations, such as the Information Technology Association of America, have published newsletters, issued guidance, and held seminars to focus information technology users on the Year 2000 problem. As 2000 approaches and the scope of the problems has become clearer, the federal government’s actions have intensified, at the urging of the Congress and others.
The amount of attention devoted to this issue has increased in the last year, culminating with the issuance of a February 4, 1998, executive order establishing the President’s Council on Year 2000 Conversion. The Council Chair is to oversee federal agency Year 2000 efforts as well as act as spokesman in national and international forums, coordinate with state and local governments, promote appropriate federal roles with respect to private- sector activities, and report to the President on a quarterly basis. This increased attention could help minimize the disruption to the nation as the millennium approaches. In particular, the President’s Council on Year 2000 Conversion can initiate additional actions needed to mitigate risks and uncertainties. These include ensuring that the government’s highest priority systems are corrected and that contingency plans are developed across government. Agencies have taken longer to complete the awareness and assessment phases of their Year 2000 programs than is recommended. This leaves less time for critical renovation, validation, and implementation phases. For example, the Air Force has used over 45 percent of its available time completing the awareness and assessment phases, while the Gartner Group recommends that no more than about a quarter of an organization’s Year 2000 effort should be spent on these phases. Consequently, priority-setting is essential. According to OMB’s latest report, as of February 15, 1998, only about 35 percent of federal agencies’ mission-critical systems were considered to be Year 2000 compliant. This leaves over 3,500 mission-critical systems, as well as thousands of nonmission-critical systems, still to be repaired, and over 1,100 systems to be replaced. It is unlikely that agencies can complete this vast amount of work in time. Accordingly, it is critical that the executive branch identify those systems that are of the highest priority. 
These include those that, if not corrected, could most seriously threaten health and safety, the financial well-being of American citizens, national security, or the economy. Agencies must also ensure that their mission-critical systems can properly exchange data with other systems and are protected from errors that can be introduced by external systems. For example, agencies that administer key federal benefits payment programs, such as the Department of Veterans Affairs, must exchange data with the Department of the Treasury, which, in turn, interfaces with financial institutions, to ensure that beneficiary checks are issued. As a result, completing end-to-end testing for mission-critical systems is essential. OMB’s reports on agency progress do not fully and accurately reflect the federal government’s progress toward achieving Year 2000 compliance because not all agencies are required to report and OMB’s reporting requirements are incomplete. For example: OMB had not, until recently, required independent agencies to submit quarterly reports. Accordingly, the status of these agencies’ Year 2000 programs has not been monitored centrally. On March 9, 1998, OMB asked 31 independent agencies, including the Securities and Exchange Commission and the Pension Benefit Guaranty Corporation, to report on their progress in fixing the Year 2000 problem by April 30, 1998. OMB plans to include a summary of those responses in its next quarterly report to the Congress. However, unlike its quarterly reporting requirement for the major departments and agencies, OMB does not plan to request that the independent agencies report again until next year. Since the independent agencies will not be reporting again until April 1999, it will be difficult for OMB to be in a position to address any major problems. 
Agencies are required to report their progress in repairing noncompliant systems but are not required to report on their progress in implementing systems to replace noncompliant systems, unless the replacement effort is behind schedule by 2 months or more. Because federal agencies have a poor history of delivering new system capabilities on time, it is essential to know agencies’ progress in implementing replacement systems. OMB’s guidance does not specify what steps must be taken to complete each phase of a Year 2000 program (i.e., assessment, renovation, validation, and implementation). Without such guidance, agencies may report that they have completed a phase when they have not. Our enterprise guide provides information on the key tasks that should be performed within each phase. In January 1998, OMB asked agencies to describe their contingency planning activities in their February 1998 quarterly reports. These instructions stated that contingency plans should be established for mission-critical systems that are not expected to be implemented by March 1999, or for mission-critical systems that have been reported as 2 months or more behind schedule. Accordingly, in their February 1998 quarterly reports, several agencies reported that they planned to develop contingency plans only if they fall behind schedule in completing their Year 2000 fixes. Agencies that develop contingency plans only for systems currently behind schedule, however, are not addressing the need to ensure the continuity of a minimal level of core business operations in the event of unforeseen failures. As a result, when unpredicted failures occur, agencies will not have well-defined responses and may not have enough time to develop and test effective contingency plans. 
Contingency plans should be formulated to respond to two types of failures: those that can be predicted (e.g., system renovations that are already far behind schedule) and those that are unforeseen (e.g., a system that fails despite having been certified as Year 2000 compliant or a system that cannot be corrected by January 1, 2000, despite appearing to be on schedule today). Moreover, contingency plans that focus only on agency systems are inadequate. Federal agencies depend on data provided by their business partners as well as on services provided by the public infrastructure. One weak link anywhere in the chain of critical dependencies can cause major disruptions. Given these interdependencies, it is imperative that contingency plans be developed for all critical core business processes and supporting systems, regardless of whether these systems are owned by the agency. In its latest governmentwide Year 2000 progress report, issued March 10, 1998, OMB clarified its contingency plan instructions. OMB stated that contingency plans should be developed for all core business functions. On March 18, 1998, we issued an exposure draft of a guide to help agencies ensure the continuity of operations through contingency planning. The CIO Council worked with us in developing this guide and intends to adopt it for federal agency use. OMB’s assessment of the current status of federal Year 2000 progress has been predominantly based on agency reports that have not been consistently verified or independently reviewed. Without such independent reviews, OMB and others, such as the President’s Council on Year 2000 Conversion, have no assurance that they are receiving accurate information. OMB has acknowledged the need for independent verification and asked agencies to report on such activities in their February 1998 quarterly reports. 
While this has helped provide assurance that some verification is taking place through internal checks, reviews by Inspectors General, or contractors, the full scope of verification activities required by OMB has not been articulated. It is important that the executive branch set standards for the types of reviews that are needed to provide assurance regarding the agencies’ Year 2000 actions. Such standards could encompass independent assessments of (1) whether the agency has developed and is implementing a comprehensive and effective Year 2000 program, (2) the accuracy and completeness of the agency’s quarterly report to OMB, including verification of the status of systems reported as compliant, (3) whether the agency has a reasonable and comprehensive testing approach, and (4) the completeness and reasonableness of the agency’s business continuity and contingency planning. The CIO Council’s Year 2000 Committee has been useful in addressing governmentwide issues. For example, the Year 2000 Committee worked with the Federal Acquisition Regulation Council and industry to develop a rule that (1) establishes a single definition of Year 2000 compliance in executive branch procurement and (2) generally requires agencies to acquire only Year-2000 compliant products and services or products and services that can be made Year 2000 compliant. The committee has also established subcommittees on (1) best practices, (2) state issues and data exchanges, (3) industry issues, (4) telecommunications, (5) buildings, (6) biomedical and laboratory equipment, (7) General Services Administration support and commercial off-the-shelf products, and (8) international issues. The committee’s effectiveness could be further enhanced. For example, currently agencies are not required to participate in the Year 2000 Committee. Without such full participation, it is less likely that appropriate governmentwide solutions can be implemented. 
Further, while most of the committee’s subcommittees are currently working on plans, they have not yet published these with associated milestones. It is important that this be done and publicized quickly so that agencies can use this information in their Year 2000 programs. It is equally important that implementation of agency activities resulting from these plans be monitored closely and that the subcommittees’ decisions be enforced. Another governmentwide issue that needs to be addressed is the availability of information technology personnel. In their February 1998 quarterly reports, several agencies reported that they or their contractors had problems obtaining and/or retaining information technology personnel. Currently, no governmentwide strategy exists to address recruiting and retaining information technology personnel with the appropriate skills for Year 2000-related work. However, at the March 18, 1998, meeting of the CIO Council, the Office of Personnel Management (OPM) provided the council with information on the tools that are currently available to help agencies obtain and retain staff. In addition, OPM announced that its Director had agreed in principle that the Year 2000 problem was an “emergency or unusual circumstance” that would allow the Director to grant agencies waivers to allow them to rehire former federal personnel without financial penalty on a temporary basis to address the Year 2000 problem. Further, the council agreed that OPM and the Human Resources Technology Council would form a working group to look at any additional tools that could be made available to help agencies obtain and retain staff for the year 2000. This working group is tasked with providing recommendations by May 1998. Given the sweeping ramifications of the Year 2000 issue, other countries have set up mechanisms to solve the Year 2000 problem on a nationwide basis. 
Several countries, such as the United Kingdom, Canada, and Australia, have appointed central organizations to coordinate and oversee their governments’ responses to the Year 2000 crisis. In the case of the United Kingdom, for example, a ministerial group is being established, under the leadership of the President of the Board of Trade, to tackle the Year 2000 problem across the public and private sectors. These countries have also established public/private forums to address the Year 2000 problem. For example, in September 1997, Canada’s Minister of Industry established a government/industry Year 2000 task force of representatives from banking, insurance, transportation, manufacturing, telecommunications, information technology, small and medium-sized businesses, agriculture, and the retail and service sectors. The Canadian Chief Information Officer is an ex-officio member of the task force. It has been charged with providing (1) an assessment of the nature and scope of the Year 2000 problem, (2) the state of industry preparedness, and (3) leadership and advice on how risks could be reduced. This task force issued a report in February 1998 with 18 recommendations that are intended to promote public/private-sector cooperation and prompt remedial action. In the United States, the President’s recent executive order could serve as the linchpin that bridges the nation’s and the federal government’s various Year 2000 initiatives. While the Year 2000 problem could have serious consequences, there is no comprehensive picture of the nation’s readiness. As one of its first tasks, the President’s Council on Year 2000 Conversion could formulate such a comprehensive picture in partnership with the private sector and state and local governments. Many organizational and managerial models exist that the Conversion Council could use to build effective partnerships to solve the nation’s Year 2000 problem. 
Because of the need to move swiftly, one viable alternative would be to consider using the sector-based approach recommended recently by the President’s Commission on Critical Infrastructure Protection as a starting point. This approach could involve federal agency focal points working with sector infrastructure coordinators. These coordinators would be created or selected from existing associations and would facilitate sharing information among providers and the government. Using this model, the President’s Council on Year 2000 Conversion could establish public/private partnership forums composed of representatives of each major sector that, in turn, could rely on task forces organized along economic-sector lines. Such groups would help (1) gauge the nation’s preparedness for the year 2000, (2) periodically report on the status and remaining actions of each sector’s Year 2000 remediation efforts, and (3) ensure the development of contingency plans to ensure the continuing delivery of critical public and private services. As requested, we are providing preliminary information on the status of Year 2000 activities at HUD. As the principal federal agency responsible for housing, community development, and fair housing opportunity programs, HUD provides rental assistance to more than 4 million lower income households, insures mortgages for about 7 million homeowners, and helps revitalize communities and ensure equal housing access. The department had reported expenses of about $35.9 billion in fiscal year 1997, most of it for assisted and public housing. HUD also manages more than $400 billion in mortgage insurance and $460 billion in guarantees of mortgage-backed securities. HUD relies extensively on information and financial management systems to manage its programs. 
HUD officials recognize the importance of ensuring that the department’s systems are Year 2000 compliant; system failures could interrupt the processing of applications for mortgage insurance, the payment of mortgage insurance claims, and the payment of rental assistance. This would place a serious strain on individuals and on the nation’s financial and banking community. The department has more than 200 separate systems, with a total of over 65 million lines of software code. Its assessment revealed that over 31 million lines of code will need to be repaired, costing an estimated $48 million and 570,000 staff hours. It recognizes that making its systems Year 2000 compliant will take aggressive action. HUD established a Year 2000 project office in June 1996. In May 1997 this office issued a readiness guide for HUD staff and contractors, dealing with all phases of a Year 2000 program. The project office also developed a strategy, endorsed by senior HUD officials, with schedules for the completion of all tasks for each system and a tracking mechanism to monitor progress. Central to this strategy was inventorying its automated systems and performing risk assessments of them. On the basis of these risk assessments, HUD officials decided what actions to take on its automated information systems; the following table summarizes the reported status of this work. Although HUD is relying on its plans to replace 12 of its mission-critical systems, its tracking and management systems do not contain information on the status of these replacements. Consequently, it does not know about and cannot respond quickly to development delays that could affect Year 2000 readiness. According to the department’s Year 2000 project officials, they will modify their tracking systems to provide this capability.
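As a rough scale check on the repair estimate above (simple division of the figures HUD reported; the per-line and per-hour rates are derived here, not reported by HUD):

```python
lines_to_repair = 31_000_000   # lines of code HUD assessed as needing repair
est_cost_dollars = 48_000_000  # HUD's estimated repair cost
est_staff_hours = 570_000      # HUD's estimated staff hours

cost_per_line = est_cost_dollars / lines_to_repair
lines_per_hour = lines_to_repair / est_staff_hours

print(f"about ${cost_per_line:.2f} per line")        # about $1.55 per line
print(f"about {lines_per_hour:.0f} lines per hour")  # about 54 lines per hour
```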
According to HUD’s schedule for the 30 mission-critical systems undergoing renovation, testing, and certification or where renovation has not yet begun, all of these actions will be completed—and the systems implemented—by December 31 of this year. HUD is already, however, behind schedule on 20 of these 30 mission-critical systems. While the delays on some of these systems are of only a few days, 13 of the 20 are experiencing delays of 2 months or more. This is significant because HUD is reporting that 5 of these 13 have “failure dates”—the first date that a system will fail to recognize and process dates correctly—between August 1, 1998, and January 1, 1999. One example illustrates this point: HUD’s system for processing claims made by lenders on defaulted single-family home loans is 75 days behind schedule for renovation. The system is now scheduled to be implemented on November 4—only 58 days before January 1, 1999, the date that HUD has determined the current system will fail. In fiscal year 1997, this system processed, on average, a reported $354 million of lenders’ claims each month for defaulted guaranteed loans. If this system fails, these lenders will not be paid on a timely basis; the economic repercussions could be widespread. To better ensure completion of work on mission-critical systems, HUD officials have recently decided to halt routine maintenance on five of its largest systems, beginning April 1 of this year. Further, according to Year 2000 project officials, if more delays threaten key implementation deadlines for mission-critical systems, they will stop work on nonmission-critical systems in order to focus all resources on the most important ones. We concur with HUD’s plans to devote additional attention to its mission-critical systems.
In conclusion, the change of century will initially present many difficult challenges in information technology and has the potential to cause serious disruption to the nation; however, these risks can be mitigated and disruptions minimized with proper attention and management. While HUD has attempted to mitigate its Year 2000 risks, several systems are behind schedule and actions must be taken to avoid widespread economic repercussions. Continued congressional oversight through hearings such as this and those that have been held by other committees in both the House and the Senate can help ensure that such attention continues and that appropriate actions are taken to address this crisis. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions that you or other members of the Committee may have at this time. Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, Exposure Draft, March 1998). Year 2000 Computing Crisis: Office of Thrift Supervision’s Efforts to Ensure Thrift Systems Are Year 2000 Compliant (GAO/T-AIMD-98-102, March 18, 1998). Year 2000 Computing Crisis: Strong Leadership and Effective Public/Private Cooperation Needed to Avoid Major Disruptions (GAO/T-AIMD-98-101, March 18, 1998). Post-Hearing Questions on the Federal Deposit Insurance Corporation’s Year 2000 (Y2K) Preparedness (AIMD-98-108R, March 18, 1998). SEC Year 2000 Report: Future Reports Could Provide More Detailed Information (GAO/GGD/AIMD-98-51, March 6, 1998). Year 2000 Readiness: NRC’s Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998). Year 2000 Computing Crisis: Federal Deposit Insurance Corporation’s Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998). Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998). 
FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998). Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998). Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems’ Year 2000 Problem (GAO/AIMD-98-48, January 7, 1998). Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997). Year 2000 Computing Crisis: National Credit Union Administration’s Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997). Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/AIMD-98-6, October 22, 1997). Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997). Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997). Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997). Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997). Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997). Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Year 2000 Computing Crisis: Time is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997). 
Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997). Veterans Benefits Computer Systems: Risks of VBA’s Year-2000 Efforts (GAO/AIMD-97-79, May 30, 1997). Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997). Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997). Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997). Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997). High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997).
GAO discussed the nation's year 2000 computing crisis as well as the year 2000 program being implemented at the Department of Housing and Urban Development (HUD). GAO noted that: (1) the public faces a risk that critical services could be severely disrupted by the year 2000 computing crisis; (2) the federal government is extremely vulnerable to the year 2000 issue due to its widespread dependence on computer systems to process financial transactions, deliver vital public services, and carry out its operations; (3) in addition, the year 2000 could cause problems for many of the facilities used by the federal government that were built or renovated within the last 20 years and that contain embedded computer systems to control, monitor, or assist in operations; (4) key economic sectors that could be seriously impacted if their systems are not year 2000 compliant are: information and telecommunications; banking and finance; health, safety, and emergency services; transportation; utilities; and manufacturing and small business; (5) agencies have taken longer to complete the awareness and assessment phases of their year 2000 programs than is recommended; (6) this leaves less time for critical renovation, validation, and implementation phases; (7) the Office of Management and Budget's (OMB) reports on agency progress do not fully and accurately reflect the federal government's progress toward achieving year 2000 compliance because not all agencies are required to report and OMB's reporting requirements are incomplete; (8) in January 1998, OMB asked agencies to describe their contingency planning activities in their February 1998 quarterly reports; (9) accordingly, in their 1998 quarterly reports, several agencies reported that they planned to develop contingency plans only if they fall behind schedule in completing their year 2000 fixes; (10) OMB's assessment of the current status of federal year 2000 progress has been predominantly based on agency reports that have not 
been consistently verified or independently reviewed; (11) given the sweeping ramifications of the year 2000 issues, other countries have set up mechanisms to solve the year 2000 problem on a nationwide basis; (12) HUD officials recognize the importance of ensuring that its systems are year 2000 compliant; system failures could interrupt the processing of applications for mortgage insurance, the payment of mortgage insurance claims, and the payment of rental assistance; (13) HUD established a year 2000 project office in June 1996; and (14) to better ensure completion of work on mission-critical systems, HUD officials have recently decided to halt routine maintenance on five of its largest systems, beginning April 1 of this year.
DOD’s acquisition mission represents the largest buying enterprise in the world. The defense acquisition workforce—which consists of military and civilian program managers, contracting officers, engineers, logisticians, and cost estimators, among others—is responsible for effectively awarding and administering contracts totaling more than $250 billion annually. The contracts may be for major weapon systems, support for military bases, consulting services, and commercial items, among other things. A skilled acquisition workforce is vital to maintaining military readiness, increasing the department’s buying power, and achieving substantial long-term savings through systems engineering and contracting activities. DOD’s acquisition workforce experienced significant cuts during the 1990s following the end of the Cold War and, by the early 2000s, began relying more heavily on contractors to perform many acquisition support functions. DOD reported that from 1998 through 2008, the number of military and civilian personnel performing acquisition activities decreased 14 percent from about 146,000 to about 126,000 personnel. Amid concerns about the growing reliance on contractors and skill gaps within the military and civilian acquisition workforce, in April 2009, the Secretary of Defense announced his intention to rebalance the workforce mix to ensure that the federal government has sufficient personnel to oversee its acquisition process. To support that objective, DOD’s April 2010 strategic workforce plan stated that DOD would add 20,000 military and civilian personnel to its workforce by fiscal year 2015. Further, in 2008, Congress created DAWDF, codified in section 1705 of title 10 of the U.S. Code, to provide DOD a dedicated source of funding for rebuilding capacity, improving quality, and rebalancing the workforce. Congress has specified in statute the level of DAWDF funding for a given fiscal year and has adjusted that level several times. 
For example, the National Defense Authorization Act for Fiscal Year 2010 specified $100 million for fiscal year 2010; $770 million for fiscal year 2011; $900 million for fiscal year 2012; $1.2 billion for fiscal year 2013; $1.3 billion for fiscal year 2014; and $1.5 billion for fiscal year 2015. In the National Defense Authorization Act for Fiscal Year 2013, Congress extended the requirement for DOD to fund DAWDF through 2018 and revised the funding levels to $500 million for fiscal year 2013; $800 million for fiscal year 2014; $700 million for fiscal year 2015; $600 million for fiscal year 2016; $500 million for fiscal year 2017; and $400 million for fiscal year 2018. Currently, the law mandates $500 million in DAWDF funding for a fiscal year. However, the law also authorizes the Secretary of Defense to reduce annual funding if the Secretary determines that the mandated amount is greater than what is reasonably needed for a fiscal year. The amount may not be reduced to less than $400 million for a fiscal year. Section 1705 of title 10, U.S. Code, specifies three ways that DAWDF can be funded:

Appropriations made for DAWDF. Appropriations were made for the fund in fiscal years 2010 through 2015 and were available for obligation for 1 fiscal year—the fiscal year for which they were appropriated.

Credits, or funds that are remitted by DOD components from operation and maintenance accounts. Funds credited to the account are available for obligation in the fiscal year for which they are credited and in the 2 succeeding fiscal years.

Transfers of expired funds. During the 3-year period following the expiration of appropriations to DOD for research, development, test and evaluation; procurement; or operation and maintenance, DOD may transfer such funds to DAWDF to the extent provided in appropriations acts. 
To date, Congress has granted authority for DOD to transfer operation and maintenance funds included in the appropriations acts for fiscal years 2014, 2015, and 2016 to DAWDF. Funds transferred to DAWDF are available for obligation in the fiscal year for which they are transferred and in the 2 succeeding fiscal years. Under current law, DOD is required to credit the fund $500 million for a fiscal year, as previously mentioned. However, the law directs that the amount required to be remitted by DOD components be reduced by any amounts appropriated for or transferred to DAWDF for that fiscal year. Collectively, from fiscal years 2008 through 2016, about $4.5 billion has been deposited into DAWDF using various combinations of these processes (see table 1). From fiscal years 2008 through 2016, DOD obligated about $2.2 billion—or about 60 percent—for recruiting and hiring and about $1.2 billion—32 percent—for training and development. The remaining $269 million, or 7 percent, was used for retention and recognition. To help support rebuilding the workforce, DOD obligated the most funds for recruiting and hiring through fiscal year 2015; however, in fiscal year 2016, DOD obligated slightly more for training and development than for recruiting and hiring (see fig. 1).

Several organizations within DOD play key roles in the management and oversight of DAWDF (see table 2). DOD’s acquisition workforce management framework includes centralized policy, decentralized execution by the DOD components, and joint governance forums. DOD established the Senior Steering Board and the Workforce Management Group in 2008 to oversee DAWDF activities. 
The senior acquisition executives for the military departments, DOD functional acquisition career field leaders, and heads of major DOD agencies were designated as members of the Senior Steering Board, along with representatives from the Office of the Under Secretary of Defense (Comptroller) and Chief Financial Officer and the Office of the Under Secretary of Defense for Personnel and Readiness. This board is expected to meet quarterly and provide strategic oversight of DAWDF. The Workforce Management Group includes representatives from the offices on the Senior Steering Board, among others. It is expected to meet bimonthly and oversee DAWDF operations and management (see table 3). In June 2012, we reported on DOD’s initial implementation of DAWDF; we found that the ability of DOD components to effectively plan for and execute efforts supported by DAWDF was hindered by delays in DOD’s DAWDF funding processes and the absence of clear guidance on the availability and use of funds. We also found that HCI and Comptroller officials had differing views on how best to manage the DAWDF funding process. Comptroller officials acknowledged that they delayed sending out credit remittance notices and allowed components to delay crediting DAWDF funds. At that time, we recommended that DOD revise its DAWDF guidance to clarify when and how DAWDF funds should be collected, distributed, and used. We also recommended that DOD clearly align DAWDF’s funding strategy with the department’s strategic human capital plan for the acquisition workforce. DOD concurred with these recommendations. In October 2016, DOD completed an updated acquisition workforce strategic plan. We discuss our assessment of the extent to which DOD has taken action to address these recommendations later in this report. 
Further, we also recommended in June 2012 that DOD establish performance metrics for DAWDF to allow senior leadership to track how the fund is being used to support DOD’s acquisition workforce improvement goals. DOD concurred and subsequently established four metrics to track the defense acquisition workforce: (1) the size of the acquisition workforce, (2) the shape of the acquisition workforce, (3) Defense Acquisition Workforce Improvement Act certification rates, and (4) the education level of acquisition workforce personnel. In its October 2016 acquisition workforce strategic plan, DOD reported that the cumulative efforts of the DOD components from fiscal year 2008 through fiscal year 2015 increased the size of the acquisition workforce by 24 percent, from about 126,000 to 156,000 personnel. The department accomplished this by hiring additional personnel, converting contractor positions to civilian positions, adding military personnel to the acquisition workforce, and administratively recoding existing personnel. DAWDF contributed to this success by helping to increase the size of the acquisition workforce and achieve a better balance of early-, mid-, and senior-career personnel. DOD reported that more than 96 percent of the acquisition workforce either met or was on track to meet certification requirements within required time frames. DOD also reported that the share of personnel with bachelor’s degrees or higher increased from 77 percent in fiscal year 2008 to 84 percent in fiscal year 2015, while the share with graduate degrees increased from 29 percent to 39 percent over the same time period. These changes were accomplished during a period of budget uncertainties and sequestration, during which time DOD imposed hiring freezes and curtailed travel, training, and conferences, among other actions. In December 2015, we reported that DOD had accomplished some of its goals in rebuilding the acquisition workforce and used DAWDF to help in these efforts. 
While DOD increased the size of its acquisition workforce, we found that it had not reached its targets for 6 of 13 acquisition career fields, including those for 3 priority fields—contracting, engineering, and business. To ensure that DOD has the right people with the right skills to meet future needs, we recommended that DOD complete competency assessments, issue an updated acquisition workforce strategic plan, and issue guidance on prioritizing the use of funding. DOD concurred with our recommendations. We discuss DOD’s efforts to address these recommendations later in this report. DOD, enabled by recent congressional action, has improved its ability to fund DAWDF, which allowed DOD to fund DAWDF in 2 months, compared to the 24 months the credit funding process took in fiscal year 2014. Specifically, in the DOD Appropriations Act for Fiscal Year 2014, Congress authorized DOD to transfer operation and maintenance funds appropriated by the act to DAWDF consistent with section 1705 of title 10 of the U.S. Code, which permits DOD to transfer expired funds for 3 years following their expiration. The operation and maintenance funds appropriated under the act expired at the end of fiscal year 2014, so the transfer authority authorized by Congress and section 1705 of title 10 gives DOD the authority to transfer expired fiscal year 2014 funds to DAWDF in fiscal years 2015 through 2017. Congress subsequently enacted such transfer authority for both fiscal years 2015 and 2016. As a result, DOD is authorized to fund DAWDF by transferring expired fiscal year 2015 funds through fiscal year 2018 and expired fiscal year 2016 funds through fiscal year 2019. Enabled by this authority, the DOD Comptroller funded DAWDF with $477 million of expired funds in one transaction for fiscal year 2015 and $400 million of expired funds in one transaction for fiscal year 2016. The DOD Comptroller then allotted those funds to HCI in a single transaction for each of those fiscal years. 
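The transfer-window rule described above can be sketched as a small helper (an illustrative sketch of the rule as described in the text, assuming a 1-year operation and maintenance appropriation that expires at the end of its fiscal year; the function name is hypothetical):

```python
def transfer_window(appropriation_fy):
    """Fiscal years in which expired funds from a 1-year appropriation
    may be transferred to DAWDF: the 3-year period following expiration,
    per 10 U.S.C. 1705 as described in the text."""
    expiration_fy = appropriation_fy  # 1-year funds expire at fiscal year end
    return (expiration_fy + 1, expiration_fy + 3)

# Expired FY2014 O&M funds were transferable in FY2015 through FY2017.
print(transfer_window(2014))  # (2015, 2017)
# Expired FY2016 O&M funds are transferable in FY2017 through FY2019.
print(transfer_window(2016))  # (2017, 2019)
```

This matches the text's examples: fiscal year 2015 funds remain transferable through fiscal year 2018, and fiscal year 2016 funds through fiscal year 2019.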
Our analysis found that the fiscal year 2015 funding process took 2 months: DOD submitted its written determination of the amount of DAWDF funding required on June 23, 2015, and the DOD Comptroller transferred the funds into the DAWDF account on August 24, 2015. HCI officials said that as a result of the ability to transfer expired funds, they were able to distribute, or sub-allot, to components 75 percent of their approved fiscal year 2016 funding before the start of the fiscal year. In contrast, DOD often experienced delays in its previous funding process. Prior to 2015, DOD primarily relied on credits remitted by the DOD components to meet DAWDF funding requirements. To complete this process, the Comptroller calculated each component’s share of the required credit based on the amount specified in the law, offset by the amount of any annual appropriations made for DAWDF. The Comptroller then sent a notice to each component specifying the amount of the credit it was to remit by a specific date. After the components remitted the funds, the Comptroller allotted those funds to HCI, which in turn sub-allotted DAWDF funds to the components based on their approved plans for that year. When DAWDF was first enacted, credits were to be remitted to the fund not later than 30 days after the end of each fiscal quarter. In 2009, Congress amended the DAWDF legislation to require DOD components to remit credit funding not later than 30 days after the end of the first quarter of each fiscal year. However, our analysis found that under the credit funding process, the DOD Comptroller delayed sending out credit remittance notices and allowed components to delay remitting funds to DAWDF. 
In 2012, Comptroller officials said that they generally did not begin the process of collecting and distributing DAWDF funds before DOD received its annual appropriations to minimize the amount of credit funding collected from other DOD programs and that the funds should not be collected until necessary for DAWDF. These officials noted that this was particularly important during a continuing resolution period where DOD’s funding is often limited to the prior year’s appropriation level or less, which puts additional stress on other programs required to contribute funds to DAWDF. As a result, DOD components did not complete remitting credit funds within the time frames required by DOD for any year that the credit funding process was used. For example, the notice for fiscal year 2013 was sent in June 2013 and required components to remit credits by October 2013. However, the remittance process was not completed until September 2014, or 11 months past the required deadline. Similarly, for fiscal year 2014, the remittance process was not completed until May 2016, or 24 months after DOD submitted its written determination of the amount of DAWDF funding required for the fiscal year—the initiation of the funding process. Figure 2 compares the length of time between the initiation of the fiscal year 2014 funding process and the last credit remittance of DAWDF funding, to the fiscal year 2015 time frames for transferring expired funds. Despite the improved timeliness of funding DAWDF by transferring expired funds, DOD experienced a significant increase in the amount of carryover funds by the beginning of fiscal year 2016. Specifically, the carryover balance increased from $129 million as of October 1, 2014, to $875 million as of October 1, 2015, or nearly twice the amount DOD eventually obligated in fiscal year 2016. 
The growth in the amount of carryover was primarily due to the delay in the remittance of $509 million in funding for fiscal year 2014—or about 86 percent of the amount to be credited for that year—until 2015. As a result, about $869 million was deposited into HCI’s DAWDF account during fiscal year 2015, while components only obligated $358 million that year. Additional factors also contributed to the large carryover balance:

According to HCI officials, DOD’s requirements were sometimes less than the minimum amount that DOD was required to put into DAWDF. For fiscal year 2014, for example, Congress mandated $800 million in DAWDF funding (which was reduced by the Secretary of Defense to $640 million, as permitted by the law), but the components only planned to obligate $498 million.

HCI and component officials told us that delays in remittances and additional factors, such as hiring freezes, affected DAWDF execution for several years. Despite having $129 million in carryover funds, HCI instructed DOD components to delay execution of hiring and other planned fiscal year 2015 initiatives. HCI officials told us that because of the uncertainty of when the fiscal year 2014 credits would be remitted, they had to ensure that they had sufficient funds to pay the salaries of the personnel who had been hired in the previous 2 years using DAWDF funds.

In addition, DOD components did not always obligate all of their DAWDF funding for each fiscal year. For example, for fiscal year 2015, the Defense Contract Management Agency requested $84.4 million in funding for hiring, training, and retention initiatives and was only able to obligate $61.9 million in that year. Similarly, in fiscal year 2015, the Air Force Materiel Command planned to spend $5.7 million in DAWDF funding for recruiting incentives. 
However, Air Force officials told us that because of delays in the remittance of fiscal year 2014 funds from the components, the Air Force was instructed to delay its hiring plans, which in turn affected the number of personnel available to accept the recruiting incentives offered. Of the $5.7 million approved for recruiting incentives that fiscal year, the command was only able to obligate $1.3 million. Overall, from fiscal years 2011 through 2016, DOD components obligated between 68 and 92 percent of the amount that HCI approved them to spend (see fig. 3). Congress acted to reduce the carryover balance in the National Defense Authorization Act for Fiscal Year 2017. The act requires DOD, during fiscal year 2017, to transfer $475 million to the Treasury from amounts available from credits to DAWDF. The act also requires DOD to transfer $225 million of the funds required to be credited to DAWDF in fiscal year 2017 to the Rapid Prototyping Fund. When coupled with DOD’s fiscal year 2017 spending and funding plans, we estimate that these actions will result in a carryover balance of about $156 million at the beginning of fiscal year 2018 in the DAWDF account, or about 26 percent of DOD’s estimated fiscal year 2018 spending (see fig. 4). With the transfer authority enacted by Congress in the DOD Appropriations Act for Fiscal Year 2016, and section 1705 of title 10, DOD is authorized, for example, to transfer operation and maintenance funds appropriated in fiscal year 2016, which expired on September 30, 2016, into DAWDF in fiscal years 2017, 2018, and 2019. If this transfer authority is not renewed to enable DOD to transfer expired funds beyond 2019, DOD stated that it will be required to revert to the credit funding process that it had previously used. As of January 2017, there have been no changes to the guidance or any agreement between HCI and the Comptroller to address the issues we raised in our 2012 report about how to resolve the credit funding delays. 
During our current review, a Comptroller official reiterated that credit funding came at the expense of programs and activities that had been included in the President’s budget submission. We are not making new recommendations to address the funding process and continue to believe that DOD needs to implement the recommendation we made in 2012. Those actions, and the ability to transfer expired funds through fiscal year 2019, will provide DOD the time it needs to assess options, if necessary, to improve the credit funding processes. DOD has taken several actions to improve management and oversight processes for DAWDF over the past year, including issuing an updated acquisition workforce strategic plan and DAWDF operating guidance. DOD’s August 2016 DAWDF guidance required components to submit annual and 5-year spending plans and formalized the requirement to hold a midyear review to assess DAWDF execution and discuss best practices. However, additional opportunities exist to better align DOD’s strategic plan and DAWDF spending plans, improve consistency in how components are using the fund to pay for personnel to help manage the fund, and improve the quality of data on how the fund is being used. Specifically, DOD’s October 2016 strategic plan indicates that the department intends to shift its emphasis from rebuilding the workforce to improving its capabilities. DOD’s plan established four goals and related strategic priorities that it intends to use DAWDF to help support. The October 2016 strategic plan, however, does not identify time frames, metrics, or projected budgetary requirements associated with these goals and strategic priorities or clearly prioritize DAWDF funding toward achieving them. DOD components identified more than $3 billion in potential DAWDF funding requirements for fiscal years 2018 through 2022, which is expected to exceed available funding by $500 million over this period. 
Component policies and practices differ on the use of the fund to pay the salaries of staff members who help manage DAWDF and execute DAWDF initiatives. Further, component data we reviewed that were provided to HCI for inclusion in DOD’s DAWDF annual report to Congress and monthly oversight of the fund did not always accurately reflect the results of DAWDF-funded initiatives, which DOD officials attributed to resource constraints and the absence of processes to verify the data collected. In his June 2016 memorandum, the Under Secretary of Defense for Acquisition, Technology and Logistics stated that DOD intends to sustain the acquisition workforce size and continue to improve its professionalism. Similarly, DOD’s October 2016 acquisition workforce strategic plan for fiscal years 2016 through 2021 stated that DOD must sustain the acquisition workforce size, factoring in workload demand and requirements; ensure that its personnel continue to increase their professionalism; and continue to expand talent management programs to include recruitment, hiring, training, development, recognition, and retention incentives by using DAWDF and other appropriate tools. To accomplish this, the strategic plan identified four broad goals—making DOD an employer of choice; shaping the acquisition workforce; improving the quality and professionalism of the acquisition workforce; and improving workforce policies, programs, and processes—and related strategic priorities (see table 4). In 2012, we recommended that DOD clearly align DAWDF’s funding strategy with the department’s strategic human capital plan for the acquisition workforce. DOD concurred and stated that the department would continue to improve alignment of a revised funding strategy so that it supports successful execution of the workforce initiatives. 
However, while DOD’s October 2016 strategic plan provides an overall framework for the acquisition workforce and broadly indicates how DAWDF will be used to support these efforts, it does not identify time frames, metrics, or projected budgetary requirements associated with these goals or strategic priorities. As part of our work on leading practices in strategic workforce planning, we have shown that it benefits agencies to determine the critical skills and competencies their workforces need to achieve current and future agency goals and missions; to identify gaps, including those that training and development strategies can help address; and to develop customized strategies to recruit for highly specialized and hard-to-fill positions. Because the new strategic plan does not provide a clear link between its goals for the acquisition workforce and how DAWDF funds should be used, it is unclear how the department is ensuring that DAWDF targets its most critical workforce needs. HCI and Director, Acquisition Career Management (DACM) officials noted that each military department has prepared or is preparing a workforce plan to help guide its efforts. Further, in our December 2015 report, we recommended that DOD issue an updated workforce plan that included revised career field goals and that HCI issue guidance to the components to focus hiring on priority career fields. DOD agreed that additional guidance was essential to ensure that DOD had the right people with the right skills to meet future needs, but noted that determining which career fields were a priority was most appropriately determined by the components. DOD stated that it would work with the components to issue guidance that would best meet both enterprise and specific component workforce needs. 
In that regard, the October 2016 strategic plan reiterated the need to shape the acquisition workforce to achieve current and future acquisition requirements but did not establish specific targets for the acquisition workforce as a whole or targets for specific career fields. HCI officials noted that DOD’s objective is to sustain the current level of the acquisition workforce and understand the workload demand. As part of its DAWDF planning process for fiscal year 2017, HCI requested data from the DACMs of each military department on their estimates for future DAWDF hiring through fiscal year 2022. Detailed breakouts by career field were not required. At the component level, we found a range of direction and data on future hiring efforts. For example:

The Army’s fiscal year 2017 memorandum accompanying its call for DAWDF funding requests stated that commands should target hiring requests in the following areas: financial management, cost estimating, contracting, engineering, science and technology, and program management. The Army DACM office provided HCI an estimate of planned hires by career field for fiscal year 2017, which indicated that about 80 percent of the Army’s fiscal year 2017 DAWDF hires were planned for the contracting and engineering career fields.

The Navy’s fiscal year 2017 guidance accompanying its call for DAWDF funding requests does not specify which acquisition career fields to target for hiring requests, but Navy DACM officials stated that they do obtain input from the commands regarding their acquisition workforce hiring needs. The Navy indicated that it plans to hire 255 entry-level personnel and an additional 100 in the next 5 years to address attrition in contracting and to hire engineers in new areas such as cybersecurity.
The Air Force’s March 2016 DAWDF guidance highlighted that DAWDF funds would be used to support the program management, contracting, and test and evaluation career fields, among others, but it did not specify critical career fields where DAWDF hiring should be focused. Air Force officials stated that their fiscal year 2017 DAWDF program guidance did not request hiring initiatives, but the Air Force made a separate call for hiring requirements as a part of an overarching Air Force program for force renewal, which would be augmented by DAWDF for acquisition hiring. This separate call did not specify the number of hires by career field.

DOD’s October 2016 acquisition workforce strategic plan noted that one of DOD’s goals was to shape the acquisition workforce to achieve current and future acquisition requirements. The absence of revised career field goals, coupled with the variation in the details provided by DOD components, underscores the importance of further management attention and guidance in this area, consistent with our December 2015 recommendation.

DOD has taken a number of recent actions to mature its management and oversight of the fund, including issuing DAWDF operating guidance in August 2016 and initiating efforts to enhance long-range planning and improve component reporting requirements. HCI officials stated that until recently, HCI did not require DOD components to estimate requirements across the time period covered by the Future Years Defense Program, in part because DOD officials were uncertain whether DAWDF would be permanent. As such, HCI required components to focus their efforts on identifying initiatives that could be funded in the upcoming fiscal year.
Further, HCI and DACM officials noted that because DAWDF was intended to supplement other sources of funding that may already be available, components often used the flexibility provided by DAWDF to address more short-term gaps and emerging needs for training and retention initiatives, which may not lend themselves to long-term strategic planning. For example, in fiscal year 2015, the Defense Acquisition University, the Navy, and the Army each provided cybersecurity-related training using DAWDF, including master’s level college courses in cybersecurity and a Naval Postgraduate School cybersecurity certificate program. In its August 2016 guidance, however, HCI directed each DACM to compile, among other things, annual and 5-year hiring and spending plans. According to this guidance, DOD components are to identify opportunities for using DAWDF and provide funding requests to their DACMs for review and approval. In turn, the guidance requires each DACM to ensure that DAWDF proposals are integrated and coordinated within each component. The timing of this process varies by component, but the acquisition commands we met with start this effort between February and April. HCI typically requests that the components submit upcoming fiscal year requests for review in July and meets with components in August so that plans can be approved by the end of the fiscal year in September. HCI and DACM officials stated that they are working to improve the planning process and to develop better estimates of DAWDF needs. Overall, HCI approved $579 million in fiscal year 2017 DAWDF initiatives, an increase of 20 percent over the $482 million approved for DOD’s fiscal year 2016 initiatives. According to military department DACM officials, the increase includes plans to hire additional personnel in contracting, information technology, and test and evaluation. 
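The year-over-year change in approved initiatives cited above can be verified with a quick calculation (a minimal sketch using only the dollar figures as reported; the variable names are illustrative):

```python
# HCI-approved DAWDF initiative totals, in millions of dollars, as reported
fy2016_approved = 482
fy2017_approved = 579

# Percentage increase from fiscal year 2016 to fiscal year 2017
pct_increase = (fy2017_approved - fy2016_approved) / fy2016_approved * 100
print(f"{pct_increase:.1f}%")  # prints 20.1%
```

The result rounds to the 20 percent increase stated in the report.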
To execute its fiscal year 2017 initiatives, DOD expects to use both carryover funds and expired funds that will be available for obligation once they are transferred to DAWDF. Further, DOD components identified more than $3 billion in potential DAWDF funding requirements for fiscal years 2018 through 2022. As submitted, the components’ collective annual DAWDF funding requirements over this period ranged from $591 million in fiscal year 2018 to $628 million by fiscal year 2022 (see table 5). Of the more than $3 billion in estimated funding requirements for fiscal years 2018 through 2022, DOD components reported that they planned to request about $1.2 billion—or about 41 percent—to hire more than 6,000 new acquisition personnel. Components also reported that they planned to request about $1.4 billion—or about 46 percent—for training the acquisition workforce, developing new talent, and targeting competency gaps, while another $258 million—or 8 percent—would be requested for retention and recognition. As reflected in table 5, the components’ collective estimated annual DAWDF funding requirements exceed $500 million in each of fiscal years 2018 through 2022. HCI and the components will need to prioritize funding requests since estimated funding requirements may exceed available DAWDF funding over this period.

HCI, DACM, and acquisition command officials noted that providing management and oversight is complicated by differing views over whether DAWDF funds can be used to pay for management personnel. For example, officials at the Naval Sea Systems Command—which has more than 18,000 acquisition workforce personnel—told us it has one full-time DOD civilian who is responsible for managing DAWDF and overseeing its initiatives. Command officials told us that they use their operation and maintenance budget to pay this DAWDF fund manager.
We identified differences of opinion by HCI, DACM, and acquisition command officials on whether their offices could use DAWDF to help pay for personnel to manage the fund and under what circumstances. For example:

HCI officials said that their office does not use DAWDF funding to pay for personnel to manage DAWDF. HCI’s August 2016 guidance indicates that DAWDF can be used to hire interns, entry-level personnel, journeymen, experts, and highly qualified experts assigned to an acquisition career field. The guidance prohibits using DAWDF to pay the base salary of any person who was a DOD acquisition workforce member as of January 28, 2008, and who has continued in the employment of the department since such time without a break in such employment of more than 1 year.

The Air Force DACM approved the use of DAWDF to pay the salaries of at least 12 civilian acquisition and nonacquisition workforce personnel to manage DAWDF initiatives in hiring and training, as well as to manage DAWDF itself. These personnel were located within the Air Force Personnel Center, the Air Force Institute of Technology, and the Air Force Materiel Command. The Air Force’s March 2016 guidance specifically permits using DAWDF to pay for personnel to support and execute DAWDF initiatives. The guidance does not specify whether those personnel must be acquisition workforce members.

Naval Sea Systems Command officials told us that they believed that DAWDF could not be used to hire any personnel to help manage DAWDF. However, the Navy DACM told us that the Navy as a whole had approximately 16 full-time equivalents supporting the management and execution of DAWDF. Five of these 16 positions were funded by DAWDF and were not acquisition coded.

The Army Contracting Command received approval from the Army DACM to use DAWDF to pay for a DAWDF fund manager, which the Army identified as an acquisition-coded position.
Army DACM officials told us that they do not believe that DAWDF funds can be used to pay for DAWDF personnel to manage the fund unless they are in acquisition-coded positions. The Army’s October 2016 guidance specifies that DAWDF may be used for new hires placed in acquisition-coded positions.

Defense Logistics Agency officials told us that they believed they were not allowed to use DAWDF to pay the salaries of any personnel responsible for managing DAWDF. As a result, the Defense Logistics Agency uses its regular budget to pay the salary of the person responsible for overseeing its DAWDF initiatives.

Federal internal control standards indicate that sufficient management personnel are needed to oversee federal programs and that agencies need clear and consistent policies and procedures to support consistent accomplishment of agency objectives. HCI’s August 2016 guidance, however, did not clearly indicate whether DOD components could use DAWDF to pay the salaries of personnel to manage DAWDF and under what conditions, and the military departments’ guidance is not consistent on the issue. Without additional clarification on whether and under what conditions DAWDF funds may be used to pay for personnel to manage DAWDF, DOD components risk using DAWDF funding inconsistently or, if such use is permitted, missing opportunities to enhance management and oversight of the fund.

DOD’s August 2016 guidance identifies several new and maturing processes HCI will use to improve DOD’s management and oversight of DAWDF. For example, in addition to the requirement for the DOD components to submit annual and 5-year spending plans, DOD’s August 2016 guidance formalized the requirement to hold a midyear review to assess DAWDF execution and discuss best practices, among other issues, as a part of HCI’s management and oversight of the fund.
HCI conducted midyear reviews in 2015 and 2016 and believes they were beneficial. Building on the midyear review, the August 2016 guidance also includes a new requirement for all DAWDF users to submit annual year in review reports beginning in 2016. Required data include a summary of the implementation of DAWDF initiatives in a standardized format, including details on hiring—by career field—and training, recruiting, and retention initiatives. According to HCI officials, these data will be used to compile the DAWDF annual report to Congress and provide more detailed and consistent information on the execution of the fund. HCI officials also hold monthly teleconferences with components to discuss funding requests and execution.

Nevertheless, HCI and component officials we spoke with acknowledged shortcomings in how they collected and reported data on DAWDF-funded initiatives, citing resource constraints and the absence of processes to verify the data collected. We found as a part of our review of fiscal year 2015 DAWDF initiatives that officials managing DAWDF did not have complete and accurate data on DAWDF-funded initiatives to meet reporting requirements and oversee the fund. To help meet congressional reporting requirements and assess fund execution, HCI requested that DOD components submit highlights of their DAWDF accomplishments for the year for inclusion in DOD’s annual report to Congress. However, we found that the components did not collect complete and accurate information on their efforts, and these inaccuracies were at times reflected in DOD’s report to Congress. For example, DOD’s fiscal year 2015 report to Congress highlighted that DAWDF funded a total of 287 student loan repayments, but the Army alone provided us documentation that it awarded student loan repayments to 762 recipients that year using DAWDF. Further, HCI requires DOD components to submit a monthly report to track program execution status.
This report is intended to capture the monthly spending plan and execution against that plan, hiring data, and accomplishments associated with training and other incentives. However, DOD components did not always submit monthly reports. For example, of the 20 components that obligated DAWDF funds in fiscal year 2015, only 7 components provided HCI monthly reports in September 2015. HCI stated that this was because key DAWDF personnel transitioned to different jobs during the September and October 2015 time frame. We also found that some information provided by the components to HCI as a part of their monthly reporting requirements was either incomplete or inaccurate. For example:

The Army did not report any tuition assistance recipients to HCI at the end of fiscal year 2015, but the Army Materiel Command provided documentation showing that it provided DAWDF-funded tuition assistance to 233 acquisition workforce personnel. Army officials explained that the discrepancy arose because the acquisition personnel who received DAWDF-funded tuition assistance were reported under a different category.

Similarly, the Air Force DACM reported to HCI that the Air Force used DAWDF to help provide student loan repayment benefits to 8 personnel in fiscal year 2015. The Air Force Materiel Command told us that there were 32 recipients in the same year, but our analysis indicated that the actual number was 40.

While the actions taken to improve management and oversight processes, if fully implemented, can help address the issues we identified during our review of fiscal year 2015 initiatives, it is not clear that these new processes include specific steps to verify the data that are collected and reported. Federal internal control standards state that programs need accurate data to determine whether they are meeting their agencies’ strategic and annual performance plans and meeting their goals for accountability for effective and efficient use of resources.
To meet this standard, programs require procedures to verify that required data are complete and accurate. Without taking actions to ensure that the data reported are complete and accurate, HCI and DOD components increase their risk that they will not be able to determine whether they are meeting their goals or provide accurate information for DOD’s annual DAWDF reports to Congress.

DOD’s use of DAWDF is at a critical juncture, in which it will no longer use the fund to grow the workforce but rather to sustain and build on the progress made over the past 9 years. Recent congressional actions have provided more stability in the level of funding to be credited to DAWDF, authorized the transfer of expired funds to DAWDF through fiscal year 2019, and addressed the carryover of unobligated DAWDF funds. Taken as a whole, these actions should facilitate DOD’s efforts to manage DAWDF but also require that DOD take greater initiative to maximize the opportunities these changes provide. DOD’s October 2016 strategic plan provides an overall framework for the acquisition workforce and broadly indicates how DAWDF will be used to support these efforts, but it does not identify time frames, metrics, or projected budgetary requirements associated with these goals or strategic priorities. Further, the components’ future DAWDF funding requirements average more than $600 million a year through fiscal year 2022—or $100 million more per year than DOD officials told us that they can put into DAWDF for a fiscal year. Clearly aligning DAWDF funding with DOD’s strategic plan—as we recommended in 2012—may help DOD determine how to prioritize component spending plans. At the tactical level, our work found that DOD components’ guidance, practices, and views on whether they could use DAWDF to pay for personnel to help manage the fund varied.
Our work also found that components collected and reported data to HCI on DAWDF-funded initiatives that had not been verified, attributable in their view to resource constraints and the absence of processes to ensure the accuracy and completeness of the data. Addressing these issues in a timely fashion is necessary for sound management of the fund and is consistent with federal internal control standards.

We recommend that the Director of Human Capital Initiatives take the following two actions:

Clarify whether and under what conditions DAWDF funds could be used to pay for personnel to help manage the fund.

In collaboration with cognizant officials within DOD components, ensure that components have processes in place to verify the accuracy and completeness of data on the execution of initiatives funded by DAWDF.

We provided a draft of this report to DOD for comment. In its comments, reproduced in appendix II, DOD partially concurred with both of the recommendations and indicated actions that will be or have been taken to address them. DOD also provided technical comments, which we incorporated as appropriate. In response to our recommendation that DOD clarify whether DAWDF funds could be used to pay for personnel to help manage the fund, DOD stated that the next release of the DAWDF Desk Operating Guide would provide the recommended clarity. In response to our recommendation that DOD ensure that processes are in place to verify the accuracy and completeness of data on the execution of DAWDF initiatives, DOD noted that it had made significant management and other changes to improve the accuracy and completeness of data used and provided by components on the execution of initiatives funded by DAWDF. DOD noted that it had, among other actions, assigned a full-time DAWDF program manager; issued guidance to improve data validity, consistency, and alignment; and instituted a midyear execution review and established a requirement for a data-driven year in review.
Several of these changes were made or were in process in 2016, which we identified in our draft report. If these management and policy changes are effectively translated into practice, we believe these actions will address the intent of the recommendation.

We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Air Force, and the Navy; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Under Secretary of Defense (Comptroller) and Chief Financial Officer; and the Director of Human Capital Initiatives. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

This report examines (1) the process the Department of Defense (DOD) uses to fund the Defense Acquisition Workforce Development Fund (DAWDF) and (2) DOD’s management and oversight of DAWDF initiatives. To conduct our work, we selected the acquisition command within each military department that had the largest number of acquisition workforce personnel in fiscal year 2015: Department of the Army, Army Materiel Command; Department of the Navy, Naval Sea Systems Command; and Department of the Air Force, Air Force Materiel Command. We also selected the Defense Logistics Agency, which had the second largest number of acquisition personnel of the other defense agencies that obligated DAWDF funds in fiscal year 2015. Collectively, the three military departments and the Defense Logistics Agency comprised 88 percent of DOD’s acquisition workforce and received the majority of DAWDF funds in fiscal year 2015.
To examine the process DOD uses to fund DAWDF, we reviewed relevant legislation as well as DOD-wide and component guidance on the use of DAWDF funding. We analyzed the amount of carryover and estimated carryover funds from fiscal years 2008 through 2018. We reviewed key documents, including DOD funding authorization documents and DOD’s annual reports to Congress on DAWDF from fiscal years 2008 through 2015. We assessed the timeliness of DOD’s funding process by comparing data on key points in the funding process, including when the DOD Comptroller deposited funds into the DAWDF account, and analyzed documentation showing when the funds were allotted and obligated from fiscal years 2008 through 2016. To evaluate DOD’s DAWDF management and oversight processes, we took several steps. We reviewed relevant legislation, DOD’s 2010 DAWDF guidance for components and its August 2016 DAWDF Desk Operating Guide, which includes information on the annual planning, proposal, review, approval, and funding processes; we also reviewed guidance issued by each of the military departments. We also assessed the Department of Defense Acquisition Workforce Strategic Plan, FY 2016 – FY 2021, which was completed in October 2016. We analyzed DAWDF future spending estimates from fiscal year 2017 through fiscal year 2022 submitted to the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics - Human Capital Initiatives (HCI) by each of the military departments and other defense agencies. In addition, we analyzed monthly DAWDF spending reports from fiscal year 2015, DAWDF midyear review documentation from fiscal years 2015 and 2016, and briefing materials from DAWDF governance meetings from fiscal years 2015 and 2016. We also interviewed officials from HCI, the offices of the Directors for Acquisition Career Management from each military department, and acquisition command officials about DOD’s long-term strategic planning efforts related to DAWDF. 
Further, we used Standards for Internal Control in the Federal Government to identify criteria regarding the types of control activities that should be in place to verify data. These criteria include top-level reviews of actual performance, reviews by management at the functional or activity level, establishment and review of performance measures and indicators, proper execution of transactions, and other steps to ensure the completeness, accuracy, and validity of reported data. To evaluate DOD’s DAWDF program execution and adherence to reporting requirements, we compared DAWDF data submitted by DOD components at the end of fiscal year 2015 with data obtained from officials responsible for executing DAWDF initiatives, HCI’s monthly reporting requirements for DAWDF, and data reported in DOD’s DAWDF fiscal year 2015 annual report to Congress. In addition, we spoke with HCI and component officials about the quality of the data. We describe instances of incomplete and inaccurate data where appropriate in our report. To obtain an understanding of how the planning, review, and implementation processes for DAWDF initiatives worked, we selected a nongeneralizable sample of 10 fiscal year 2015 DAWDF initiatives. Our sample included 3 initiatives from each department—1 from each of the three major initiative categories—that were among initiatives with the highest dollar values. In addition, we selected 1 initiative from the Defense Logistics Agency. (See table 6.) For these initiatives, we collected and reviewed relevant documentation and data and interviewed cognizant component officials. To verify whether military department recipients of DAWDF-funded tuition assistance and student loan repayment were members of the defense acquisition workforce in the year that they received the benefit, we selected a nongeneralizable sample of 276 recipients across the military departments for fiscal year 2015 from the lists of recipients provided by the military departments. 
Because the programs are managed separately by each military department, we selected student loan repayment program recipients and tuition assistance recipients from each of the military departments (see table 7). Because we used a nongeneralizable sample, our findings cannot be used to make inferences about all DAWDF recipients. To determine whether these recipients were members of the acquisition workforce, we verified that the recipients were in DataMart, DOD’s acquisition workforce database, the year that they received the benefit. To assess the reliability of DOD’s DataMart data, we (1) reviewed existing information about the data and the system that produced them, (2) reviewed the data for obvious errors in accuracy and completeness, and (3) worked with agency officials to identify any data problems. When we found discrepancies, we brought them to DOD’s attention and worked with DOD officials to correct the discrepancies. For example, in those instances where we could not verify a name in DataMart, we contacted military department officials to obtain additional information that allowed us to confirm that those recipients were a part of the acquisition workforce. We also interviewed agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. 
To address both objectives, we interviewed representatives from the following DOD organizations during our review:

Office of the Secretary of Defense
Office of the Under Secretary of Defense for Acquisition, Technology and Logistics
Office of the Under Secretary of Defense (Comptroller) and Chief Financial Officer
Defense Finance and Accounting Service
Office of the Joint Chiefs of Staff (J-4)

Department of the Army
Director, Acquisition Career Management
Army Acquisition Support Center
Research, Development and Engineering Command

Department of the Navy
Director, Acquisition Career Management
Naval Sea Systems Command

Department of the Air Force
Director, Acquisition Career Management
Air Force Materiel Command

4th Estate
Director, Acquisition Career Management

We conducted this performance audit from March 2016 to March 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Cheryl Andrew (Assistant Director), James D. Ashley, Emily Bond, Lorraine Ettaro, Meafelia P. Gusukuma, Kristine Hassinger, Katheryn Hubbell, Heather B. Miller, Roger R. Stoltz, Roxanna Sun, Alyssa Weir, Nell Williams, and Lauren Wright made key contributions to this report.
Congress established DAWDF in 2008 to provide DOD with a dedicated source of funding to help recruit and train members of the acquisition workforce. Since 2008, DOD has obligated more than $3.5 billion to meet those objectives. However, in 2012, GAO reported that DOD's ability to execute hiring and other initiatives had been hindered by delays in the DAWDF funding process, resulting in a large amount of unused funds being carried over from year to year. GAO was asked to review DOD's management of DAWDF. This report examines (1) the process DOD uses to fund DAWDF and (2) DOD's DAWDF management and oversight. GAO analyzed relevant legislation; DOD's, the military departments', and other defense agencies' guidance and processes; and DAWDF budget and initiative execution data. GAO also interviewed DOD officials and reviewed a nongeneralizable sample of 10 fiscal year 2015 DAWDF initiatives, selected based on type of initiative and dollar value.

The Department of Defense (DOD), enabled by congressional action, has improved the timeliness of the funding process for the Defense Acquisition Workforce Development Fund (DAWDF). For fiscal year 2015, DOD was authorized to transfer expired funds, which allowed it to fund DAWDF in 2 months. In contrast, for fiscal year 2014, DOD relied on the military departments and other defense agencies (referred to as components) to remit funds to the DOD Comptroller, which took 24 months to complete. As a result, hiring, training, and other initiatives were delayed. Congress also took action in 2016 to reduce the amount of funding carried over from year to year, which totaled $875 million at the beginning of fiscal year 2016, or nearly twice the amount DOD eventually obligated for that year (see figure). GAO estimates that the amount of carryover funds at the beginning of fiscal year 2018 will be reduced to about $156 million.
In the past year, DOD has taken several actions to improve its management and oversight of DAWDF, including issuing an updated acquisition workforce strategic plan and DAWDF operating guidance. For example, DOD's August 2016 DAWDF guidance required components to submit annual and 5-year spending plans and formalized the requirement to hold a midyear review to assess DAWDF execution and discuss best practices. However, GAO found that DOD components identified more than $3 billion in potential DAWDF funding requirements for fiscal years 2018 through 2022, which may exceed available funding over this period. Clearly aligning DAWDF funding with DOD's strategic plan—as GAO recommended in June 2012—may help DOD determine how to prioritize these requirements. GAO also found that components' guidance, practices, and views on whether they could use DAWDF to pay for personnel to manage the fund varied. Further, GAO found components did not have processes to verify the accuracy and completeness of data reported on DAWDF-funded initiatives. Internal control standards indicate that consistent policies and accurate data can help ensure that funds are used effectively and as intended. Without such controls, DOD could be missing opportunities to use DAWDF more effectively to improve its acquisition workforce. DOD should (1) clarify whether DAWDF funds could be used to pay for personnel to help manage the fund and (2) ensure that DOD components have processes in place to verify the accuracy and completeness of DAWDF data. DOD partially concurred with both recommendations, and has taken or plans to take actions to address them.
In light of the prominent hazing incidents previously noted, Congress, in the National Defense Authorization Act for Fiscal Year 2013, directed that each Secretary of a military department (and the Secretary of Homeland Security in the case of the Coast Guard) submit a report on hazing in each Armed Force under the jurisdiction of the Secretary. Specifically, Congress specified that each Armed Force report include, among other things, an evaluation of the hazing definition contained in an August 1997 Secretary of Defense policy memorandum on hazing, a discussion of their respective policies for preventing and responding to incidents of hazing, and a description of the methods implemented to track and report incidents of hazing in the Armed Forces, including anonymous reporting. In response, each service provided reports to Congress in May and July 2013 addressing the requirements of the Act. For example, the Navy, the Marine Corps, and the Coast Guard concurred with DOD’s 1997 definition of hazing. To address all behaviors that involve mistreatment in a single policy, the Army recommended revising the hazing definition to include bullying. The Air Force recommended that the hazing definition be revised to better align with the hazing definitions used by the states because DOD’s broader definition risked creating a perception that hazing is a larger problem in the military than it actually is according to the civilian understanding of hazing. The Coast Guard also noted in its report to Congress that it developed its policy to reflect the provisions contained in DOD’s hazing policy. With respect to the feasibility of establishing a database to track, respond to, and resolve incidents of hazing, the Army report stated that existing databases and legal tracking systems are sufficient for tracking hazing incidents. The Navy reported that although it has a tracking database in use, a comprehensive database for all services may be beneficial in combatting hazing.
The Marine Corps report stated that the Marine Corps currently uses a service-wide database for tracking and managing all allegations of hazing. The Air Force report stated that it will examine the costs and benefits of establishing a database to track, respond to, and resolve hazing incidents once a common definition and data elements are developed. The Coast Guard stated that existing systems provide adequate management of hazing incidents. Lastly, in response to the requirement to provide any recommended changes to the Uniform Code of Military Justice (UCMJ) or the Manual for Courts-Martial, the Army, Navy, Marine Corps, and Air Force reports stated that they supported inserting a provision in the Manual for Courts-Martial discussion section of Article 92 of the UCMJ that would enable incidents of hazing to be charged as violations of Article 92 (violation of or failure to obey a lawful general order or regulation). All of the armed services agreed that a separate enumerated offense of the UCMJ for hazing would be duplicative. In addition, in May 2012, the House Appropriations Committee Report accompanying the DOD Appropriations Bill, 2013, expressing concern about reports of hazing in the armed services, directed the Secretary of Defense to provide a report to the Committee on the incidence of hazing, harassment, and mistreatment of servicemembers, as well as a review of the policies to prevent and respond to alleged hazing incidents. In response to this requirement, and in addition to the service reports, in September 2013, the Under Secretary of Defense for Personnel and Readiness provided a report to Congress that summarized the armed services' reports. 
In addition, the report noted that DOD commissioned the RAND Corporation to conduct a study that would include an assessment of the 1997 definition of hazing and a subsequent recommendation on a DOD definition of hazing, as well as an evaluation of the feasibility of establishing a DOD-wide database to track hazing incidents, common data elements, and requirements to include in the revision of the 1997 policy memorandum for uniformity across the services. There is no specific article under the UCMJ that defines and prohibits hazing. However, since at least 1950, hazing has been punishable under various punitive articles included in the UCMJ, such as Article 93, Cruelty and Maltreatment. To constitute an offense under Article 93, the accused must be cruel toward, or oppress, or maltreat a victim who is subject to his or her orders. Depending on the individual facts and circumstances of the case, hazing could also be charged under other punitive articles, such as Article 128, Assault. Commanders have multiple options to respond to allegations of hazing in their units. According to DOD policy, after receiving a hazing complaint, commanders or other authorities must promptly and thoroughly investigate the allegation. If the allegation is unsubstantiated, the case is typically dropped. If the investigation substantiates the allegations, the commander must take effective and appropriate action, which may include adverse administrative action, non-judicial punishment, court-martial, or, in some cases, no action. An allegation that is initially deemed substantiated does not necessarily result in punishment for the offender because a servicemember could be found not guilty at non-judicial punishment or court-martial, among other reasons. 
While we have not reported on hazing in the military since 1992, we have issued multiple reports and made numerous recommendations related to DOD's and the Coast Guard's efforts to prevent and respond to the sometimes correlated issue of sexual assault. In particular, our March 2015 report on male servicemember victims of sexual assault found that hazing incidents may cross the line into sexual assault. We noted that service officials and male servicemembers at several military installations gave us examples of recent incidents involving both hazing and sexual assault. We found that a series of hazing incidents may escalate into a sexual assault, and service officials stated that training on hazing-type activities and their relationship to sexual assault would be particularly beneficial to males in that it might lead to increased reporting and fewer inappropriate incidents. Among other things, we recommended that DOD revise its sexual assault prevention and response training to more comprehensively and directly address how certain behavior and activities, such as hazing, can constitute sexual assault. DOD concurred with this recommendation but did not state what actions it planned to take in response. The National Defense Authorization Act for Fiscal Year 2016 subsequently included a provision requiring the Secretary of Defense, in collaboration with the Secretaries of the Military Departments, to develop a plan for prevention of and response to sexual assaults in which the victim is a male servicemember. This plan is required to include sexual assault prevention and response training that addresses the incidence of male servicemembers who are sexually assaulted and how certain behaviors and activities, such as hazing, can constitute a sexual assault. Each of the military services has issued policies to address hazing incidents among servicemembers consistent with DOD's 1997 hazing policy. 
However, DOD does not know the extent to which these policies have been implemented because the military services, with the exception of the Marine Corps, have not conducted oversight by regularly monitoring policy implementation. The Coast Guard has issued a policy to address hazing incidents, but it likewise has not conducted oversight by regularly monitoring policy implementation. In addition, the military services’ hazing policies are broad and servicemembers may not have enough information to determine whether instances of training or discipline may be considered hazing. In August 1997, the Secretary of Defense issued a memorandum on DOD’s policy that defined and provided examples of what did and did not constitute prohibited hazing conduct. DOD’s policy also specified that commanders and senior noncommissioned officers would promptly and thoroughly investigate all reports of hazing and that they would take appropriate and effective action on substantiated allegations. Further, it required the Secretaries of the Military Departments to ensure that DOD’s hazing policy was incorporated into entry-level enlisted and officer military training, as well as professional military education. Coast Guard officials told us that the Department of Homeland Security (DHS) has not issued any hazing-related policy applicable to the Coast Guard, and DHS officials confirmed that no such policy had been issued, though as we discuss further in this report, the Coast Guard issued policies that reflect DOD’s 1997 hazing policy. From 1997 through 2014, each of the military services issued or updated applicable policies to reflect DOD’s position on hazing and its requirements for addressing such incidents. The military services updated their policies for various reasons, such as implementing tracking requirements or defining and prohibiting bullying along with hazing. 
The Coast Guard also issued a policy during this timeframe that, as noted in its 2013 report to Congress on hazing, mirrors the policy developed by DOD. Each of the services made its policy punitive so that a violation of the military service regulation could also be charged under the UCMJ as a violation of Article 92, Failure to obey an order or regulation. More recently, in December 2015, DOD issued an updated hazing and bullying memorandum and policy, which among other things included an updated definition of hazing, defined bullying, and directed the secretaries of the military departments to develop instructions to comply with the memorandum. Figure 1 provides additional details on the timeline of DOD, military service, and Coast Guard hazing policies and relevant congressional actions since 1997. The Coast Guard issued a policy in 1991 that required hazing awareness training. Each of the military services' policies (1) include the same or a similar definition of hazing as the one developed by DOD, (2) require that commanders investigate reported hazing incidents, and (3) direct that all servicemembers receive training on the hazing policy. Though not required to do so, the Army, the Navy, and the Marine Corps have included in their hazing policies guidance and requirements that supplement several key provisions in DOD's policy. For example, in addition to the examples of hazing included in DOD's policy, the Army's 2014 regulation update explicitly prohibits hazing via social media or other electronic communications and makes a distinction between hazing and bullying, which it also prohibits. Further, the Army's, the Navy's, and the Marine Corps' hazing policies and guidance include requirements for commanders and senior noncommissioned officers beyond the general investigative and disciplinary responsibilities specified by DOD. Specifically, the Army's regulation requires its commanders to seek the counsel of their legal advisor when taking actions pursuant to the hazing policy. 
Navy policy on reporting hazing incidents directs all commands to submit reports of substantiated hazing incidents for tracking by the Navy's Office of Hazing Prevention. The Marine Corps' order requires commanding officers to report both substantiated and unsubstantiated hazing incidents to Marine Corps headquarters. In October 1997, the Air Force reissued the Secretary of Defense's memorandum and DOD's hazing policy with a cover letter from the Chief of Staff of the Air Force that underscored that hazing is contrary to good order and discipline, that it would not be tolerated, and that commanders and supervisors must stay engaged to ensure that hazing does not occur within the Air Force. Regarding training, the Army's, the Navy's, and the Marine Corps' policies supplement DOD's requirement that the topic of hazing be incorporated into entry-level enlisted and officer training and professional military education. Specifically, the Army's hazing regulation requires that commanders conduct hazing awareness training at least annually as part of the Army's Equal Opportunity training requirements. The Department of the Navy's instruction requires that hazing awareness training be incorporated into leadership training and commander's courses, and the Marine Corps' order includes similar requirements, adding that hazing awareness training also be included in troop information programs and in unit orientation. Because it incorporates DOD's hazing policy, the Air Force memorandum carries the training requirements specified by DOD, and an Air Education and Training Command policy requires annual hazing awareness training within Air Force training units. In September 2011, the Coast Guard updated its Discipline and Conduct Instruction to include its policy prohibiting hazing. 
As previously noted, the Coast Guard’s instruction mirrors guidance set forth in a 1997 Secretary of Defense Policy Memorandum, including DOD’s definition of hazing and examples of what does and does not constitute prohibited hazing conduct. Like DOD’s policy, the Coast Guard’s instruction also specifies that commanders who receive complaints or information about hazing must investigate and take prompt, effective action and are to incorporate hazing awareness training into the annual unit training. While similar in some respects, the Coast Guard’s hazing instruction contains guidance and requirements that go beyond the policy issued by DOD. For example, in addition to a requirement to investigate alleged incidents, the Coast Guard’s policy identifies penalties that may result from hazing that, depending on the circumstances, range from counseling to administrative discharge procedures. Further, the Coast Guard’s instruction also requires that a discussion about hazing be incorporated into existing recruit, officer, and leadership training curricula. The Army, the Navy, and the Marine Corps hazing policies state that servicemembers should report hazing complaints within the chain of command, such as to their commander. The Army’s regulation also states that servicemembers may report hazing complaints to law enforcement or the inspector general. The Coast Guard’s hazing instruction states that every military member—to include victims of or witnesses to actual or attempted hazing—must report such incidents to the appropriate level within the chain of command. Headquarters officials from each military service and the Coast Guard told us that servicemembers may report hazing complaints through existing channels, such as the commander, law enforcement, inspector general, or the equal opportunity office, among others. 
In some cases, these channels may be independent of or above the level of their commands, such as an inspector general at a higher level than their own command's inspector general. In other cases, such as an equal opportunity advisor in their own command, the reporting channel would not be independent of the command. These officials said that in most cases there are means to report hazing complaints anonymously to many of these channels, such as anonymous inspector general hotlines. In addition, because hazing can be associated with rites of passage and traditions, the Army, the Navy, and the Marine Corps—either in their policies or through supplemental guidance—permit command-authorized rituals, customs, and rites of passage that are not cruel or abusive, and require commanders to ensure that these events do not include hazing. The Army's policy states that the chain of command will ensure that traditional events are carried out in accordance with Army values and that the dignity and respect of all participants is maintained. A quick-reference legal handbook issued by the Department of the Navy provides guidance to Navy and Marine Corps commanders for conducting ceremonies and traditional events as part of its section on hazing prevention. Although the Air Force instruction on standards does not specifically address traditions and customs, according to officials in the Air Force Personnel Directorate, commanders are responsible for ensuring the appropriateness of such observances. During a site visit to Naval Base Coronado, we met with the commander of the USS Carl Vinson, who had issued local guidance tailored to a particular event or ceremony under his command. 
Prior to a recent "crossing the line" ceremony—marking the first time a sailor crosses the equator or the international dateline—the commander of the USS Carl Vinson issued formal guidelines for conducting the ceremony that designated oversight and safety responsibilities, listed permissible and non-permissible activities, and noted that participation was voluntary. Specifically, among other things, the guidance stated that servicemembers may perform a talent show, provided that it does not include sexually suggestive props, costumes, skits, or gags. The guidance also stated that servicemembers who do not wish to participate in the events may opt out and that non-participants are not permitted to observe the ceremony or any related activities. The Coast Guard's hazing instruction permits command-authorized rituals, customs, and rites of passage that are not cruel or abusive, and requires commanders to ensure that these events do not include hazing. Specifically, the Coast Guard's hazing instruction states that traditional ceremonies, including Chief's Initiations and equator, international dateline, and Arctic and Antarctic Circle crossings, are authorized, provided that commands comply with governing directives when conducting such ceremonies. The instruction further states that commanding officers shall ensure these events do not include harassment of any kind that contains character degradation, sexual overtones, bodily harm, or otherwise uncivilized behavior. In its 2013 report to Congress, DOD said that it would develop an update to the 1997 policy memorandum on hazing, to be followed by an instruction outlining its hazing policy. In 2013, the Office of the Under Secretary of Defense for Personnel and Readiness formed a hazing working group, led by the Office of Diversity Management and Equal Opportunity (ODMEO), to update DOD's hazing policy. The updated policy was issued as a memorandum in December 2015. 
The updated policy distinguishes between hazing and bullying and includes a hazing and bullying training requirement, among other things. With the issuance of the memorandum, the officials said they will begin working, through the hazing working group, on a DOD instruction on hazing that will replace the updated memorandum. DOD and the Coast Guard do not know the extent to which hazing policies have been implemented because, with the exception of policy compliance inspections conducted by the Marine Corps, DOD, the military services, and the Coast Guard do not conduct oversight by regularly monitoring the implementation of their hazing policies. Standards for Internal Control in the Federal Government states that management designs control activities that include the policies, procedures, techniques, and mechanisms that enforce management's directives to achieve an entity's objectives. Although most service policies designated implementation responsibilities, DOD, the military services, and the Coast Guard generally do not know the extent or consistency with which their policies have been implemented. With the exception of the inspections conducted by the Marine Corps, they have not instituted headquarters-level mechanisms to regularly monitor policy implementation, such as collecting local command data on hazing policy implementation or conducting site inspections to determine the extent to which the policies have been implemented. DOD's 2013 report to Congress on hazing stated that prevention of hazing is under the purview of the Under Secretary of Defense for Personnel and Readiness. However, DOD has not conducted oversight by regularly monitoring the implementation of its hazing policy by the military services, and it has not required that the military services regularly monitor the implementation of their hazing policies. 
Likewise, the Coast Guard has not required regular headquarters-level monitoring of the implementation of its hazing policy. We reviewed each of the military services' hazing policies and found that the Army, the Navy, and the Marine Corps policies specify some implementation responsibilities. Specifically, the Army's hazing regulation states that commanders and supervisors at all levels are responsible for its enforcement. However, according to an official in the Army office that developed the Army's hazing policy, there is no service-wide effort to oversee the implementation of the hazing regulation. The Navy's instruction designates commanders and supervisors as responsible for ensuring that all ceremonies and initiations in their organizations comply with the policy. The Navy's instruction also identifies the Chief of Naval Operations as being responsible for ensuring that the hazing policy is implemented. However, officials in the Navy's office that develops hazing policy said there is no service-wide effort to specifically oversee implementation of the hazing policy. The Marine Corps' order designates the Deputy Commandant for Manpower and Reserve Affairs; the Commanding General, Marine Corps Combat Development Command; and commanding officers and officers-in-charge as being responsible for policy implementation. In addition, the Marine Corps reported conducting regular inspections of command implementation of the Marine Corps hazing policy as a means of overseeing service-wide implementation of its hazing policy. The Air Force's hazing policy does not contain specific designations of responsibility. However, the Air Force policy memorandum states that commanders and supervisors must stay engaged to ensure that hazing does not occur in the Air Force, and the Air Force instruction on standards states that each airman in the chain of command is obligated to prevent hazing. 
As with the Army and Navy, the Air Force hazing policy memorandum does not include requirements to regularly monitor policy implementation across the service. The Coast Guard's hazing instruction generally identifies training centers, commanders, and Coast Guard personnel as being responsible for its implementation. Specifically, the instruction specifies that training centers are responsible for incorporating hazing awareness training into curricula administered to different levels of personnel. In addition to their investigative responsibilities, the instruction also states that commanding officers and supervisors are responsible for ensuring that they administer their units in an environment of professionalism and mutual respect that does not tolerate hazing of individuals or groups. Lastly, the instruction charges all Coast Guard personnel with the responsibility to help ensure that hazing does not occur in any form at any level and that the appropriate authorities are informed of any suspected policy violation. However, the Coast Guard reported that it has not regularly monitored hazing policy implementation. An official in the Army's Equal Opportunity office stated that although the office has responsibility for hazing policy, it has not been tasked with, and thus has not developed, a mechanism to monitor implementation of the policy. However, the official acknowledged that it could be helpful to have more information on the extent to which elements of such policies are being incorporated by its commands and at its installations. The official added that ways to do this could include collecting and reviewing data from commands on policy implementation, or conducting inspections, though the official noted that inspections would require additional resources. 
Officials in the Navy's Office of Behavioral Standards stated that the responsibility for compliance with the hazing policy is delegated to the command level, with oversight by the immediate superior in command, but our review found that the Navy did not have a mechanism to facilitate headquarters-level monitoring of hazing policy implementation. In contrast, the Marine Corps Inspector General, in coordination with the Marine Corps Office of Manpower and Reserve Affairs, conducts service-wide inspections to determine, among other things, whether the provisions of the Marine Corps' hazing policy are being implemented consistently and to ensure that commands are in compliance with the requirements of the hazing policy. Marine Corps Inspector General officials told us that the Marine Corps Inspector General has inspected command programs to address hazing since June 1997, with the initial issuance of the Marine Corps' hazing order. Specifically, the Inspector General checks command programs against a series of hazing-related items, such as whether the command includes hazing policies and procedures in its orientation and annual troop information program and whether the command has complied with hazing incident reporting requirements. These inspections do not necessarily cover all aspects of hazing policy implementation. For example, Marine Corps Inspector General officials told us they do not consistently review the content of training materials, although they do review training rosters to verify that servicemembers have received hazing training. However, the inspections provide additional information to Marine Corps headquarters officials on the implementation of hazing policy by commands. Marine Corps Manpower and Reserve Affairs officials also told us that they will begin consistently reviewing training content after they standardize the training. 
Marine Corps Inspector General officials stated that at the local level, command inspectors general complete compliance inspections every two years, and the Marine Corps headquarters inspector general assesses local command inspectors general every three years to ensure they are effectively inspecting subordinate units. The Marine Corps headquarters inspector general also inspects those commands that do not have their own inspectors general every two years. According to the Office of the Marine Corps Inspector General, commanders are required to provide the Inspector General—within 30 days of its report—a plan for addressing any findings of non-compliance with the hazing policy. Further, a Marine Corps Manpower and Reserve Affairs official said that when commands are found to be out of compliance with the policy, officials conducting the inspections will assist them in taking steps to improve their hazing prevention program. Marine Corps officials told us that in the past 24 months, 3 of 33 commands inspected by the Marine Corps Inspector General were found to have non-mission-capable hazing prevention programs. They added that not having a mission-capable program does not necessarily indicate the existence of a hazing problem in the command. A Marine Corps Inspector General official said that local inspectors general may re-inspect commands within 60 days, and no longer than the next inspection cycle, to ensure they have made changes to comply with the hazing policy. An official from the Air Force Personnel Directorate stated that oversight is inherent in the requirement to comply with policy and that any violations would be captured through the regular investigative, inspector general, and equal opportunity processes, and potentially the military justice process. The official also added that it is ultimately a commander’s responsibility to ensure policy compliance. 
However, the Air Force has not established a mechanism that monitors implementation to help ensure commanders are consistently applying the policy. Similarly, officials from the Coast Guard’s Office of Military Personnel, Policy and Standards Division stated that they have not instituted a mechanism to monitor implementation of the Coast Guard’s hazing policy. During site visits to Naval Base Coronado and Marine Corps Base Camp Pendleton, we conducted nine focus groups with enlisted servicemembers and found that they were generally aware of some of the requirements specified in DOD’s and their respective service’s policies on hazing. For example, enlisted personnel in all nine focus groups demonstrated an understanding that hazing is prohibited and generally stated that they had received hazing awareness training. In addition, during our site visit to Naval Base Coronado, servicemembers in one focus group said that prior to a recent ceremony aboard the USS Carl Vinson, the ship’s commander provided all personnel aboard with command-specific guidance and training to raise their awareness of hazing. At Marine Corps Base Camp Pendleton, we identified multiple postings of hazing policy statements throughout various commands. We are encouraged by the actions taken at these two installations and we understand that there is a general expectation for commanders and other leaders in the military services and in the Coast Guard to help ensure compliance with policy. In addition, we note that the Marine Corps has implemented a means of monitoring hazing policy implementation throughout the service. However, without regular monitoring by DOD of the implementation of its hazing policy by the services, and without regular monitoring by all of the services of the implementation of their hazing policies, DOD and the military services will be unable to effectively identify issues and, when necessary, adjust their respective approaches to addressing hazing. 
Likewise, without regular monitoring by the Coast Guard of the implementation of its hazing policy, the Coast Guard will be unable to effectively identify issues and make adjustments to its approach to addressing hazing when necessary. As previously noted, DOD and military service policies generally define hazing and provide examples of prohibited conduct. However, based on our review of these policies, meetings with officials, and focus groups with servicemembers, we found that the military services may not have provided servicemembers with sufficient information to determine whether specific conduct or activities constitute hazing. According to the Standards for Internal Control in the Federal Government, management establishes standards of conduct, which guide the directives, attitudes, and behaviors of the organization in achieving the entity's objectives. Each of the military services has defined hazing and provided training on the definition to servicemembers, but may not have provided sufficient clarification to help servicemembers make distinctions between hazing and generally accepted activities in the military, such as training and extra military instruction. To help servicemembers recognize an incident of hazing, DOD and military service policies provide a definition of hazing and include examples of rituals to illustrate various types of prohibited conduct. As noted previously, from 1997 to December 2015, DOD defined hazing as any conduct whereby a servicemember, without proper authority, causes another servicemember to suffer, or be exposed to, any activity which is, among other things, humiliating or demeaning. According to this definition, hazing includes soliciting another to perpetrate any such activity, and can be verbal or psychological in nature. In addition, consent does not eliminate the culpability of the perpetrator. 
DOD's 1997 hazing policy also listed examples such as playing abusive tricks; threatening violence or bodily harm; striking; branding; shaving; painting; or forcing or requiring the consumption of food, alcohol, drugs, or any other substance. The policy also noted that this was not an inclusive list of examples. Likewise, DOD's revised December 2015 hazing definition includes both physical and psychological acts, prohibits soliciting others to perpetrate acts of hazing, states that consent does not eliminate culpability, and gives a non-inclusive list of examples of hazing. Headquarters-level officials from each military service stated that under the hazing definition a great variety of behaviors could be perceived as hazing. For example, Army officials said the definition encompasses a wide range of possible behaviors. Likewise, Marine Corps officials said that based on the definition included in its order, any activity can be construed as hazing. At our site visits, servicemembers in each focus group, as well as groups of non-commissioned officers, noted that perception plays a significant role in deciding whether something is hazing or not—that servicemembers may believe they have been hazed because they feel demeaned, for example. To distinguish hazing from other types of activities, DOD (in its 1997 hazing memorandum) and military service policies also provide examples of things that are not considered to be hazing, including command-authorized mission or operational activities, the requisite training to prepare for such missions or operations, administrative corrective measures, extra military instruction, command-authorized physical training, and other similar activities that are authorized by the chain of command. However, as DOD noted in its 2013 report to Congress on hazing, corrective military instruction has the potential to be perceived as hazing. 
DOD noted that military training can be arduous, and stated that hazing prevention education should distinguish between extra military instruction and unlawful behavior. DOD also stated that the services should deliberately incorporate discussion of extra military instruction, including proper administration and oversight, in contrast with hazing as part of prevention education. Conversely, a superior may haze a subordinate, and servicemembers therefore need to be able to recognize when conduct by a superior crosses the line into hazing. To raise awareness of hazing, each service has developed training that provides a general overview of prohibited conduct and the potential consequences. However, the training materials we reviewed did not provide servicemembers with information to enable them to identify less obvious incidents of potential hazing, such as the inappropriate or demeaning use of otherwise generally accepted corrective measures such as extra military instruction. Nor did the training materials we reviewed include the information necessary to help servicemembers recognize an appropriate use of corrective measures. Specifically, the training materials generally focused on clear examples of hazing behaviors and did not illustrate where accepted activities such as training and discipline can cross the line into hazing. For example, the Army administers hazing awareness training for use at all levels that provides servicemembers with the definition of hazing and information about the circumstances under which hazing may occur, as well as a list of activities that are not considered hazing. However, our review found that the Army's training materials do not provide information to servicemembers about how to make consistent determinations about whether an activity should be considered hazing, such as in cases that may resemble permitted activities. 
Likewise, the Navy’s training is designed to empower sailors to recognize, intervene, and stop various behaviors such as hazing that are not aligned with the Navy’s ethos and core values. However, our review found that the training focuses on intervening when an incident of hazing has occurred and does not include information to help servicemembers discern, for example, when a permissible activity is being used in an impermissible manner. The Marine Corps’ hazing awareness training is locally developed and examples of training materials we reviewed provide an overview of the definition of hazing, examples of acts that could be considered hazing similar to those delineated in the Marine Corps order governing hazing, and a list of potential disciplinary actions that could arise from a violation of the hazing order, among other things. However, our review found that the training materials do not provide servicemembers with information on activities that are not considered hazing, such as extra military instruction, or the necessary information to differentiate between permissible and non-permissible activities. In its 2013 report to Congress on Hazing in the Armed Forces, DOD similarly identified that it can be difficult to distinguish between corrective measures and hazing and noted that the services should incorporate a discussion of extra military instruction, to include proper administration and oversight, in contrast with hazing as part of prevention education. During our site visits to Naval Base Coronado and Marine Corps Base Camp Pendleton, three groups of non-commissioned officers reinforced the suggestion that hazing definitions are not sufficiently clear to facilitate a determination of which activities and conduct constitute hazing. 
The non-commissioned officers we met with generally agreed that the broad definition of hazing prevents them from effectively doing their jobs, including disciplining servicemembers, taking corrective action, or administering extra military instruction for fear of an allegation of hazing. For example, non-commissioned officers during one site visit said that a servicemember need only say “hazing” to prompt an investigation. During another site visit, a non-commissioned officer described one hazing complaint in which the complainant alleged hazing because the complainant’s supervisor had required that the complainant work late to catch up on administrative responsibilities. Although this complaint was later found to be unsubstantiated, the allegation of hazing required that resources be devoted to investigate the complaint. In addition, some non-commissioned officers we met with stated that they were concerned that the use of extra military instruction may result in an allegation of hazing. In our focus groups, enlisted servicemembers—over the course of both site visits—provided a range of possible definitions for hazing that further demonstrated the different interpretations of what constitutes prohibited conduct. For example, some defined hazing only in physical terms, whereas others recognized that hazing can be purely verbal or psychological as well. Some servicemembers believed that an incident would not be hazing if the servicemembers consented to involvement in the activity, although DOD and service policies state that actual or implied consent to acts of hazing does not eliminate the culpability of the perpetrator. In addition, consistent with the concerns expressed by some of the non-commissioned officers that we interviewed, servicemembers in two focus groups stated that they may perceive extra military instruction as hazing. 
By contrast, unit commanders and legal officials at one site visit stated that they believe that the existing definition of hazing provides supervisors with sufficient latitude to address misconduct. Standards for Internal Control in the Federal Government states that management establishes expectations of competence for key roles, and other roles at management’s discretion. Competence is the qualification to carry out assigned responsibilities, and requires relevant knowledge, skills, and abilities. It also states that management should internally communicate the necessary quality information to achieve the entity’s objectives. Without a more comprehensive understanding among servicemembers of the conduct and activities that warrant an allegation of hazing, servicemembers may not be able to effectively distinguish, and thus effectively identify and address, prohibited conduct. The Army, the Navy, and the Marine Corps track data on reported incidents of hazing. However, the data collected and the methods used to track them vary, and the data are therefore not complete and consistent. The Air Force does not have a method of specifically tracking hazing incidents, and the data it has generated on hazing incidents are therefore also not necessarily complete or consistent with the other military services’ data. Likewise, the Coast Guard does not have a method of specifically tracking hazing incidents, and the data it has generated on hazing incidents are therefore not necessarily complete. Although it is difficult to determine the total number of actual hazing incidents, the military services’ data may not effectively characterize reported incidents of hazing because, for the time period of data we reviewed, DOD had not articulated a consistent methodology for tracking hazing incidents, such as specifying and defining common data collection requirements. 
As a result, there is an inconsistent and incomplete accounting of hazing incidents both within and across these services. Standards for Internal Control in the Federal Government states that information should be recorded and communicated to management and others who need it in a form and within a time frame that allows them to carry out their internal control and other responsibilities. In the absence of DOD-level guidance on how to track and report hazing incidents, the Army, the Navy, and the Marine Corps developed differing policies on hazing data collection and collected data on hazing incidents differently. For example, until October 2015 the Army only collected data on cases investigated by criminal investigators and military police, whereas the Navy collected data on all substantiated hazing incidents reported to commanders, and the Marine Corps collected data on both substantiated and unsubstantiated incidents. The Air Force and the Coast Guard hazing policies do not include a similar requirement to collect and track data on hazing incidents. In the absence of DOD guidance, the Air Force has taken an ad hoc approach to compiling relevant information to respond to requests for data on hazing incidents, and in the absence of Coast Guard guidance on tracking hazing incidents, the Coast Guard has also taken an ad hoc approach to compiling hazing data. For example, the Air Force queried its legal database for cases using variants of the word “hazing” to provide information on hazing incidents to Congress in 2013. Table 1 illustrates some of the differences in the services’ collection of data on hazing incidents and the total number of incidents for each service as reflected in the data for the time period we reviewed. However, due to the differences noted, data on reported incidents of hazing are not comparable across the services. 
Until September 2015, the Army’s primary tracking method for alleged hazing incidents was a spreadsheet maintained by an official within the Army’s Criminal Investigation Command, which included data on alleged hazing incidents that were recorded in a database of cases investigated by either military police or Criminal Investigation Command investigators, according to officials in the Army’s Equal Opportunity office. However, use of this database as the primary means of tracking hazing incidents limited the Army’s visibility over reported hazing incidents because it did not capture allegations handled by other Army offices, such as cases that are investigated by the chain of command or by the office of the inspector general. Data on hazing incidents through September 2015 are therefore not complete or consistent with the data from the other military services. Beginning in October 2015, the Army began to track hazing and bullying incidents in its Equal Opportunity Office’s Equal Opportunity Reporting System, but Army Equal Opportunity officials told us that they continue to have difficulties obtaining all needed information on hazing cases because of limits on their ability to obtain case information from commanders. The Navy requires that commands report all substantiated hazing incidents by sending a report to the headquarters-level Office of Behavioral Standards, where the information is entered into a spreadsheet that contains service-wide data received on reported hazing incidents. Officials in the Navy’s Office of Behavioral Standards told us that they encourage commanders to also report unsubstantiated incidents, but this is at the commanders’ discretion. The data on unsubstantiated incidents are therefore not necessarily comparable with those of services that require the collection and tracking of data on unsubstantiated incidents. 
Furthermore, as a result of the different types of data that are collected, reported numbers of hazing incidents may not be consistently represented across the services. Since May 2013, the Marine Corps has required that commanders coordinate with their local Equal Opportunity Advisor to record substantiated and unsubstantiated allegations of hazing in the Marine Corps’ Discrimination and Sexual Harassment database. While the Marine Corps’ tracking method is designed to capture all hazing allegations of which a unit commander is aware, we found that the methods used by the service to count cases, offenders, and victims have not been consistent. For example, our analyses of these data identified inconsistencies over time in the method of recording hazing cases. Specifically, we found that in some instances, a reported hazing incident involving multiple offenders or victims was counted as a separate case for each offender-victim pair. In other instances, the incident was counted as a single case even when it involved multiple offenders or victims. So, for example, an incident involving 2 alleged offenders and 4 alleged victims was counted as 8 incidents, and another with 3 alleged offenders and 3 alleged victims was counted as 9 incidents. On the other hand, we found an example of a case with 4 alleged offenders and 1 alleged victim being counted as a single case, and another with 2 alleged offenders and 2 alleged victims counted as a single case. The recording of incidents in the Marine Corps is therefore not internally consistent or consistent with that of the other military services. As previously noted, the Air Force does not require that data be collected or tracked on reported incidents of hazing, which has complicated its ability to efficiently provide data on hazing incidents when they are requested. 
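The counting inconsistency described above can be made concrete with a short sketch. This is an illustrative example only, with hypothetical records and field names, not the Marine Corps' actual database; it shows how the same three reported incidents yield very different totals depending on whether each incident is counted once or once per offender-victim pair:

```python
# Hypothetical hazing-incident records (illustrative only).
incidents = [
    {"id": 1, "offenders": 2, "victims": 4},
    {"id": 2, "offenders": 3, "victims": 3},
    {"id": 3, "offenders": 4, "victims": 1},
]

# Method A: count each reported incident once.
per_incident = len(incidents)

# Method B: count one case per offender-victim pair, as some
# records in the data we reviewed were counted.
per_pair = sum(i["offenders"] * i["victims"] for i in incidents)

print(per_incident)  # 3
print(per_pair)      # 2*4 + 3*3 + 4*1 = 21
```

Absent a DOD-wide rule specifying which method to use, the same underlying events can be reported as 3 incidents or as 21, which is one reason standardized and defined data elements matter for comparability.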
To produce the congressionally mandated report on hazing incidents reported in fiscal year 2013, the Air Force performed a keyword search of its legal database for variants of the word “hazing.” However, given that the database is used and maintained by legal personnel, query results only captured cases that came to the attention of a judge advocate. Further, while the keyword search of its database identified some incidents, the Air Force does not require that the term “hazing” or any of its variants be included in the case narrative, even if the case involved hazing. An official of the Air Force Legal Operations Agency told us that judge advocates focus on the articles of the UCMJ, and depending on the circumstances, they may or may not consider the context of hazing to be relevant information to record in the file. Given that “hazing” is not specifically delineated as an offense in the UCMJ, documented incidents of hazing in the Air Force fall under various UCMJ articles, such as Article 92 on Failure to Obey an Order or Regulation and Article 128 on Assault, and may not identify the incident as hazing. Consequently, Air Force officials stated that queries of the legal database would not necessarily capture all reported hazing cases across the Air Force. The Air Force’s data on hazing incidents are also therefore not necessarily complete or consistent with the other military services’ data. The Coast Guard also has not established a requirement to collect and track data on reported incidents of hazing. As with the Air Force, the Coast Guard’s current process of compiling data on hazing cases has complicated its ability to efficiently provide data on hazing incidents when they are requested, according to Coast Guard officials. 
For example, to produce the congressionally mandated report on hazing incidents reported in fiscal year 2013, the Coast Guard queried its database of criminal investigations as well as its database of courts-martial for variants of the term “hazing.” According to Coast Guard officials, the Coast Guard’s queries only captured cases that explicitly used a variant of the term “hazing” in the case narrative and that were investigated by the Coast Guard Investigative Service or had resulted in a court-martial. As such, the Coast Guard’s data did not capture, for example, any cases that may have been investigated by the chain of command and deemed unsubstantiated or resolved through administrative action or non-judicial punishment. The military services’ and the Coast Guard’s available information on hazing cases includes some information on the dispositions of hazing cases, which have been adjudicated in a variety of ways. Our review of the data showed that this information was not always available or updated, and the sources of the information were not always reliable. We therefore found that data on hazing case dispositions were not sufficiently reliable to report in aggregate. There was a wide range of dispositions, from cases being found unsubstantiated to courts-martial. For example, in one case, multiple servicemembers pled guilty at court-martial to hazing and assault consummated by battery after being accused of attempted penetrative sexual assault. In another hazing case involving taping to a chair, the offender was punished through non-judicial punishment with restriction, extra duty, and forfeiture of pay and the victim was given a similar but lesser punishment for consenting to the hazing. In a third case, a complainant alleged hazing after being told to work late, but an investigation determined that the allegation was unsubstantiated. 
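The limitation of the keyword-based queries described above can be sketched as follows. The case records, narratives, and matching logic here are hypothetical; the sketch only illustrates why a narrative search for variants of "hazing" undercounts when use of the term is not required in case files:

```python
import re

# Hypothetical case records (illustrative only; not actual
# Air Force or Coast Guard database contents).
cases = [
    {"id": "A", "narrative": "Charged under Article 128; hazed a new member."},
    {"id": "B", "narrative": "Article 92 violation; forced consumption ritual."},
    {"id": "C", "narrative": "Hazing allegation during unit initiation."},
]

# Match variants of "hazing" ("hazed", "hazing", ...), case-insensitively.
pattern = re.compile(r"haz\w*", re.IGNORECASE)
matches = [c["id"] for c in cases if pattern.search(c["narrative"])]

print(matches)  # ['A', 'C'] -- case B involved hazing but is never found
```

Because case B's narrative never uses a variant of the term, a keyword query cannot surface it, no matter how the search is phrased.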
ODMEO officials acknowledged that it is difficult to gauge the scope and impact of hazing given the limited information that is currently available and the inconsistent nature of the services’ data collection efforts. DOD’s updated hazing policy includes requirements that are intended to promote greater consistency in the services’ collection of data on reported hazing incidents. Specifically, the revised policy includes a requirement for the services to collect data on the number of substantiated and unsubstantiated incidents of hazing and bullying, as well as the demographics of the complainant and alleged offender in each case, a description of the incident, and if applicable, disposition of the case. ODMEO officials said they plan to provide a data collection template that will provide a standard list of data elements and additional details on the data to be collected and reported to ODMEO. DOD’s updated hazing policy will help to improve the consistency of hazing incident data collected by the services. However, it does not appear that the policy will serve to make the services’ disparate data collection efforts fully consistent because the policy does not clearly define the scope of the information or the data elements to be collected. For example, the policy requires the military services to track hazing incidents, but does not identify how to count an incident relative to the number of alleged offenders and alleged victims, and the services have counted incidents differently for tracking purposes. ODMEO officials said they are continuing to revise the data collection template, which could provide further specificity to the data collection. As a result of inconsistent and incomplete data, DOD and the Coast Guard cannot provide an accurate picture of reported hazing incidents either for the purposes of internal management or for external reporting. 
Further, without a common basis to guide the collection of data, including a standard list of data elements, decision makers in DOD, the Coast Guard, and Congress will not be able to use these data to determine the number of reported hazing incidents in DOD or the Coast Guard, or to draw conclusions from the data. To date, DOD and the Coast Guard do not know the extent of hazing in their organizations because they have not conducted an evaluation of the prevalence of hazing. In contrast to the limited data on reports of hazing incidents, information on the prevalence of hazing would help DOD and the Coast Guard to understand the extent of hazing beyond those incidents that are reported. The prevalence of hazing could be estimated based on survey responses, as DOD does in the case of sexual assault. We believe such an evaluation could form the baseline against which to measure the effectiveness of their efforts to address hazing and would enhance visibility over the prevalence of such misconduct. Standards for Internal Control in the Federal Government states that it is important to establish a baseline that can be used as criteria against which to assess progress and to help identify any issues or deficiencies that may exist. ODMEO officials said that their efforts to address hazing are in the early stages and that following the issuance of the updated hazing policy, DOD may begin to establish a baseline against which to evaluate appropriate responses to hazing. However, to date DOD and the military services have not evaluated the prevalence of hazing across their organizations in order to determine the appropriate responses. The Coast Guard also has not evaluated the prevalence of hazing within its service. Officials in each of the military services and the Coast Guard told us that reports of hazing incidents are currently the primary indicator used to gauge the incidence of hazing. 
However, as previously noted, the data that are currently collected on hazing incidents are neither complete nor consistent, and data obtained through other sources, such as surveys, suggest that hazing may be more widespread in the military services and the Coast Guard than the current numbers of reports indicate. In particular, the RAND Corporation conducted a survey on sexual assault and sexual harassment in the military for DOD in 2014, the results of which indicate that the actual number of hazing incidents may exceed the number of reported incidents tracked by the services. Based on our analysis of RAND’s survey results, we estimate that in 2014, about 11,000 male servicemembers in the Army, the Navy, the Marine Corps, and the Air Force were sexually assaulted. Of these, RAND estimated that between 24 percent and 46 percent would describe their sexual assaults as hazing (“things done to humiliate or ‘toughen up’ people prior to accepting them in a group”). Officials from DOD and the Coast Guard told us that hazing and sexual assault can occur as part of the same incident, but it will be documented and addressed based on the more egregious offense—in this case, sexual assault. We recognize that the classification of an offense is key in that it directly corresponds to the punitive actions that can be taken, but note that this further reinforces that there may be a broader incidence of hazing than the data currently collected by the military services and the Coast Guard indicate. In addition to the results of RAND’s survey, we also obtained and analyzed the results of organizational climate surveys for each of the military services and the Coast Guard for calendar year 2014 and determined that some servicemembers perceive that hazing occurs in their units despite the policies in place prohibiting hazing. 
Commanders throughout the military services and the Coast Guard are required—at designated intervals—to administer organizational climate surveys to members of their respective units. These surveys are designed to evaluate various aspects of their unit’s climate, including, among other things, sexual assault and sexual harassment, and were recently revised to include questions that solicit servicemember perspectives on the incidence of hazing. Specifically, in 2014, the Defense Equal Opportunity Management Institute—the organization responsible for administering the surveys—began including questions related to hazing and demeaning behaviors in the organizational climate surveys it administers for commands throughout the military services and the Coast Guard. Each question asked whether respondents strongly disagreed, disagreed, agreed, or strongly agreed with a statement intended to measure either hazing or demeaning behaviors. Table 2 shows the statements in the organizational climate surveys about hazing and demeaning behaviors. These surveys do not measure the prevalence of hazing. Instead, they measure the extent to which servicemembers perceive that hazing (and demeaning behaviors) occurs in their units. In addition, the organizational climate surveys were designed to be a tool for commanders to evaluate their individual units as opposed to aggregate-level analyses; thus, the data have limitations when used for aggregate-level analysis. The results of these surveys are also not generalizable, in part because the Army requires that command climate surveys be conducted more frequently than is required by the other services. As such, Army responses are overrepresented relative to the other military services when results are aggregated. Finally, survey data may reflect other errors, such as differences in how questions are interpreted. 
Since demographic information is gathered through self-selection, breaking down the results into specific subgroups may introduce additional error. Despite these limitations, analysis of these data yields insight into perceptions of hazing within and across the services. Table 3 shows the results of our analysis of data from these organizational climate surveys administered by the Defense Equal Opportunity Management Institute for servicemembers in active-duty units in the Army, Navy, Marine Corps, Air Force, and Coast Guard for 2014 on hazing and demeaning behaviors. As shown in table 3, about 12 percent of responses by enlisted servicemembers in active-duty units at the E1-E3 pay grades agreed with all three statements about hazing (noted in table 2, above) and about 18 percent of responses at these pay grades agreed with all three statements about demeaning behaviors. These percentages dropped to about 8 percent and 14 percent, respectively, at the E4-E6 levels, and continued to drop, reaching about 1 percent for hazing and 2 percent for demeaning behaviors for officers at the O4-O6 level. These responses indicate that perceptions of the extent of hazing and demeaning behaviors in the military services and in the Coast Guard may be different between those at the lower and middle enlisted ranks and those with responsibility for developing or enforcing policy. The data also show that perceptions of hazing may differ by service. For hazing, about 9 percent of Army responses agreed with all three statements; about 5 percent of Navy responses agreed with all three statements; about 11 percent of Marine Corps responses agreed with all three statements; and about 2 percent of responses in the Air Force and Coast Guard agreed with all three statements. 
Likewise, for demeaning behaviors, about 14 percent of Army responses agreed with all three statements; about 9 percent of Navy responses agreed with all three statements; about 15 percent of Marine Corps responses agreed with all three statements; and about 5 percent of responses in each of the Air Force and the Coast Guard agreed with all three statements. The results of such analyses indicate that sufficient numbers of servicemembers perceive hazing to be occurring to warrant evaluation of the prevalence of hazing. In addition, such survey data can provide valuable insights that can be used by military leaders to help form a baseline of information. For example, the services could use the results to evaluate service-wide as well as command-specific perceptions of hazing, compare how perceptions change over time, make comparisons with incident rates, and perform other analyses to identify trends and areas needing improvement. Standards for Internal Control in the Federal Government states that management analyzes identified risks to estimate their significance, which provides a basis for responding to the risks. Management estimates the significance of a risk by considering the magnitude of impact, likelihood of occurrence, and the nature of the risk. In addition, according to leading practices for program evaluations, evaluations can play a key role in planning and program management by providing feedback on both program design and execution. However, DOD and the military services have not evaluated the extent of hazing in their organizations or the magnitude of its impact or likelihood of occurrence, in order to effectively target their responses to hazing. Likewise, the Coast Guard has not evaluated the extent of hazing in the Coast Guard. Without doing so, the services may be limited in their ability to further develop and target their efforts in such a way as to have the maximum positive effect for the most efficient use of resources. 
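The aggregate measure used above, the share of responses agreeing with all three statements, can be sketched as follows. The survey responses here are hypothetical, not actual Defense Equal Opportunity Management Institute data:

```python
# Hypothetical Likert responses to the three hazing statements
# (illustrative only; each tuple is one respondent's three answers).
AGREE = {"agree", "strongly agree"}

responses = [
    ("agree", "strongly agree", "agree"),
    ("disagree", "agree", "agree"),
    ("strongly agree", "agree", "strongly agree"),
    ("disagree", "disagree", "disagree"),
]

# Count respondents who agreed or strongly agreed with all three statements.
agreed_all_three = sum(all(a in AGREE for a in r) for r in responses)
share = agreed_all_three / len(responses)

print(f"{share:.0%}")  # 2 of 4 respondents -> 50%
```

Requiring agreement with all three statements, rather than any one, makes this a conservative measure of perceived hazing, which is one way to read the percentages reported above.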
Incidents of hazing in DOD and the Coast Guard can have effects that extend beyond their victims and perpetrators, undermining unit cohesion and potentially reducing operational effectiveness as a consequence. At the service-wide level, high-profile hazing incidents can shape public perceptions, potentially making recruitment and retention more challenging. Both DOD and the Coast Guard have issued policies that prohibit hazing. However, DOD issued its earlier hazing policy in 1997, and despite several hazing incidents coming to public attention in recent years, DOD and the Coast Guard do not regularly monitor implementation of their hazing policies and do not know the extent of hazing in their organizations. Without effective monitoring by DOD, the Coast Guard, and each of the services, the offices with responsibility for addressing hazing will not know whether hazing prevention policies and training are being consistently implemented. In addition, servicemembers may not sufficiently understand how to recognize and respond to hazing incidents. As our discussions with groups of servicemembers and officials suggest, confusion may persist. Without providing additional clarification to servicemembers, perhaps through revising and tailoring training or providing more communication, servicemembers may be limited in their ability to carry out their responsibilities, such as recognizing hazing and enforcing discipline. At the same time, if they do not fully understand the hazing policies, hazing victims may not be able to recognize hazing when it occurs, including hazing by those in positions of authority. DOD’s and the Coast Guard’s efforts to reduce hazing would also benefit from a better understanding of the extent of hazing incidents. Available data do not provide a complete picture of the extent of reported hazing incidents. 
Without consistent and complete tracking of hazing incidents within and across the services, decision makers will not be able to identify areas of concern and target resources appropriately. Achieving such visibility over hazing incidents depends on better data, which will not be available without guidance specifying that the services should track all reported hazing incidents, with standardized and defined data elements that will facilitate the accurate tracking of reported hazing incidents. Concurrent with better data, DOD and the Coast Guard need to evaluate the prevalence of hazing in their organizations, since the data on reported incidents alone will not provide a picture of the full extent of hazing in the armed forces. Without such an evaluation, decision makers will not be positioned to appropriately tailor their response or to judge progress in their efforts. We recommend that the Secretary of Defense take the following seven actions: To enhance and to promote more consistent oversight of efforts within the department to address the incidence of hazing, direct the Under Secretary of Defense for Personnel and Readiness to: regularly monitor the implementation of DOD’s hazing policy by the military services; and require that the Secretaries of the military departments regularly monitor implementation of the hazing policies within each military service. To improve the ability of servicemembers to implement DOD and service hazing policies, direct the Under Secretary of Defense for Personnel and Readiness to establish a requirement for the Secretaries of the military departments to provide additional clarification to servicemembers to better inform them as to how to determine what is or is not hazing. This could take the form of revised training or additional communications to provide further guidance on hazing policies. 
To promote greater consistency in and visibility over the military services’ collection of data on reported hazing incidents and the methods used to track them, direct the Under Secretary of Defense for Personnel and Readiness, in coordination with the Secretaries of the military departments, to issue DOD-level guidance on the prevention of hazing that specifies data collection and tracking requirements, including the scope of data to be collected and maintained by the military services on reported incidents of hazing; a standard list of data elements that each service should collect on reported hazing incidents; and definitions of the data elements to be collected to help ensure that incidents are tracked consistently within and across the services. To promote greater visibility over the extent of hazing in DOD to better inform DOD and military service actions to address hazing, direct the Under Secretary of Defense for Personnel and Readiness, in collaboration with the Secretaries of the Military Departments, to evaluate the prevalence of hazing in the military services. We recommend that the Commandant of the Coast Guard take the following five actions: To enhance and to promote more consistent oversight of the Coast Guard’s efforts to address the incidence of hazing, regularly monitor hazing policy implementation. To promote greater consistency in and visibility over the Coast Guard’s collection of data on reported hazing incidents and the methods used to track them, issue guidance on the prevention of hazing that specifies data collection and tracking requirements, including the scope of the data to be collected and maintained on reported incidents of hazing; a standard list of data elements to be collected on reported hazing incidents; and definitions of the data elements to be collected to help ensure that incidents are tracked consistently within the Coast Guard. 
To promote greater visibility over the extent of hazing in the Coast Guard to better inform actions to address hazing, evaluate the prevalence of hazing in the Coast Guard. We provided a draft of this report to DOD and DHS for review and comment. Written comments from DOD and DHS are reprinted in their entirety in appendixes IV and V. DOD and DHS concurred with each of our recommendations and also provided technical comments, which we incorporated in the report as appropriate. In its written comments, DOD concurred with the seven recommendations we directed to it, and made additional comments about ways in which its newly issued December 2015 hazing policy memorandum takes actions toward our recommendations. Among other things, the new hazing policy assigns authority to the Under Secretary for Personnel and Readiness to amend or supplement DOD hazing and bullying policy, requires training on hazing and bullying for servicemembers, and requires tracking of hazing incidents, but in itself does not fully address our recommendations. Regarding our recommendation for the Under Secretary of Defense for Personnel and Readiness to regularly monitor the implementation of DOD’s hazing policy by the military services, DOD stated that its December 23, 2015 updated hazing policy memorandum provides comprehensive definitions of hazing and bullying, enterprise-wide guidance on prevention training and education, as well as reporting and tracking requirements. We agree that these are important steps to address hazing in the armed services. However, the policy does not specifically require the Under Secretary of Defense for Personnel and Readiness to regularly monitor the implementation of DOD’s hazing policy, and we continue to believe that the Under Secretary of Defense for Personnel and Readiness should monitor the implementation of DOD’s hazing policy to ensure its requirements are implemented throughout the military services. 
With respect to our recommendation to establish a requirement for the secretaries of the military departments to provide additional clarification to servicemembers to better inform them as to how to determine what is or is not hazing, DOD stated that its December 2015 updated hazing policy memorandum directs the military departments to develop training that includes descriptions of the military departments' hazing and bullying policies and differentiates between what is or is not hazing and bullying. We are encouraged by DOD’s efforts to integrate the recommendation into its policy requirements and believe the services will benefit by incorporating these requirements into their hazing prevention activities. Regarding our recommendations to issue DOD-level guidance that specifies data collection and tracking requirements for hazing incidents, including the scope of data to be collected and maintained by the military services on reported incidents of hazing and a standard list of data elements that each service should collect on reported hazing incidents, DOD stated that its December 2015 updated hazing policy memorandum provides guidance and requirements for tracking and reporting incidents of hazing and bullying. We believe that the incident data tracking requirements in this policy are an important step for DOD to improve its data collection on hazing incidents. As noted in our report, the updated policy memorandum will not fully address disparities in service-specific data collection efforts until DOD and the services clearly define the scope of information or define the data to be collected. For example, the hazing policy requires the services to track hazing incidents, but does not identify how to count an incident relative to the number of alleged offenders and alleged victims, and the services have counted incidents differently for tracking purposes. 
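The counting problem described above can be made concrete with a short sketch. The records below are hypothetical, not actual service data; the sketch simply shows how the same set of reported incidents yields different "case" totals depending on whether a case is counted per incident or per alleged offender:

```python
# Hypothetical reported-incident records; each dict is one report,
# with the number of alleged offenders and alleged victims involved.
reports = [
    {"offenders": 2, "victims": 1},
    {"offenders": 1, "victims": 3},
    {"offenders": 4, "victims": 2},
]

# Convention A: one "case" per reported incident.
cases_per_incident = len(reports)

# Convention B: one "case" per alleged offender, a convention some
# records appeared to follow.
cases_per_offender = sum(r["offenders"] for r in reports)

print(cases_per_incident)  # 3
print(cases_per_offender)  # 7
```

The same three underlying reports produce totals of 3 or 7 depending on the convention applied, which is why totals tracked under undefined or mixed conventions cannot be compared across services.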
As we note in the report, DOD plans to provide a data collection template to the services, and this could provide a vehicle for fully addressing these recommendations. In its written comments, DHS concurred with the five recommendations we directed to the Coast Guard, and made additional comments about steps the Coast Guard will take to address our recommendations. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Homeland Security, the Under Secretary of Defense for Personnel and Readiness, the Secretaries of the Army, the Navy, and the Air Force, and the Commandants of the Marine Corps and the Coast Guard. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To determine the extent to which the Department of Defense (DOD) and the Coast Guard have developed and implemented policies to address hazing incidents, we reviewed DOD’s 1997 hazing memorandum, its December 2015 updated hazing and bullying policy memorandum, and the hazing policies of each military service and the Coast Guard. We compared the policies, definitions of hazing, and oversight and training requirements to determine similarities and differences. 
To better understand the hazing policies and guidance from each service, including the Coast Guard, we interviewed knowledgeable officials from the Office of Diversity Management and Equal Opportunity in the Office of the Under Secretary of Defense for Personnel and Readiness, the Army Equal Opportunity Office, the Navy Office of Behavioral Standards, the Marine Corps Office of Manpower and Reserve Affairs, the Air Force Personnel Directorate, and the Coast Guard Office of Military Personnel, Policy and Standards Division, as well as officials in other offices listed in table 4, below. In addition, we reviewed the services’ hazing awareness training requirements included in their respective policies and analyzed the services’ training materials to determine how servicemembers are trained on hazing awareness, prevention, and response. We also interviewed or requested information from officials responsible for developing training from the Army Training and Doctrine Command, Naval Education and Training Command, Marine Corps Training and Education Command, Air Force Personnel Directorate, and the Coast Guard Fleet Forces Command and Leadership Development Center. To better understand the reporting and response mechanisms employed by DOD and the Coast Guard, as well as the approaches in each service for responding to allegations of hazing as well as applications of the Uniform Code of Military Justice (UCMJ), court-martial, non-judicial punishment, and administrative action, we reviewed relevant policies and interviewed cognizant officials from the Army Office of the Provost Marshal General and Criminal Investigation Command, Naval Criminal Investigative Service, Marine Corps Judge Advocate Division and Inspector General, Air Force Office of Special Investigations, Security Forces Directorate, Legal Operations Agency, and Inspector General, and the Coast Guard Office of the Judge Advocate General and the Coast Guard Investigative Service. 
To better understand how policy and training are implemented at installations, and to obtain servicemember perspectives on hazing and hazing awareness training, we conducted site visits to Naval Base Coronado, California, and Marine Corps Base Camp Pendleton, California. We selected these sites based upon reported hazing data, media reports of hazing, data on male victims of sexual assault, and geographic proximity to each other. During these site visits we conducted nine focus groups with enlisted servicemembers in grades E-3 through E-5 that included a self-administered pen and paper survey of all participants. We selected these grades because available data on reported hazing incidents indicated that these grades were most likely to be victims or perpetrators of a hazing incident. In addition, we met with groups of noncommissioned officers (grades E-6 through E-9), commanding officers, inspectors general, equal opportunity advisors, staff judge advocates, and chaplains to obtain perspectives of servicemembers and other officials who may be involved in addressing hazing. For further information about the focus group and survey methodology, see appendix III. We compared the extent to which DOD and each armed service has oversight mechanisms in place to monitor the implementation of hazing policies to the Standards for Internal Control in the Federal Government criteria on control activities, which include the policies, procedures, techniques, and mechanisms that enforce management’s directives to achieve an entity’s objectives.
We also compared the extent to which guidance to servicemembers provides enough clarity to determine when hazing has occurred to the Standards for Internal Control in the Federal Government criteria that state that management establishes standards of conduct that guide the directives, attitudes, and behaviors of the organization in achieving the entity’s objectives, as well as Standards for Internal Control in the Federal Government criteria that state that management establishes expectations of competence for key roles, and other roles at management’s discretion and that management should internally communicate the necessary quality information to achieve the entity’s objectives. To determine the extent to which DOD and the Coast Guard have visibility over hazing incidents involving servicemembers, we reviewed the DOD and Coast Guard hazing policies noted above to identify any tracking requirements. To determine the number of reported hazing incidents and the nature of these incidents, we reviewed available data on reported hazing allegations from each service covering a two-year time period. The Army, Navy, Air Force, and Coast Guard data covered the period from December 2012 through December 2014. The Marine Corps database for tracking hazing incidents began tracking in May 2013, so we analyzed Marine Corps data from May 2013 through December 2014. We reviewed the methods each service used to track hazing incident data by interviewing officials from the Army Equal Opportunity Office and the Army Criminal Investigation Command; the Navy Office of Behavioral Standards; the Marine Corps Office of Manpower and Reserve Affairs; the Air Force Personnel Directorate and Air Force Legal Operations Agency; and the Coast Guard Office of Military Personnel, Policy and Standards Division and the Coast Guard Investigative Service. 
We found that the Army and Navy data were sufficiently reliable to report the number of hazing cases, offenders, and victims, as well as demographic and rank data on offenders and victims. However, due to limitations in the methods of collection, the data reported do not necessarily represent the full universe of reported hazing incidents in the Army and Navy. We found that the Marine Corps data were not sufficiently reliable to report accurate information on the total number of cases, offenders, and victims, or demographic and rank data. The Marine Corps did not record the number of hazing cases in an internally consistent manner, resulting in duplicate records for cases, offenders, and victims, and no consistent means for correcting for the duplication. We found that the Air Force data were sufficiently reliable to report the number of cases and offenders, but not to report demographic information for the offenders or to report any information on the victims, because the Air Force did not consistently track and report demographic and rank information. We also found that the Coast Guard data were sufficiently reliable to report the number of cases, offenders, and victims, but not to report demographic and rank information, because the Coast Guard did not consistently track and report that information. In addition, due to limitations of the collection methods, the data reported do not necessarily represent the full universe of reported hazing incidents in the Air Force and Coast Guard. We found that hazing data in all services were not sufficiently reliable to report information on the disposition of hazing cases because the services did not consistently track and report this information and because the source data for these dispositions were not reliable.
We also compared the services’ methods of data collection with Standards for Internal Control in the Federal Government criteria stating that information should be recorded and communicated to management and others who need it in a form and within a time frame that allows them to carry out their internal control and other responsibilities. We also reviewed the 2014 RAND Corporation military workplace study commissioned by the Office of the Secretary of Defense and analyzed data reported on that study on sexual assault and hazing. We also interviewed officials of the Defense Equal Opportunity Management Institute about command climate surveys and analyzed data obtained from responses to command climate survey questions relating to hazing and demeaning behaviors. We obtained survey data based on three hazing questions and three demeaning behavior questions that were asked of all survey respondents during calendar year 2014; in addition, we obtained survey data for demographic and administrative variables that we used to analyze the data across all of the command climate surveys we obtained. The data we analyzed included responses by active-duty servicemembers in all five armed services—Army, Navy, Marine Corps, Air Force, and Coast Guard—during calendar year 2014. We summarized the results for active-duty servicemembers by rank, gender, race/ethnicity, and by service across all of the command climate survey responses that were collected for the time period. Because of the nature of the process used to administer and to collect the results of the command climate surveys, the analysis cannot be generalized to the entire population of active servicemembers across the armed forces or for each service. For example, it is not possible to discern whether every unit administered the command climate survey, nor whether any particular unit administered the survey multiple times within the time period from which we obtained data. 
Therefore, the analyses we present using the command climate survey data are not intended to reflect precise information about the prevalence of perceptions related to hazing, but rather to demonstrate how the survey data might be used if the methods allowed the ability to generalize to all servicemembers. We compared the extent to which DOD and the Coast Guard have evaluated the prevalence of hazing with Standards for Internal Control in the Federal Government criteria on evaluating risks, and with leading practices for program evaluations. In addition to these organizations, we also contacted the RAND Corporation. We conducted this performance audit from April 2015 to February 2016 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Not all of the military services or the Coast Guard track data on reported hazing. Further, the data that are collected and the methods used to track them vary by service because neither the Department of Defense (DOD) nor the Coast Guard has articulated a consistent methodology. As a result of inconsistent and incomplete data, any data tracked and reported by the armed services currently cannot be used to provide a complete and accurate picture of hazing in the armed services, and the data from one service cannot be compared to that of another service. To the extent possible based on the availability of data, we obtained and reviewed data on reported hazing cases from each military service covering the period December 2012 to December 2014.
For the Air Force and Coast Guard, neither of which specifically tracked hazing cases, we obtained information derived from legal and criminal investigative databases, which were the methods these services used to report hazing information to congressional committees in 2013. The following information is derived from our analyses of these data. The Army specifies the use of its Equal Opportunity Reporting System database to track hazing cases. However, the Army only began using its equal opportunity database to track hazing cases in October 2015. Previously, hazing cases were tracked by Army Criminal Investigation Command. Criminal Investigation Command tracked cases using its database of cases investigated by Criminal Investigation Command and by military police, so these data necessarily exclude cases that were not investigated by Criminal Investigation Command or military police. Figure 2 shows our analysis of the Army’s hazing cases from December 2012 through December 2014. NOTE: Data are from December 2012 through December 2014. These data only include allegations investigated by military police or criminal investigators. We excluded from the above data one case with one alleged offender and an unknown number of alleged victims due to the absence of a precise number of victims. Enlisted grades begin at E1 (lowest grade), and officer grades begin at O1. As shown in Figure 2, during this time period the Army identified a total of 17 alleged cases involving 93 alleged offenders and 47 alleged victims. The majority of alleged offenders and alleged victims were either in grades E4-E6 or E1-E3, and more alleged offenders were E4-E6 than E1-E3, while more alleged victims were E1-E3 than E4-E6. A majority of alleged offenders and alleged victims were male. Most alleged victims and alleged offenders were white, non-Hispanic, but the race and ethnicity information for some alleged offenders and alleged victims was unknown.
The Navy requires commanders to report substantiated hazing cases to the Office of Behavioral Standards, which then tracks the cases in a spreadsheet. Although Navy policy only requires substantiated cases to be reported, officials in the Navy’s Office of Behavioral Standards told us they encourage commanders to report both unsubstantiated and substantiated cases, and the data include both, to the extent reported. Figure 3 shows our analysis of this data from December 2012 through December 2014. NOTE: Data are from December 2012 through December 2014. These data include some unsubstantiated cases; however, Navy policy only requires substantiated cases to be reported, so the data may not include all unsubstantiated cases. Ten cases are excluded from the above data due to the inclusion of an unknown number of alleged offenders or alleged victims. These cases included 5 known alleged offenders and 7 known alleged victims. From FY13 to FY14, the Navy switched its method of recording race and ethnicity. In FY13, the Navy included “Hispanic” as one category among other racial/ethnic categories; beginning in FY14, it began tracking race and ethnicity separately. Beginning in FY14 the Navy data record some cases where it was unknown whether the alleged victim or offender was Hispanic—82 alleged offenders and 65 alleged victims of unknown ethnicity in total. Therefore, all racial/ethnic categories not specifically marked as Hispanic could include Hispanics in the data above. Enlisted grades begin at E1 (lowest grade), and officer grades begin at O1. As shown in Figure 3, during this time period the Navy identified 63 alleged hazing cases, involving 127 alleged offenders and 97 alleged victims. The majority of alleged offenders were in grades E4-E6, while the majority of alleged victims were either E1-E3 or E4-E6. Alleged offenders were overwhelmingly male, while alleged victims included a significant minority of women. 
In terms of race and ethnicity, the greatest single group of both alleged offenders and alleged victims was white, non-Hispanic. The Marine Corps uses its Discrimination and Sexual Harassment database to track alleged hazing incidents, both substantiated and unsubstantiated. We obtained and analyzed data from May 2013, when the Marine Corps began using this tracking method, through December 2014. We found internal inconsistencies in the Marine Corps’ tracking data, and for that reason found that the data were not reliable enough to report detailed information about these alleged hazing cases. Specifically, from May 2013 through December 2014, the Marine Corps recorded 303 alleged hazing cases for which there were 390 alleged victims and 437 alleged offenders. However, our analyses of these data identified inconsistencies in the methods used to aggregate categories of information collected on reported incidents of hazing. For example, we found that in some instances, a reported hazing case involving two alleged offenders and one alleged victim was counted as a single case, whereas other instances that involved the same number of individuals were classified as two cases—one for each alleged offender. Similarly, we identified single reports of hazing that involved multiple alleged victims and were classified as one case that, at other times, were documented as separate cases relative to the number of alleged victims involved. We determined that the Marine Corps’ data, for the time period requested, were overstated by at least 100 reported hazing cases, at least 50 alleged offenders, and at least 90 alleged victims. The Air Force has not established a system specifically to track hazing cases.
In its July 2013 report to congressional committees, Hazing in the Armed Forces, the Air Force stated that hazing incidents in the service are best tracked using its legal database by querying the text of the cases for variants of the word “hazing.” Accordingly, we obtained information on hazing cases from December 2012 through December 2014 from a search performed in this database for variants of the word “hazing,” the results of which were provided to us by the Air Force Legal Operations Agency. This data showed 4 cases with 17 alleged offenders that were reported from December 2012 through December 2014. However, these data do not present a complete picture of hazing cases in the Air Force, as they do not necessarily capture any cases that did not come to the attention of a staff judge advocate. The case files did not generally capture race or ethnicity data for alleged offenders and alleged victims; did not systematically capture gender of alleged offenders and alleged victims; generally did not capture the rank of alleged victims; and did not systematically capture the number of alleged victims. Therefore, we are not reporting rank or demographic data. The Coast Guard has not established a system specifically to track hazing cases. In its 2013 report to congressional committees, Hazing in the Coast Guard, the Coast Guard reported hazing incidents derived from legal and criminal investigative sources. Accordingly, to obtain data on Coast Guard hazing incidents, we used the Coast Guard’s Quarterly Good Order and Discipline Reports, which contain a summary of disciplinary and administrative actions taken against Coast Guard military members or civilian employees, as well as Coast Guard Investigative Service case files. For the Good Order and Discipline reports covering disciplinary and administrative actions taken between October 2012 and March 2015, only one case explicitly mentioned hazing. 
However, these reports only include brief descriptions for certain types of cases, such as courts-martial, and do not include any details of the alleged offense and punishment for cases resulting in non-judicial punishment. In response to our request to identify Coast Guard Investigative Service cases using variants of the word “hazing” from December 2012 through December 2014, the Coast Guard identified six cases involving 14 known alleged victims and 20 known alleged offenders (the number of both offenders and victims in one case were unknown). These case files did not consistently track and report the race, ethnicity, rank, and gender of the offenders and victims; therefore we are not reporting rank or demographic data. Due to the limitations of these methods of capturing reported hazing cases, these data do not necessarily present a complete picture of the number of reported hazing incidents in the Coast Guard. In addition, Coast Guard officials told us that conducting this search for case file information was time- and resource-consuming, and even with this allocation of time and resources the results of the judicial and investigative information sources may not yield complete information on reported hazing cases in the Coast Guard. To obtain servicemembers’ perspectives related to each of our objectives, we conducted nine focus group meetings with active-duty servicemembers in the grades E3-E5. Four of these meetings were held at Marine Corps Base Camp Pendleton, California, and five meetings were held at U.S. Naval Base Coronado, California. We selected these sites based upon reported hazing data, media reports of hazing, data on male victims of sexual assault, and geographic proximity to each other. To select specific servicemembers to participate in our focus groups, we requested lists of servicemembers who were stationed at each location and likely available to participate at the time of our visit. 
The documentation included information about their rank, gender, and occupation.

(Navy) Petty Officer Taylor is on his first deployment to the South Pacific. His fellow shipmates have told him about an upcoming ceremony to celebrate those crossing the equator for the first time. The day of the equator crossing, all shipmates (“shellbacks and wogs”) dress up in costume. The wogs, or those who are newly crossing the equator, rotate through different stations, including tug-of-war and an obstacle course. One of the shellbacks, or those who have already crossed the line, is dressed up as King Neptune and asks the wogs to kiss his hands and feet. In addition, all of the “wogs” are required to take a shot of tequila. After completing all the stations and crossing the equator, Petty Officer Taylor is officially a shellback.

(Marine Corps) Lance Corporal Jones recently received a promotion to Corporal. To congratulate him for the promotion, members of his unit take him to the barracks and begin hitting him at the spot of his new rank.

(Navy) After dinner, Petty Officer Sanchez talks with fellow sailors about playing some pranks on other members of the ship. They see Seaman Williams walking down the hall and bring him into a storage closet. There, they tape his arms and legs to a chair and leave him alone in the closet to see if he can escape.

(Marine Corps) After dinner, Sergeant Sanchez talks with fellow marines about playing some pranks on other members of the platoon. They see Corporal Williams walking down the hall and bring him into a storage closet. There, they tape his arms and legs to a chair and leave him alone in the closet to see if he can escape.

These scenarios, providing examples of hazing, along with the following set of questions, were the basis for the discussion with participants and the context for responding to the survey questions that were administered following the discussion.

Would you consider this example hazing?
Do activities like these two examples sound like they could ever happen in the Marine Corps/Navy?

What about these activities is good? What about these activities might be harmful?

Do you think activities like these are important for a Marine/Sailor to become a part of the group or the unit?

Now that we’ve talked about hazing, what kind of training about hazing have you received in the Marine Corps/Navy?

Are there any other topics about hazing that we haven’t covered?

To obtain additional perspectives on hazing, particularly regarding sensitive information about personal experience with hazing, servicemembers participating in each focus group completed a survey following the discussion. The survey consisted of a self-administered pen and paper questionnaire that was provided to each focus group participant in a blank manila envelope without any identifying information. The moderator provided the following verbal instructions: I’d like you to take a few minutes to complete this survey before we finish. Please do not put your name or any identifying information on it. Take it out of the envelope, take your time and complete the questions, and please place it back in the envelope. When you are done, you can leave it with me/put it on the chair and then leave.

Because we did not select participants using a statistically representative sampling method, the information provided from the surveys is nongeneralizable and therefore cannot be projected across the Department of Defense, a service, or any single installation we visited. The questions and instructions are shown below with the results for the closed-ended questions.

Survey of Navy and Marine Corps Focus Group Participants

Instructions: Please complete the entire survey below. Do not include your name or other identifying information. Once finished, please place the completed survey back in the envelope and return the envelope.

1. Have you experienced hazing in the Navy/Marine Corps?
Response        Navy    Marine Corps
Yes              14       4
No               36       9
I’m not sure      5       2
Total            55      15

2. (If “Yes” or “I’m not sure” for 1) What happened? (Please briefly describe the event(s))

3. In the group discussion we talked about two examples that some would consider hazing. If these examples happened in your unit, would it be OK with the unit leadership? (check one for each row)
Crossing the Line (Navy)/Pinning (Marine Corps)
I don’t know

4. Some activities that are traditions in the Marine Corps/Navy are now considered hazing. Is it important to continue any of these activities? Please explain why or why not?

5. Have you received hazing prevention training in the Navy/Marine Corps?

6. Is there anything else you want us to know about hazing in the Navy/Marine Corps?

In addition to the contact named above, key contributors to this report were Kimberly Mayo, Assistant Director; Tracy Barnes; Cynthia Grant; Simon Hirschfeld; Emily Hutz; Ronald La Due Lake; Alexander Ray; Christine San; Monica Savoy; Amie Lesser; Spencer Tacktill; and Erik Wilkins-McKee.
Initiations and rites of passage can instill esprit de corps and loyalty and are included in many traditions throughout DOD and the Coast Guard. However, at times these, and more ad hoc activities, have included cruel or abusive behavior that can undermine unit cohesion and operational effectiveness. Congress included a provision in statute for GAO to report on DOD, including each of the military services, and Coast Guard policies to prevent, and efforts to track, incidents of hazing. This report addresses the extent to which DOD and the Coast Guard, which falls under the Department of Homeland Security (DHS), have (1) developed and implemented policies to address incidents of hazing, and (2) visibility over hazing incidents involving servicemembers. GAO reviewed hazing policies; assessed data on hazing incidents and requirements for and methods used to track them; assessed the results of organizational climate surveys that included questions on hazing; conducted focus groups with servicemembers during site visits to two installations selected based on available hazing and sexual assault data, among other factors; and interviewed cognizant officials. The Department of Defense (DOD), including each of the military services, and the Coast Guard have issued policies to address hazing, but generally do not know the extent to which their policies have been implemented. The military services' and Coast Guard's policies define hazing similarly to DOD and include servicemember training requirements. The military service and Coast Guard policies also contain guidance, such as responsibilities for policy implementation and direction on avoiding hazing in service customs and traditions, beyond what is included in DOD's policy. However, DOD and the Coast Guard generally do not know the extent to which their policies have been implemented because most of the services and the Coast Guard have not conducted oversight through regular monitoring of policy implementation. 
The Marine Corps conducts inspections of command hazing policy on issues such as providing servicemembers with information on the hazing policy and complying with hazing incident reporting requirements. While these inspections provide Marine Corps headquarters officials with some information they can use to conduct oversight of hazing policy implementation, they do not necessarily cover all aspects of hazing policy implementation. Without routinely monitoring policy implementation, DOD, the Coast Guard, and the military services may not have the accountability needed to help ensure efforts to address hazing are implemented consistently. DOD and the Coast Guard have limited visibility over hazing incidents involving servicemembers. Specifically, the Army, the Navy, and the Marine Corps track data on reported incidents of hazing, but the data are not complete and consistent due to varying tracking methods that do not always include all reported incidents. For example, until October 2015, the Army only tracked cases investigated by criminal investigators or military police, while the Navy required reports on substantiated hazing cases and the Marine Corps required reports on both substantiated and unsubstantiated cases. The Air Force and Coast Guard do not require the collection of hazing incident data, and instead have taken an ad hoc approach to compiling relevant information to respond to requests for such data. In the absence of guidance on hazing data collection, DOD and the Coast Guard do not have an accurate picture of reported hazing incidents across the services. In addition, DOD and the Coast Guard have not evaluated the prevalence of hazing. An evaluation of prevalence would provide information on the extent of hazing beyond the limited data on reported incidents, and could be estimated based on survey responses, as DOD does in the case of sexual assault. 
Service officials said that currently, reported hazing incidents are the primary indicator of the extent of hazing. However, data obtained through other sources suggest that hazing may be more widespread in DOD and the Coast Guard than the reported numbers indicate. For example, GAO analysis of organizational climate survey results from 2014 for the military services and the Coast Guard found that about 12 percent of respondents in the junior enlisted ranks indicated their belief that hazing incidents occur in their units. Although these results do not measure the prevalence of hazing incidents, they yield insights into servicemember perceptions of hazing and suggest that an evaluation of the extent of hazing is warranted. Without evaluating the prevalence of hazing within their organizations, DOD and the Coast Guard will be limited in their ability to effectively target their efforts to address hazing. GAO is making 12 recommendations, among them that DOD and the Coast Guard regularly monitor policy implementation, issue guidance on the collection and tracking of hazing incident data, and evaluate the prevalence of hazing. DOD and DHS concurred with all of GAO's recommendations and have begun taking actions to address them.
As previously noted, the capital surplus account is adjusted to a level equal to the paid-in capital account. This adjustment, however, is made at the end of the calendar year. During the year, another capital account, undistributed net income, reflects the amount of net earnings for the current year that have not been distributed. Each week, the sum of the balance in the capital surplus account and undistributed net income is compared with the paid-in capital account. If the amount of the capital surplus account and undistributed net income combined is greater than capital paid-in, the excess is paid to the Treasury a week later. This payment in turn reduces the undistributed net income account. At the end of the calendar year, the balance in the undistributed net income is transferred to the capital surplus account up to the amount of paid-in capital. Any remaining balance is distributed to the Treasury. Essentially, the capital surplus account represents earnings retained from prior years, and the undistributed net income represents earnings retained from the current year. Both the capital surplus account and the undistributed net income account provide a cushion against losses. Any Reserve Bank losses first reduce the undistributed net income account. The capital surplus account is then reduced if the undistributed net income account is not sufficient to absorb the loss. Transfers of the Reserve Banks’ net earnings to the Treasury are classified as federal receipts. Federal receipts consist mostly of individual and corporate income taxes and social insurance taxes but also include excise taxes, compulsory user charges, customs duties, court fines, certain license fees, and the Federal Reserve System’s deposit of earnings. The Treasury securities held by Reserve Banks are considered part of the federal debt held by the public. Federal debt consists of securities issued by the Treasury and a relatively small amount issued by a limited number of federal agencies. 
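The weekly comparison and the loss-absorption order described above amount to a small algorithm. The sketch below is an illustrative model with hypothetical account balances, not the Reserve Banks' actual accounting system; for simplicity it folds the Treasury payment, which in practice occurs a week later, into the same step.

```python
def weekly_settlement(paid_in, surplus, undistributed, net_income):
    """Illustrative model of the weekly rule described above: earnings
    accrue to undistributed net income; losses draw first on undistributed
    net income and then on the capital surplus account; any excess of
    surplus plus undistributed net income over paid-in capital is paid to
    the Treasury. (In practice the payment occurs a week later.)"""
    undistributed += net_income
    if undistributed < 0:
        # The loss exceeded undistributed net income; the remainder is
        # drawn from the capital surplus account.
        surplus += undistributed
        undistributed = 0.0
    treasury_payment = max(0.0, surplus + undistributed - paid_in)
    undistributed -= treasury_payment
    return surplus, undistributed, treasury_payment

# A profitable week: the excess above paid-in capital goes to the Treasury.
print(weekly_settlement(paid_in=100.0, surplus=100.0,
                        undistributed=5.0, net_income=10.0))   # (100.0, 0.0, 15.0)

# A loss week: undistributed net income absorbs part of the loss,
# and the capital surplus account absorbs the rest.
print(weekly_settlement(paid_in=100.0, surplus=100.0,
                        undistributed=5.0, net_income=-12.0))  # (93.0, 0.0, 0.0)
```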
Federal debt is categorized into debt held by the public and debt held by government accounts. Debt held by the public is that part of the gross federal debt held outside of federal budget accounts, and this includes any federal debt held by individuals, corporations, state or local governments, the Federal Reserve System, and foreign governments and central banks. The Consolidated Appropriations Act of 2000 directed the Reserve Banks to transfer to the Treasury additional surplus funds of $3.752 billion during fiscal year 2000. The Federal Reserve System transferred the funds on May 10, 2000. Under the act, the Reserve Banks were not permitted to replenish their accounts during fiscal year 2000. Once the Reserve Banks were legally permitted to replenish the accounts, they did. By December 31, 2000, the capital surplus account was replenished for 11 of the 12 Reserve Banks. The Federal Reserve System maintains a capital surplus account to provide additional capital to cushion against potential losses. However, Federal Reserve Board officials have noted that it can be argued that a central bank, including the Federal Reserve System, may not need to hold capital to absorb losses, mainly because a central bank can create additional domestic currency to meet any obligation denominated in that currency. Federal Reserve Board officials acknowledged that determining the appropriate level of a central bank’s capital account is difficult. The Federal Reserve Board’s policy of maintaining the capital surplus account at the same level as that of the paid-in capital account has resulted in the capital surplus account growing from $4.5 billion in 1996 to $7.3 billion in 2001. The Federal Reserve System maintains the capital surplus account primarily as a cushion against losses. 
The Financial Accounting Manual for Federal Reserve Banks states that the primary purpose of the Federal Reserve capital surplus account is to provide capital to supplement paid-in capital for use in the event of loss. According to Board officials, the capital surplus reduces the probability that total Reserve Bank capital would be wiped out by a loss as a result of dollar appreciation, sales of Treasury securities below par value, losses associated with discount window lending, or any other losses. Individual Reserve Banks use the capital surplus account when they experience losses greater than the amount in their undistributed net income account. Federal Reserve Board officials also noted that it could be argued that maintaining capital, including the surplus account, provides an assurance of a central bank’s strength and stability to investors and foreign holders of U.S. currency. Currently, a significant portion of U.S. currency is held abroad. According to one estimate published by the Federal Reserve Board, $279.5 billion in U.S. currency was held overseas as of the fourth quarter of 2001. The total amount of Federal Reserve notes outstanding was $611.8 billion as of December 31, 2001. Federal Reserve Board officials stated that the demand for U.S. currency conceivably could fall if a large loss wiped out the Federal Reserve’s capital accounts, giving a misimpression that the Federal Reserve was insolvent. “In the abstract, a central bank with the nation’s currency franchise does not need to hold capital. In the private sector, a firm’s capital helps to protect creditors from credit losses. 
Creditors of central banks however are at no risk of a loss because the central bank can always create additional currency to meet any obligation denominated in that currency.” Moreover, an official representing one of the four foreign central banks that we contacted agreed that the concept of solvency was essentially meaningless for a central bank in its role as a creator of currency, and that a massive loss could make a central bank technically insolvent, but that there would be no impairment of its ability to create and manage assets and issue currency. However, Federal Reserve Board officials told us that, because the maintenance of the capital surplus account is “costless” to the taxpayer and to the Treasury, the argument that a central bank does not need capital is not a rationale for reducing the surplus to any particular level, including zero. We will discuss the possible effects of a change in the surplus account on the federal budget and the economy later in this report. Federal Reserve Board officials told us that determining the appropriate level for a central bank’s capital account is difficult. The growth in the Federal Reserve System’s capital surplus account can be attributed to growth in the banking system together with the Federal Reserve Board policy of equating the amount in the capital surplus account with paid-in capital. The Federal Reserve System surplus has grown along with the paid-in capital account, which itself grew as a result of expansion of banking industry capital during the late 1990s. In 1996, the capital of all member banks (state member banks and national banks) totaled almost $157 billion; by December 2001, it was $267 billion. Because the Federal Reserve Act requires member banks to subscribe to stock equaling 6 percent of their capital and surplus, half of which is to be paid in, the Reserve Banks’ capital paid-in accounts have increased along with member bank capital and surplus. 
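The relationship between member bank capital and paid-in capital described above is simple arithmetic: a 6 percent stock subscription, half of it paid in, makes paid-in capital roughly 3 percent of member bank capital and surplus. A rough check against the figures cited above; the function and the comparison are illustrative only, since the statutory base includes member bank surplus as well as capital and the accounts move with timing differences.

```python
def paid_in_capital(member_capital_and_surplus):
    """Paid-in capital implied by the Federal Reserve Act formula: a stock
    subscription of 6 percent of member bank capital and surplus, half of
    which is paid in."""
    subscription = 0.06 * member_capital_and_surplus
    return subscription / 2

# Figures in billions of dollars; the actual accounts differ somewhat
# because the base includes member bank surplus as well as capital, and
# because of timing differences.
print(round(paid_in_capital(157), 2))  # roughly the $4.5 billion 1996 surplus
print(round(paid_in_capital(267), 2))  # versus $7.3 billion at the end of 2001
```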
As a result of the Federal Reserve Board’s policy, the Federal Reserve capital surplus account grew correspondingly. The level of the Federal Reserve capital surplus account is not based on any quantitative assessment of the potential financial risk associated with the Federal Reserve’s assets or liabilities. According to a Federal Reserve Board official, the current policy of setting the levels of surplus through a formula reduces the potential for any misperception that the surplus is manipulated to serve some ulterior purpose. In response to our 1996 recommendation that the Federal Reserve Board review its policies regarding the surplus account, the Federal Reserve Board conducted an internal study that did not lead to major changes in policy. Three of the four central banks that we contacted had capital accounts that included ownership shares as well as “surplus” accounts with functions similar to the Federal Reserve System capital surplus account (see table 1). We found the levels of these accounts varied in size and, with the exception of the Bank of England, officials from the four central banks explained that the levels were established by law. The Bundesbank and the ECB had also established additional “provision” accounts that were not part of the subscribed capital or surplus accounts, but that served as an additional cushion against losses. The provision accounts were set up primarily to offset the central banks’ exposure to foreign exchange rate and interest rate risk and their levels are evaluated on an annual basis. In contrast to these central banks, the Bank of Canada does not require an additional account to buffer the impact of foreign exchange rate and interest rate movements on their assets because it does not hold a significant amount of assets denominated in currencies other than the Canadian dollar on its balance sheet. 
Similarly, its domestic asset holdings of Canadian government securities are diversified across maturities, approximately mirroring the issuance of Canadian government securities. It should be noted that accounts at the four central banks that we contacted are not fully comparable with the Federal Reserve System capital surplus account because of differences in accounting practices. The Bundesbank and the ECB use accounting methods that differ from the Federal Reserve’s to cushion against foreign currency risk and have set up “revaluation accounts” representing valuation reserves arising from unrealized gains on assets and liabilities, including foreign currency. The levels of these accounts vary automatically in accordance with regular market valuations of the assets held compared to their original cost. The Bundesbank, with foreign exchange risk especially in mind, has established a “provisions” account. When determining how much to put into this account, the Bundesbank evaluates its exposure to foreign exchange risk and interest rate risk, to the extent that these risks are not already covered by the “revaluation account.” In addition to the “provisions” account, the Bundesbank also has a “statutory reserves” account that serves as an additional financial buffer against risk. This reserve account may be used only to offset falls in value and to cover other losses. It is derived from the net profit each year and has a maximum level established by legislation. The levels of capital that the central banks maintain are not directly comparable with the Federal Reserve’s capital (including the surplus account) for several reasons. First, as previously described, there are differences in the accounting systems among the central banks. The Bundesbank and the ECB, for instance, use accounts that are not part of capital to serve as a cushion against loss. 
Additionally, when determining the levels of the “provisions” account, the Bundesbank and the ECB evaluated their exposure to exchange rate and interest rate risk. The Bank of Canada and the Bank of England, in contrast, do not face significant foreign exchange rate exposure in their accounts. The Federal Reserve System has not had an annual operating loss since 1915. From 1989 to 2001, the Reserve Banks incurred some weekly losses in which their weekly earnings were not sufficient to absorb the losses. The individual Reserve Banks drew on their capital surplus accounts at least 158 times to absorb weekly losses from 1989 to 2001. The frequency of transferring surplus funds to absorb losses declined from 1998 to 2001. Although numerous factors can influence a Reserve Bank’s net earnings, it appears that most of the weekly losses incurred by the Reserve Banks can be attributed to foreign currency revaluation. Federal Reserve Board officials noted that since the Reserve Banks began revaluing the Federal Reserve System’s foreign currency holdings on a daily basis rather than a monthly basis in July 2001, they expect the size of these revaluations will be reduced. The individual Reserve Banks transferred funds occasionally from their capital surplus accounts to absorb losses from 1989 through 2001. On the basis of Federal Reserve Board data, 11 of the 12 Reserve Banks reported a total of 352 weeks in which earnings were less than expenses and losses. (The 352 weeks were out of 7,337 possible occurrences during the approximately 13 years of data at the 11 Reserve Banks.) The individual Reserve Banks drew on their capital surplus accounts a cumulative 158 times, when the weekly loss was greater than the amount in the undistributed net income account. For the other 194 weekly losses, the undistributed net income was sufficient to absorb the losses. 
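The split between the 158 surplus draws and the 194 losses absorbed by undistributed net income reflects a simple rule: a weekly loss draws on the surplus account only when it exceeds the balance of undistributed net income. A minimal sketch of that classification, with hypothetical figures:

```python
def classify_weekly_losses(records):
    """Count weekly losses that required a draw on the capital surplus
    account versus those fully absorbed by undistributed net income.
    `records` is a list of (weekly_loss, undistributed_net_income) pairs;
    all figures are hypothetical."""
    surplus_draws = sum(1 for loss, uni in records if loss > uni)
    absorbed = sum(1 for loss, uni in records if 0 < loss <= uni)
    return surplus_draws, absorbed

# Three hypothetical loss weeks at one Reserve Bank (amounts in millions).
sample = [(12.0, 5.0), (3.0, 8.0), (20.0, 0.0)]
print(classify_weekly_losses(sample))  # (2, 1)
```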
The amount and frequency of the weekly losses incurred and the use of the capital surplus accounts varied across Reserve Banks. The Reserve Banks did not incur losses at the same frequency or magnitude because their portfolios of Treasury securities and foreign currency were not proportional across Reserve Banks. The size of a Reserve Bank’s Treasury securities portfolio is driven largely by the value of Federal Reserve notes issued by the Reserve Bank, but the size of its foreign currency portfolio is determined by the prior years’ capital and surplus account levels. Four of 11 Reserve Banks (Atlanta, Dallas, Kansas City, and Philadelphia) had to transfer funds from their surplus accounts to cover more than 50 percent of their weekly losses (see table 2). The remaining 7 Reserve Banks transferred capital surplus funds that ranged from 26 percent to 46 percent of their weekly losses. The Federal Reserve Bank of Minneapolis (FRBM) is not included in the table because, as explained below, the structure of its assets and liabilities differed significantly from that of the other Reserve Banks over the period surrounding the century date change, and its inclusion would skew the overall results. If the FRBM’s capital surplus transfers were included, the frequency would increase to 207 times. From May 2000 to December 2001, FRBM drew down its surplus account 24 times to absorb its weekly losses, compared with only 25 times for the entire previous 11-year period (from Apr. 5, 1989, through Mar. 1, 2000). FRBM’s surplus has not been fully restored to a level at which its value equates with its paid-in capital, and it has not made a payment to the Treasury since the statutorily mandated surplus transfer by the Consolidated Appropriations Act of 2000 was completed in May 2000. The Federal Reserve Board staff provided us with two reasons for this condition. 
First, FRBM’s share of Federal Reserve System earnings was low relative to its share of the $3.752 billion transfer in May 2000. According to a Federal Reserve Board official, FRBM’s lower earnings resulted from its relatively small share of the System Open Market Account compared with the other 11 Reserve Banks. For Year 2000 contingency purposes, FRBM stored a large amount of currency for the other Reserve Banks. FRBM was selected because its bank building had a large cash vault. To obtain currency to store for the other Reserve Banks, FRBM had to purchase a higher level of currency from the other Reserve Banks. FRBM essentially purchased this currency by reducing its share of the System Open Market Account. Second, increases in FRBM’s capital paid-in account due to mergers and acquisitions by its member banks increased the amount of capital surplus needed to match the value of its paid-in capital. Federal Reserve Board staff expect that FRBM will resume weekly payments to the Treasury in late 2002 or early 2003. During the period from 1989 to 2001, none of the Reserve Banks, including FRBM, entirely depleted their surplus accounts. Thus, the paid-in capital accounts were never needed to cushion any of the weekly losses the Reserve Banks incurred. After 1997, the frequency of capital surplus transfers by the Reserve Banks was considerably lower. From 1998 to 2001, the Federal Reserve System, excluding FRBM, averaged almost 5 surplus transfers annually compared with the period from 1989 to 1997, when the Federal Reserve System averaged over 15 surplus transfers annually. In 2001, the individual Reserve Banks, excluding FRBM, withdrew from their capital surplus accounts a total of eight times for a cumulative total of $292.4 million, almost 4.1 percent of the Federal Reserve System’s capital surplus account. It appears that most of the weekly losses that drew on the capital surplus accounts resulted from revaluation of foreign currency assets. 
Federal Reserve Board officials told us that, in reviewing the data for the losses, they could not recall or identify reasons other than foreign currency revaluation as the primary reason for the weekly losses. Although the Federal Reserve System’s asset portfolio is predominantly Treasury securities, it does include foreign currency holdings. As of December 31, 2001, the Federal Reserve’s foreign currency holdings were equivalent to $7.3 billion of euros, $7.2 billion of yen, and $65.6 million of interest receivables. When the dollar appreciates against a foreign currency, the value of the foreign currency holdings declines in dollar terms, and the Reserve Banks may incur a loss. According to Federal Reserve officials, such losses are the primary reason that Reserve Banks have drawn on their capital surplus accounts. Federal Reserve Board data on the Reserve Banks’ weekly losses that occurred since 1997 also suggested that the losses resulted from downward revaluation of foreign currency assets. Although none of the Reserve Banks’ capital surplus accounts were ever entirely depleted, all of the capital surplus accounts were significantly reduced by one particular foreign currency loss. During the week of April 3, 1991, every Reserve Bank, including FRBM, recognized a loss that drew down its capital surplus account, reducing the Federal Reserve System’s capital surplus by $1.67 billion. This loss represented almost a 67 percent reduction in the Federal Reserve System’s capital surplus account. As of December 31, 1991, the capital surplus account totaled $2.65 billion. For 10 of 12 Reserve Banks, the reductions in capital surplus that week were the largest incurred for the 12-year period. The reductions that week ranged from 49 percent to 93 percent of the respective Reserve Banks’ capital surpluses. 
The Reserve Banks of Dallas and Philadelphia needed to withdraw 91 percent and 93 percent of their capital surplus accounts, respectively, to absorb the loss. According to a Federal Reserve Board official, the huge net weekly loss was caused by a sharp appreciation of the U.S. dollar near the conclusion of the Gulf War. Weekly losses resulting from revaluation of foreign currency holdings may occur less frequently in the future because of a recent change in the Federal Reserve System’s procedures that resulted from the Federal Reserve Board study conducted following our 1996 report. The Reserve Banks now revalue their foreign currency holdings on a daily basis rather than a monthly basis, and Federal Reserve Board staff told us that they expect daily revaluations, which began in July 2001, to lessen the volatility of these revaluations. Under the previous arrangement, the earnings of the week during which the revaluation occurred had to absorb any revaluation loss that had built up during the month since the previous revaluation, often leading to losses during that week. Daily revaluations generally lead to smaller revaluation losses than revaluing on a monthly basis, according to Federal Reserve Board officials, making it less likely that they will exceed weekly earnings. Reducing the Federal Reserve surplus account would create a one-time increase in federal government receipts, thereby reducing the budget deficit (or increasing the federal budget surplus) at the time of the transfer. Because the Federal Reserve System is not included in the federal budget, a Reserve Bank transfer to the Treasury is recorded as a receipt under current budget accounting. This move would reduce future Reserve Bank earnings and in turn reduce their transfers to the Treasury in subsequent periods. 
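The smoothing effect of the switch from monthly to daily revaluation described above can be illustrated with a toy series of daily currency-value changes (all numbers invented): under monthly revaluation the entire month's change lands in the week containing the revaluation date, while daily revaluation spreads it across the weeks in which it accrued.

```python
# Hypothetical daily changes in the dollar value of foreign currency
# holdings over a four-week month (five business days per week).
daily_changes = [-2, 1, -1, -2, 1,    # week 1
                 1, -1, -2, -1, 1,    # week 2
                 -1, 2, -2, -1, -1,   # week 3
                 1, -2, -1, 1, -2]    # week 4 (contains month-end)

weeks = [daily_changes[i:i + 5] for i in range(0, len(daily_changes), 5)]

# Daily revaluation: each week bears its own share of the change.
weekly_daily = [sum(week) for week in weeks]

# Monthly revaluation: no revaluation effect in weeks 1-3; week 4
# absorbs the full month's cumulative change at once.
weekly_monthly = [0, 0, 0, sum(daily_changes)]

print(weekly_daily)    # [-3, -2, -3, -3]
print(weekly_monthly)  # [0, 0, 0, -11]
```

The total revaluation loss is the same either way; only its concentration into a single week changes, which is why daily revaluation makes it less likely that any one week's loss exceeds that week's earnings.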
Since the one-time transfer from the Federal Reserve System also increases Treasury’s cash balance, over time the Treasury would sell fewer securities to the public and thus pay less interest to the public. Over time, the lower interest payments to the public approximately offset the lower receipts from Federal Reserve earnings. After the temporary capital surplus reduction in 2000, transfers of Reserve Bank net earnings to the Treasury were lower as the Reserve Banks replenished their capital surplus accounts. However, a permanent capital surplus reduction would also reduce future Reserve Bank earnings because the Reserve Banks would hold a smaller portfolio of securities. Since reducing the surplus does not produce new resources for the government, however, there would not be significant economic effects from its reduction. “…the transfer of surplus funds from the Federal Reserve to the Treasury has no import for the fiscal status of the Federal government… Where the funds reside has no economic significance. Hence, any transfer of the Federal Reserve surplus fund to the Treasury would have no effect on national savings, economic growth, or income.” Permanently reducing the Federal Reserve System’s capital surplus account would yield a one-time increase in federal receipts, under budget accounting; the transfer would have no net budgetary effect in subsequent years. Both OMB and Treasury officials told us that reducing the capital surplus account would cause the Reserve Banks to sell part of their Treasury securities portfolios. This move would reduce Reserve Bank earnings and, in turn, reduce payments to the Treasury in subsequent periods. This reduction in future transfers to the Treasury would occur even if the Reserve Banks were not allowed to replenish their capital surplus accounts. 
As a hypothetical example, suppose that the Federal Reserve System were to reduce permanently its surplus account by $1 billion, and, to simplify the example, that it did so by selling $1 billion in Treasury securities at the end of a fiscal year and transferring the proceeds to the Treasury. This one-time transfer would increase federal revenues by $1 billion and, assuming no changes in fiscal policy, reduce that year’s deficit by $1 billion. With a smaller portfolio, the Reserve Banks’ annual earnings on their Treasury securities would decline by about $43 million, on the basis of the August 2002 interest rate on newly issued 10-year notes. As a result, the Federal Reserve’s annual payments to the Treasury would also decline by about $43 million for each of the next 10 years. This $43 million, however, is approximately offset by a decrease in interest that Treasury must pay. Receipt of the $1 billion permits Treasury to sell less debt to the public. Continuing the hypothetical example, if the Treasury were to use the $1 billion to reduce its issuance of 10-year notes, its borrowing costs would decrease by $43 million. Treasury’s continued outlays for interest on the $1 billion of securities that the Federal Reserve System sold would thus be approximately offset by the interest expense that Treasury no longer would incur in selling the new securities. OMB staff explained that it would be impossible to quantify the exact budgetary effect of permanently reducing the capital surplus account, since the securities that the Federal Reserve System would sell to reduce the surplus account would not necessarily have the same interest rate as those that Treasury would no longer sell, nor the same interest rate as Treasury receives on its operating accounts held at the Federal Reserve System. 
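The offset in the hypothetical example above is one line of arithmetic. The 4.3 percent rate below is inferred from the $43 million figure on $1 billion, standing in for the August 2002 10-year note rate; as OMB staff noted, the actual rates on the two sides of the offset would not match exactly.

```python
transfer = 1_000_000_000   # hypothetical one-time surplus reduction, in dollars
rate = 0.043               # implied 10-year note rate (inferred assumption)

# Annual Federal Reserve earnings forgone on the securities sold, which
# lowers the Reserve Banks' yearly payment of net earnings to the Treasury.
lost_fed_remittance = transfer * rate

# Annual interest the Treasury avoids by issuing $1 billion less in notes.
avoided_interest = transfer * rate

print(round(lost_fed_remittance))  # 43000000
print(round(avoided_interest))     # 43000000 -- the two flows roughly offset
```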
In a provision of the Omnibus Budget Reconciliation Act of 1993, Congress directed for fiscal years 1997 and 1998 that the amount in the surplus account of any Reserve Bank in excess of the amount equal to 3 percent of the total paid-in capital and surplus of its member banks should be transferred to the Treasury. Moreover, the act required that the surplus accounts be reduced an additional $106 million in fiscal year 1997 and $107 million in fiscal year 1998 and that the amounts be transferred to Treasury. These transfers were made on October 1, 1997, and 1998, respectively. Also, under the act, the Reserve Banks were not permitted to replenish the surplus for these amounts during fiscal years 1997 and 1998. As of December 31, 1998, the capital surplus account and the paid-in capital account were equal. Although the act did not specifically state the purpose of those transfers, its effect was to reduce the federal government’s deficit in those years. The capital surplus transfer mandated by the Consolidated Appropriations Act of 2000 resulted in a one-time increase in reported federal receipts but was clearly offset by lower Reserve Bank net earnings payments to the Treasury in the subsequent fiscal year. One reason for this is that the 2000 surplus reduction was temporary: the act prohibited the Reserve Banks from replenishing their surplus funds by the amounts they transferred in that fiscal year but did not prohibit subsequent replenishment. As previously stated, the Consolidated Appropriations Act directed the Reserve Banks to transfer to the Treasury surplus funds of $3.752 billion during fiscal year 2000. Under the act, the Reserve Banks were not permitted to replenish the capital surplus amounts transferred during fiscal year 2000. Because the Federal Reserve Board has discretion over how much it transfers to the Treasury, the Reserve Banks began replenishing the accounts as soon as they were legally allowed to in October 2000. 
To replenish the capital surplus accounts, the Reserve Banks ceased payments of their net earnings to the Treasury until the accounts were restored. In November 2000, CBO reported that receipts from the Federal Reserve System were $1 billion lower in October 2000 than they had been in October 1999 because the Federal Reserve System had temporarily stopped its weekly payments to the Treasury. Moreover, CBO noted that the Reserve Banks were replenishing their capital surplus accounts from earnings that would otherwise be paid to the Treasury and were not likely to resume their weekly payments until December 2000 or possibly later. Federal Reserve Board data on the replenishment of the Reserve Bank surplus accounts indicated that the Reserve Banks of Boston, Chicago, Dallas, Kansas City, and Philadelphia did not transfer any earnings to Treasury for as long as 5 to 6 weeks. Any reduction in the capital surplus account would not have a significant effect on Treasury’s financial management, according to Treasury officials. First, the capital surplus account represents a small fraction of the total federal budget. The capital surplus account was $7.3 billion as of December 31, 2001, while total federal outlays during fiscal year 2001 totaled $1,863.9 billion; thus the capital surplus account was less than one-half of 1 percent of outlays. These officials observed that the capital surplus account balance represented a small percentage of the total amount of Treasury securities outstanding in a year. As of June 30, 2002, the total amount of Treasury securities outstanding was $6,126.5 billion. Finally, these officials noted that while the surplus account would be significant relative to Treasury’s cash balances, these balances vary considerably on a monthly basis. While Treasury monthly cash balances averaged about $24 billion in fiscal year 2001, for instance, average monthly balances ranged from $12.1 billion to $43.2 billion. 
The Federal Reserve System maintains the surplus account to absorb losses. Since 1989, most of the weekly losses that resulted in using the capital surplus account were apparently due to monthly revaluation of the Federal Reserve System’s holdings of foreign currencies. In most cases, the capital surplus account was replenished soon after absorbing the loss, and no Reserve Bank ever completely depleted its capital surplus account. Since 2001, however, the Federal Reserve System has begun recognizing gains or losses on its foreign currency holdings on a daily basis rather than a monthly basis. This change should lessen the use of the capital surplus account. The surplus account has grown substantially since 1996, reflecting the growth in the member banks’ capital and therefore their paid-in capital, which the Federal Reserve System uses as the basis for determining the targeted value of the surplus account. Reducing the surplus account, however, would provide only a one-time increase in measured federal government receipts, reflecting a transfer from Reserve Banks to the Treasury. There would not be a significant economic effect from reducing the surplus account. We requested comments on a draft of this report from the Federal Reserve Board, OMB, and the Treasury. The Federal Reserve Board’s comments are reprinted in appendix II. The Federal Reserve Board said that it generally agreed with the information in and conclusions of the report. The Federal Reserve Board also noted that it had separately provided technical corrections; we have incorporated these corrections where appropriate. OMB and the Treasury declined comment, although their staffs provided technical corrections that we have incorporated. We also obtained and incorporated technical corrections on a draft of this report from CBO. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issuance date. 
At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs, and the House Committee on Financial Services. We will also send copies to the Chairman of the Board of Governors of the Federal Reserve System, the Secretary of the Treasury, the Director of the Congressional Budget Office, and the Director of the Office of Management and Budget. We will make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me or James McDermott, Assistant Director, at (202) 512-8678. Other key contributors to this report were Nancy Eibeck and Josie Sigl. To describe the Federal Reserve System’s rationale for maintaining a capital surplus account and to understand the capital accounts held at the Reserve Banks, we interviewed Federal Reserve Board officials primarily from the Division of Monetary Affairs and the Division of Reserve Bank Operations and Payment Systems. We reviewed and analyzed sections of the Federal Reserve Act pertaining to the paid-in capital and surplus transfers and the Consolidated Appropriations Act of 2000. We also reviewed the financial statements of the Reserve Banks from 1996 to 2001. To review the policies and practices of foreign central banks regarding accounts that serve similar functions as the capital surplus account, we judgmentally selected four central banks: the Bank of Canada, the Bank of England, the Bundesbank, and the European Central Bank. To verify our interpretation of their published reports, legal requirements, and financial statements, we contacted members of the staffs of the Bank of England and Her Majesty’s Treasury (Treasury of the United Kingdom), the Bank of Canada, the Bundesbank, and the European Central Bank. 
We collected and reviewed annual financial statements from the four central banks for the years from 1996 to 2001 to compare their capital and surplus accounts and their asset and liability structures. The comparability of these data with the Federal Reserve System's data is limited, however, because of differences in accounting practices. To describe the Reserve Banks’ use of the capital surplus account from 1989 to 2001, we analyzed historical data on weekly losses for all 12 Reserve Banks. These data included the net income or loss of the prior Wednesday, the amount of the weekly loss, the amount of the Treasury payment, the amount of surplus withdrawn, the amount of undistributed net income, and the amount in the surplus account before and after the weekly loss. Federal Reserve Board staff collected the data from the 12 Reserve Banks’ balance sheet information. We did not audit the Reserve Bank accounting from which the data on the weekly losses were derived. We reviewed only those weeks in which expenses and losses exceeded revenues and gains for each of the 12 Reserve Banks; we did not review weeks in which revenues and gains exceeded expenses. The data provide limited information on the causes of the weekly losses incurred by the Reserve Banks; Federal Reserve Board staff confirmed the causes only for those weekly losses that occurred from 1997 to 2001. We also analyzed the Board of Governors of the Federal Reserve System’s Annual Reports from 1996 to 2001 to determine the trend in both the capital surplus and the paid-in capital accounts. To determine the reason for the growth in the paid-in capital accounts, we reviewed Federal Reserve Board data on the aggregate member bank capital and surplus from 1996 to 2001. According to Federal Reserve Board staff, the aggregate data provided to us were drawn from bank call reports. 
To describe and determine the potential effects of reducing or eliminating the surplus account on the federal budget and the economy, we interviewed officials from the Federal Reserve Board, the Department of the Treasury, the Office of Management and Budget, and the Congressional Budget Office (CBO). We reviewed the Consolidated Appropriations Act of 2000 (P.L. 106-113, Section 302). We also reviewed reports from CBO on the Reserve Banks’ transfers of net earnings to the Treasury. We conducted our work in Washington, D.C., between April 2002 and August 2002 in accordance with generally accepted government auditing standards. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. 
To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading.
The Board of Governors of the Federal Reserve System (Federal Reserve Board) reviewed its policies regarding the size of the Federal Reserve Banks' combined capital surplus account to determine if opportunities exist to decrease the amount held in the account. The consolidated capital surplus account is the aggregate of separate surplus accounts held at each of the 12 Reserve Banks, and the account represents cumulative retained net earnings for the Reserve Banks--that is, cumulative net earnings not paid to the Department of the Treasury. The Reserve Banks use their capital surplus accounts to act as a cushion to absorb losses. The Financial Accounting Manual for Federal Reserve Banks says that the primary purpose of the surplus account is to provide capital to supplement paid-in capital for use in the event of loss. Selected major foreign central banks maintain accounts with functions similar to the Federal Reserve System's capital surplus account. Although their accounts are not fully comparable with the Federal Reserve System capital surplus account, the Bank of England, the Bundesbank, and the European Central Bank have capital surplus or reserve accounts in addition to their paid-in capital accounts that are used as cushions against loss. The Federal Reserve System calculates earnings and transfers excess earnings to the Treasury on a weekly basis. Although the Federal Reserve System has not had an annual operating loss since 1915, the Reserve Banks recorded some weekly losses from 1989 through 2001, temporarily reducing their capital surplus accounts to cover these weekly losses. Reducing the Federal Reserve System capital surplus account would create a one-time increase in federal receipts, but the transfer by itself would have no significant long-term effect on the budget or the economy. 
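The weekly settlement mechanics described above can be illustrated with a deliberately simplified sketch. The balances below are hypothetical, and the logic is a rough caricature of the accounting described in this report, not the Reserve Banks' actual procedures; in particular, the replenishment of the surplus account from subsequent earnings, which the report notes usually occurs soon after a loss, is omitted here.

```python
def weekly_settlement(surplus, weekly_net):
    """Caricature of the weekly mechanics: excess earnings go to the
    Treasury; a weekly loss is temporarily absorbed by the surplus account."""
    if weekly_net >= 0:
        return surplus, weekly_net   # surplus unchanged; earnings transferred
    return surplus + weekly_net, 0   # surplus absorbs the loss; no transfer

# Hypothetical balances, in $ millions
print(weekly_settlement(1_000, 55))   # (1000, 55): normal week, $55M to Treasury
print(weekly_settlement(1_000, -40))  # (960, 0): loss week, surplus dips to $960M
```

This simplified view is consistent with the report's observation that no Reserve Bank ever completely depleted its surplus account: weekly losses were small relative to surplus balances and were soon offset by later earnings.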
Amounts transferred to the Treasury from reducing the capital surplus account would be treated as a receipt under federal budget accounting but would not produce new resources for the federal government as a whole.
At any given time, the United States has a large portion of its military personnel serving abroad in forward-deployed locations. The forward presence of military forces at overseas locations is critical to supporting U.S. strategic interests. Forward-deployed forces provide the basic building blocks with which to project military power in crises and strengthen U.S. military access. While the numbers of personnel and locations vary with the frequency and types of military operations and deployment demands, military operations in Iraq and Afghanistan have led to the creation of several hundred new locations. Operational control of U.S. military forces at forward-deployed locations is assigned to the nation’s six geographic, unified overseas regional commands, which include Central Command. For current operations, Central Command has identified the need for forward-deployed locations within its area of responsibility to meet mission requirements, and its military service components have been responsible for establishing and maintaining the locations. DOD is likely to continue its use of forward-deployed locations in support of future U.S. defense strategy. In recent years, DOD has been undergoing a transformation to develop a defense strategy and force structure capable of meeting changing global threats. As part of its transformation, DOD has been reexamining overseas basing requirements to allow for greater U.S. military flexibility to combat conventional and asymmetric threats worldwide. U.S. military presence overseas has been converting from a posture established on familiar terrain to counter a known threat to one that is intended to be capable of projecting forces from strategic locations into relatively unknown areas in an uncertain threat environment. In 2008, more than 68 million gallons of fuel, on average, were supplied by DOD each month to support U.S. forces in Iraq and Afghanistan. 
While DOD’s weapon systems require large amounts of fuel—a B-52H, for example, burns approximately 3,500 gallons per flight hour—the department reports that the single largest battlefield fuel consumer is generators. Generators provide power for base support activities such as air conditioning/heating, lighting, refrigeration, and communications. A 2008 Defense Science Board Task Force report noted that Army generators alone consume about 26 million gallons of fuel annually during peacetime and 357 million gallons annually during wartime. Fuel is delivered to forward-deployed locations in Iraq via three main routes—from Kuwait in the south, Jordan in the west, and Turkey in the north—and to forward-deployed locations in Afghanistan via two main routes—from Central Asian states in the north and from Pakistan in the east. According to the Defense Energy Support Center, an organization within the Defense Logistics Agency that manages contracts for the department’s fuel acquisitions and distribution, approximately 1.7 million gallons of jet fuel are delivered into Iraq and approximately 300,000 gallons of jet fuel are delivered into Afghanistan each day, in addition to other types of fuel, such as diesel, motor gasoline, and aviation gasoline. At one truck fill stand that we visited in Kuwait in June 2008, about 125 trucks, each holding 9,000 gallons of fuel, were loaded daily for delivery to forward-deployed locations in Kuwait and Iraq. High fuel requirements on the battlefield can place a significant logistics burden on military forces, exposing supply convoys to risk. For example, long truck convoys moving fuel to forward-deployed locations have encountered enemy attacks, severe weather, traffic accidents, and pilferage. Army officials have estimated that about 70 percent of the tonnage required to position Army forces for battle consists of fuel and water. 
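The delivery figures above support a back-of-envelope cross-check. The quantities below are taken from this report; the 9,000-gallon truck capacity is the figure observed at the Kuwait fill stand, and the results are illustrative minimums only, since actual truck counts vary by fuel type and route.

```python
import math

# Daily jet fuel deliveries reported by the Defense Energy Support Center
iraq_gallons_per_day = 1_700_000
afghanistan_gallons_per_day = 300_000
truck_capacity_gallons = 9_000  # capacity observed at the Kuwait fill stand

# Minimum daily truckloads implied by the delivery volumes
trucks_iraq = math.ceil(iraq_gallons_per_day / truck_capacity_gallons)
trucks_afghanistan = math.ceil(afghanistan_gallons_per_day / truck_capacity_gallons)
print(trucks_iraq)          # 189 truckloads per day for Iraq
print(trucks_afghanistan)   # 34 truckloads per day for Afghanistan

# The single Kuwait fill stand loading about 125 trucks daily moves:
print(125 * truck_capacity_gallons)  # 1,125,000 gallons per day
```

These figures underscore the scale of the convoy burden described above: the jet fuel deliveries into Iraq alone imply on the order of two hundred truckloads every day.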
Most fuel deliveries to forward-deployed locations in Afghanistan are made by commercial contractors, and there is no military-provided protection for the supply convoys other than the protection contractors provide themselves. DOD officials reported that in June 2008, for example, 44 trucks and 220,000 gallons of fuel were lost due to attacks or other events. While fuel delivery issues have not been as severe in Iraq recently, the U.S. military provides force protection to supply convoys in Iraq, increasing the logistics burden. Fuel delivery to locations outside of Iraq and Afghanistan may not be subject to battlefield conditions but is also logistically complex. For example, Camp Lemonier receives its fuel through the Djiboutian port. Fuel is loaded from the port into the storage tanks as it arrives, and trucks also make daily runs to the port to bring fuel to the camp. DOD reported that it consumed almost 4.8 billion gallons of mobility fuel and spent $9.5 billion in fiscal year 2007, in addition to its costs for fuel consumed at fixed U.S. installations. While fuel costs represent less than 3 percent of DOD’s total budget, they can have a significant impact on the department’s operating costs. DOD has estimated that for every $10 increase in the price of a barrel of oil, DOD’s operating costs increase by approximately $1.3 billion. DOD organizations pay a standard price for fuel that differs from the market price. The Office of Management and Budget (OMB) establishes for DOD the price the department will use when constructing its budget for upcoming fiscal years. DOD in turn uses OMB's price in establishing the standard price to be used for a barrel of fuel for budgeting purposes by DOD's customers, such as the military services. 
Because of the volatility of world petroleum prices, the standard price for a barrel of fuel included in the President's annual budget request for DOD may be lower or higher than the actual price established by the world market at any point in time after DOD's budget request is submitted to the Congress. The fiscal year 2009 President's budget assumed a standard fuel price of $115.50 per barrel. At the time of this report, the price DOD charged its customers was $104.58 per barrel, or $2.49 per gallon of jet fuel (JP8). In the past, DOD's standard fuel price was typically adjusted annually. However, with rising fuel costs in recent years, the price has been adjusted more frequently. Effective July 1, 2008, for example, DOD raised the standard fuel price per barrel from $127.68 to $170.94; and effective December 1, 2008, DOD lowered the standard fuel price per barrel from $170.94 to $104.58. Because the military services prepare their annual budgets based on the approved fuel price in the President's budget, market volatility resulting in out-of-cycle fuel price increases can be difficult for the services to absorb. DOD has received supplemental appropriations from the Congress in recent years to cover budget shortages associated with rising fuel prices. Moreover, the fully burdened cost of fuel—that is, the total ownership cost of buying, moving, and protecting fuel in systems during combat—can be much greater than the cost of fuel itself. A 2008 Defense Science Board Task Force report noted that preliminary estimates by the OSD Program Analysis and Evaluation office and the Institute for Defense Analyses indicate that the fully burdened cost of a $2.50 gallon of fuel could begin at about $15, assuming no force protection requirements for supply convoys, and increases as the fuel moves further onto the battlefield. Fuel delivered in-flight has been estimated at about $42 a gallon, though the report notes that these figures are low. 
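The prices and cost estimates in the preceding paragraphs are internally consistent, as a back-of-envelope check shows. All figures below are taken from this report; the 42-gallon barrel is the standard conversion, and the consistency check against the $10-per-barrel sensitivity is an arithmetic sketch, not a DOD calculation.

```python
GALLONS_PER_BARREL = 42

# The standard per-barrel prices cited above convert to per-gallon prices
print(round(104.58 / GALLONS_PER_BARREL, 2))  # 2.49 (the JP8 price at the time of this report)
print(round(127.68 / GALLONS_PER_BARREL, 2))  # 3.04
print(round(170.94 / GALLONS_PER_BARREL, 2))  # 4.07

# Sensitivity check: a $10-per-barrel increase raising operating costs by
# ~$1.3 billion implies annual consumption of about 130 million barrels
# (~5.5 billion gallons), broadly consistent with the 4.8 billion gallons of
# mobility fuel reported for fiscal year 2007 plus fuel at fixed installations.
implied_barrels = 1.3e9 / 10
print(implied_barrels * GALLONS_PER_BARREL)   # 5,460,000,000 gallons

# Fully burdened multipliers implied by the Defense Science Board estimates
print(15 / 2.50)   # 6x the purchase price for ground delivery, no force protection
print(42 / 2.50)   # 16.8x for fuel delivered in-flight
```

The multipliers illustrate why demand reduction matters more than the commodity price alone: each gallon not consumed on the battlefield avoids several times its purchase cost in delivery and protection.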
OSD has initiated a pilot program to determine the fully burdened cost of fuel for three mobile defense systems. Concerns about future fuel costs, price volatility, and fuel availability have led the Air Force to undertake an effort to certify its entire aircraft fleet to run on a synthetic blend of alternative and jet fuels by early 2011. The Air Force has established a goal to acquire 50 percent of the aviation fuels it uses within the United States from domestic sources by 2016. Fuel distribution is a complex process involving several DOD offices. Joint Publication 4-03 sets forth principles and establishes doctrine for bulk petroleum and water in support of U.S. military operations. The combatant commander has the predominant responsibility for fuel within a theater, and this responsibility is discharged by its Joint Petroleum Office. The Joint Petroleum Office is responsible for the overall planning of petroleum for operations, and it may establish sub-area petroleum offices as needed to support specific petroleum requirements. The Director, Defense Logistics Agency, as the integrated materiel manager for bulk petroleum, is responsible for meeting the petroleum support requirements of the combatant commands and military services. These functional responsibilities have been delegated to the Director, Defense Energy Support Center, which is responsible for procurement, transportation, ownership, accountability, budgeting, quality assurance, and quality surveillance. It also plans and budgets for the construction and repair of fuel storage and distribution facilities, monitors the petroleum market, and negotiates international agreements for energy commodities. Each military service has responsibilities for providing petroleum support. The Army normally provides management of petroleum support to U.S. land-based forces of all DOD components. However, actual movement of bulk petroleum may include the use of commercial vehicles and associated infrastructure. 
The Air Force provides distribution of bulk petroleum products by air within a theater where immediate support is needed at remote locations. The Navy provides bulk petroleum products for U.S. sea- and land-based forces. The Marine Corps maintains a capability to provide bulk petroleum support to Marine Corps units. Within Central Command’s area of responsibility, military units communicate their fuel requirements, which are based on historical usage and planned rotations, to the sub-area petroleum offices. The sub-area petroleum offices in turn provide these requirements to Central Command’s Joint Petroleum Office for validation. Once the requirements are validated, the Defense Energy Support Center determines the most appropriate means to support the requirements and provides for the distribution of the fuel up to the “point of sale.” The point of sale is the point at which the customer takes possession of the fuel. The Defense Energy Support Center owns and tracks the fuel up until this point, at which time the fuel may be placed directly into a weapons system or battlefield storage unit or handed off to the customer to move to a forward-deployed location. Section 902 of the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 requires that DOD establish a Director of Operational Energy Plans and Programs, who shall be appointed by the President and confirmed by the Senate, to provide leadership and facilitate communication and conduct oversight of operational energy plans and programs within the department and military services. Among other responsibilities the director shall establish and maintain an operational energy strategy for the department; serve as the principal adviser to the Secretary of Defense and the Deputy Secretary of Defense on operational energy plans and programs; and consider operational energy demands in defense planning, requirements, and acquisition processes. 
In addition, the act requires that, within 90 days of the director’s appointment, the secretaries of the military departments each designate a senior official to be responsible for operational energy plans and programs for his respective service. These senior service officials shall be responsible for coordinating with the director and implementing operational energy initiatives. The act further requires DOD to consider fuel logistics support requirements in the department’s planning, requirements development, and acquisition processes—including the consideration of the fully burdened cost of fuel when analyzing fuel-consuming system alternatives. The act also includes other energy requirements, including that DOD conduct a study on the feasibility of using solar and wind energy to provide electricity to deployed forces and the extent to which the use of such alternative energy sources could reduce the risk of casualties associated with convoys supplying fuel to forward-deployed locations. The Secretary of Defense, acting through the director, must also submit, on an annual basis, a report to the congressional defense committees on operational energy management and the implementation of the operational energy strategy. DOD components have efforts under way or planned for reducing fuel demand at forward-deployed locations. Many of these efforts are in a research and development phase, and the extent to which they will be fielded and under what time frame is uncertain. Notable efforts include the application of foam insulation to tent structures, the development of more fuel-efficient generators and environmental control units, and research on alternative and renewable energy sources for potential use at forward-deployed locations. In addition, during our visits to Kuwait and Djibouti, we found local camp efforts aimed at reducing fuel demand. 
DOD is beginning to apply foam insulation on tents at some forward-deployed locations to reduce energy demand for powering these structures. In 2006, the DOD Power Surety Task Force was created in response to a joint urgent operational needs statement from a senior U.S. commander in Iraq calling for alternative energy sources to reduce the amount of fuel transported to forward-deployed locations for power generation. A mission of the task force is to identify and demonstrate emerging or commercial off-the-shelf technology that can reduce DOD’s fuel demand. As one of several initiatives, the task force has demonstrated the benefits of applying foam insulation on temporary structures such as military tents. According to task force officials, tests show that the application of foam insulation reduces dust, heat, cold, noise, and air conditioning requirements, which in turn reduces generator-powered fuel demand. For example, the officials said that military tents insulated with foam at Fort Benning, Georgia, used half the climate control units and required 75 to 90 percent less power than non-insulated tents. (See fig. 2.) The Power Surety Task Force first demonstrated this technology at Fort Benning in January 2007, and later at some forward-deployed locations in Iraq, Afghanistan, Kuwait, and Djibouti. According to task force officials, based on the results of a recent demonstration of this technology, DOD decided to pursue a large-scale effort to apply foam insulation to temporary structures, such as military tents, in Iraq to reduce the number of generators needed to power those structures. According to a Central Command official, the tent foaming initiative could reduce energy consumption by approximately 50 percent, potentially reducing the number of convoys needed to supply fuel to locations in Iraq, although metrics had not yet been established to systematically measure efficiency. 
However, the contract for this initiative was terminated prior to completion, effective December 16, 2008. According to another Central Command official, the contract was terminated early due to contractor performance as well as support issues. At the time the contract was terminated, a DOD contractor noted that foam insulation had been applied to about 900 temporary structures (3.8 million square feet) at 10 forward-deployed locations in Iraq. According to a senior Army official, DOD has also issued a $29 million contract to apply foam insulation to tent structures in Afghanistan, though he did not expect foaming to proceed until after the winter weather in Afghanistan subsided. In addition to foam insulation, the DOD Power Surety Task Force is demonstrating other potential energy-saving technologies for use at forward-deployed locations. The Power Surety Task Force initiated a 3-year demonstration project—called the Net-Zero Plus Joint Capability Technology Demonstration—at the National Training Center in Fort Irwin, California, to demonstrate some of these technologies and solicit feedback from visiting military personnel. (App. III provides additional information on task force initiatives.) Another DOD effort to reduce fuel demand is the development of new fuel-efficient generators and environmental control units. A DOD joint program organization, the Project Manager-Mobile Electric Power office, is responsible for providing a modernized standard family of mobile electric power generators to the military services. According to the office, many of DOD’s generators have been in use for about three decades, exceeding their expected life cycle of 15 years. The office is developing a next generation of generators, called the Advanced Medium Mobile Power Sources, which employ advanced technologies to achieve greater fuel efficiency and other improvements over current military generators. 
When fully fielded, the new generators are expected to consume approximately 28 million gallons less fuel per year than the tactical quiet generators currently in use by the Army. According to a Project Manager-Mobile Electric Power official, DOD plans to begin procuring these new generators in 2010 at a weighted average cost of about $18,000 per generator. In addition, officials said that the Project Manager-Mobile Electric Power office intends to replace its current environmental control units with improved environmental control units to provide cooling, heating, and dehumidifying for servicemembers and material systems. The improved units are expected to reduce energy consumption by up to 25 percent over current units. (See fig. 3.) An official told us that one version of the improved units is currently in low-rate initial production and a contract for another version is expected to be awarded in February 2009. The Project Manager-Mobile Electric Power office also has initiatives under way to improve the efficiency of power generation. For example, the office has fielded a more fuel-efficient method of generating power, called Central Power, at tactical operations centers (command posts) for the Army’s 4th Infantry Division. Previously, power for these operations centers was provided by many small generator sets that had a large logistics footprint; required considerable fuel, maintenance, and personnel to operate; and were subject to disruptions in continuous power. The Central Power concept uses fewer, larger generators to provide independent “islands” of power generation, decreasing fuel consumption and the logistics footprint. According to Project Manager-Mobile Electric Power officials, Central Power saved the 4th Infantry Division roughly $384,000 during its first year in use. 
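The projected generator savings can be put in rough dollar terms. The per-gallon prices below are the $2.49 standard price and the ~$15 low-end fully burdened estimate cited elsewhere in this report; the result is an illustrative sketch, not a DOD estimate.

```python
# Projected fleetwide savings from the next-generation generators
annual_gallons_saved = 28_000_000   # vs. the Army's current tactical quiet generators

# At the $2.49-per-gallon standard price in effect at the time of this report
print(round(annual_gallons_saved * 2.49))   # 69,720,000 dollars per year

# At the ~$15 low-end fully burdened estimate for ground-delivered fuel
print(annual_gallons_saved * 15)            # 420,000,000 dollars per year
```

Against avoided fuel costs of this order, the roughly $18,000 weighted average unit cost is modest, though actual payback depends on fleet size, fielding pace, and operating tempo.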
The officials said that they have plans to field Central Power to all active Army components by the end of fiscal year 2009 but noted that the Army had not yet updated its equipment requirements list for units to include Central Power as a requirement. Finally, the Project Manager-Mobile Electric Power office is pursuing a $30 million, 6-year program to develop a future generation intelligent management grid architecture, called Hybrid Intelligent Power (HI-Power). HI-Power is intended to provide a flexible, grid-based architecture that enables “plug-and-play” power generation using a variety of power sources, including military and commercial generators, vehicles, and renewable energy sources such as wind and solar. According to the officials, HI-Power will automatically start and stop generators to match demand and store energy for transient power requirements. The intended benefits include reduced fuel consumption (by 17 to 40 percent depending on the scenario), maintenance, personnel requirements, and power interruptions. The officials are planning for HI-Power to go into production during fiscal year 2013. Several military services are exploring the use of alternative and renewable energy technologies to generate power at forward-deployed locations and reduce the demand for petroleum-based fuel. The Air Force Research Laboratory created the Renewable Energy Tent City—a collection of various deployable shelters powered by solar and fuel cell generators situated at Tyndall Air Force Base, Florida. (See fig. 4.) The purpose of this research is to evaluate renewable energy technologies for use at forward-deployed locations, according to an Air Force Research Laboratory official. The official told us that the laboratory is assessing, among other technologies, a solar-integrated cover that it developed to generate power for small shelter systems using a form of solar cell technology. 
The Air Force is also engaged in other research and development projects involving the use of fuel cell, biofuel, and other alternative and renewable energy technologies. The Marine Corps has several research initiatives under way to develop alternative energy systems for forward-deployed locations. For example, the Marine Corps Systems Command is working on the Deployable Renewable Energy Alternative Module. This module is intended to be towed by a vehicle and is designed to be used at forward-deployed locations to temporarily power radios or computers until fuel can be resupplied to the locations by employing solar, wind turbine, battery, and generator technologies. (See fig. 5.) A Marine Corps Systems Command official told us that three prototypes have been designed using variations of these alternative technologies and that two have been tested. The official noted, however, that the technologies used for the modules have limitations. For example, vendors for this project developed solar cells that either required large surface areas—equal to the size of a tennis court—to recharge batteries or were too fragile for use in an austere environment. Furthermore, according to the official, the prototypes were not cost effective, since comparable diesel-powered generators with a 30-day supply of fuel could be procured and transported for considerably less. For these reasons, the modules are not likely to be deployed in the field, but the official said that lessons learned from this project could be used to inform the development of future energy systems. The Army is also investigating ways of reducing fuel demand at forward-deployed locations through various research initiatives. For example, the Army Research Laboratory is working with universities and private sector firms to develop a processor that converts tires into energy and recyclable products that can be used at forward-deployed locations. (See fig. 6.) 
The scrap tire recycling process produces diesel, gas, carbon char, and steel—byproducts that can either be used to power generators, boilers, and other items or recycled into products such as asphalt and paint. Project partners estimated that 2.7 million gallons of diesel fuel per year could be produced from a tire recycling facility operating at a forward-deployed location in Iraq, thereby reducing the number of trucks needed to deliver fuel. The fuel produced from this process is currently being tested to determine if it meets Army standards. In addition, the Army is providing support to the Project Manager-Mobile Electric Power office for the development of the HI-Power program, and to the Power Surety Task Force for the development of a refinery system designed to convert trash into energy. Using a similar form of technology, the Army, working with the Defense Energy Support Center, intends to demonstrate six waste-to-fuel (diesel fuel) plants at six U.S. Army locations over the next year, according to a senior Army official. During our visits to forward-deployed locations in Kuwait and Djibouti, we found some local efforts by camp officials to reduce fuel demand. In Kuwait, an official at Camp Arifjan shared plans to consolidate loads on small generators by creating groupings—or networks—of multiple generators, which could improve overall efficiency and reduce the number of generators that operate at most times of the year. Camp officials said that they would like to build a centralized power plant for the location’s communication complex by 2010. An Army official also told us that foam insulation was being used to cool tent structures in Kuwait, where outside temperatures can exceed 120 degrees Fahrenheit. According to the official, insulated facilities require fewer ventilation and air conditioning units to maintain cooler temperatures, thereby providing 20 to 40 percent in energy savings and reducing the wear and tear on the camp’s generator fleet. 
As of June 2008, the Army was in the process of insulating tents with foam at Camp Buehring, Kuwait, and had plans to insulate tents at Camp Virginia, Kuwait. The official also told us that other alternatives to increase cooling efficiency, such as the use of special ceramic paints, were being explored. During our visit to Camp Lemonier, Djibouti, Navy officials told us that they allowed the Power Surety Task Force to apply foam insulation on the tent exterior of the camp’s gymnasium in August 2007. According to camp officials, the temperature inside the air-conditioned tent decreased from 95-100 degrees Fahrenheit to about 72 degrees after the foam insulation was applied. The officials also said that they were able to remove two of the five air conditioning units used to cool the facility, resulting in an estimated fuel savings of 40 percent. The officials were pleased with the reduced fuel demand and improved quality of life that were produced as a result of the demonstration. However, they noted that the disadvantages of the foam include more material to dispose of when the tent is disassembled and difficulty in moving or rearranging the tent after the foam was applied. In addition to the foaming, Camp Lemonier officials had also posted signs aimed at modifying the behavior of personnel to conserve energy. The signs included tips on taking shorter showers, using less air conditioning, and unplugging transformers when not in use. Camp officials had also developed an energy savings plan to reduce electrical consumption. Although DOD is undertaking a number of initiatives focused on reducing fuel consumption, it lacks an effective approach for implementing fuel reduction initiatives and maintaining sustained attention to fuel demand management at its forward-deployed locations. 
DOD has stated that it needs to reduce its dependence on petroleum-based fuel and the logistics “footprint” of its military forces, as well as reduce operating costs associated with high fuel usage. In 2008, the Under Secretary of Defense for Acquisition, Technology and Logistics reported to Congress that fuel demand for operations in Iraq and Afghanistan is higher than in any war in history and that protecting large fuel convoys imposes a high burden on combat forces. The Under Secretary’s report noted that reducing fuel demand would move the department toward a more efficient force structure by enabling more combat forces to be supported by fewer logistics assets, reducing operating costs, and mitigating the budget effects caused by fuel price volatility. However, we found that DOD’s current approach to managing fuel demand at forward-deployed locations is not effective because it lacks (1) guidance directing locations to address fuel demand, (2) incentives and a viable funding mechanism to invest in fuel reduction initiatives, and (3) visibility and accountability within the chain of command for achieving fuel reduction. Until DOD addresses these shortcomings and makes fuel demand management a higher priority, DOD will face difficulty achieving its goals of reducing its reliance on petroleum-based fuel, the associated logistics burden, and fuel costs.

DOD generally lacks guidance that directs forward-deployed locations to manage and reduce fuel demand and thus cannot ensure that base commanders and their staffs will give sustained attention to this issue among their many other mission requirements. In contrast, DOD is driven to reduce energy consumption at its U.S. installations largely by federal mandates and DOD guidance. For example, the Energy Policy Act of 2005, Executive Order 13423, and the Energy Independence and Security Act of 2007 set energy reduction goals for federal buildings within the United States.
Moreover, in November 2005, the Under Secretary of Defense for Acquisition, Technology and Logistics issued an instruction that provided guidance, assigned responsibilities, and prescribed procedures for energy management at its U.S. installations. The instruction addresses topics such as ensuring sufficient funds to meet energy goals, tracking and measuring progress and energy efficiency improvement, reporting energy use and accomplishments, and training facility managers on the energy efficient operation of facilities. Among other responsibilities, it requires the heads of DOD components to develop programs that result in facilities that are designed, constructed, operated, and maintained to maximize energy efficiency. However, DOD, Central Command, and military service officials we met with could not identify similar DOD policies, directives, or other documents that specifically require attention to fuel demand management at forward-deployed locations, and we were told that it is not a key consideration. In responding to a draft of this report, DOD stated that the Army had issued an energy security implementation strategy in January 2009 that provides overarching guidance for reducing energy consumption at forward locations. Our analysis of the Army’s new strategy found that it emphasizes the need to reduce energy consumption, including at forward-deployed locations, but it does not provide specific guidance that directs forward-deployed locations to manage and reduce fuel demand. Instead, the strategy tasks offices of primary responsibility to develop and execute implementation plans that include activities to achieve the Army’s energy security goals. While some of DOD’s combatant commands and military services have developed construction standards for forward-deployed locations, our analysis showed that this existing guidance is largely silent with regard to fuel demand management and energy efficiency. 
Our review of pertinent guidance for Central Command, as well as Army guidance used by Central Command and European Command, revealed only one reference to energy efficiency—that is, semipermanent locations are to be designed and constructed with finishes, materials, and systems selected for moderate energy efficiency. According to the guidance, semipermanent construction standards will be considered for operations that are expected to last more than 2 years. Army construction guidance that Southern Command uses for its forward-deployed locations does not address fuel demand management or energy efficiency. A Southern Command official told us that the command is in the process of developing new guidance on construction standards similar to Central Command’s guidance. Pacific Command officials told us that they were unaware of guidance on constructing and maintaining forward-deployed locations within their area of responsibility.

Within Central Command, the temporary status of many forward-deployed locations has limited the emphasis on energy efficiency. Army Corps of Engineers officials said that the concern about maintaining a temporary presence, particularly in Iraq, limits the type of materials and equipment they are authorized to bring to forward-deployed locations and presents a challenge for creating energy efficiencies. The officials noted that, in practice, the Army—which has the most expertise in establishing forward-deployed locations in austere environments—does not typically use materials designed for operations lasting longer than 6 months at locations supporting current operations. Similarly, when we visited Camp Lemonier in June 2008, Navy officials told us that their camp’s “expeditionary” status hindered their ability to make construction upgrades to the camp, though we observed that Camp Lemonier had been under DOD’s control for about 6 years at that time.
In addition, the expedited nature of setting up forward-deployed locations limits emphasis on energy efficiency. For example, Army Corps of Engineers officials said that the approach to establishing forward-deployed locations in support of current military operations in Iraq and Afghanistan has been to start out with an austere set-up and build up the locations as needed. Because it is unknown how long a location might be in existence, the Army’s initial focus is on establishment, not on sustainment. In general, after the combatant commander determines a need for a forward-deployed location and the requirement is relayed to the Army, the Army moves quickly to deploy with prepackaged kits of equipment. According to the officials, energy efficiency is not a consideration at this point, and in fact, because the goal is to set up a location quickly, establishing a forward-deployed location can be energy intensive. Though the process may vary depending on the military service and the specific circumstances, including mission requirements, officials from the other military services described a similar process for how they establish forward-deployed locations. An Air Force official told us that because its service guidance does not explicitly address energy efficiency, it allows a certain amount of freedom and flexibility for the engineers, who “implicitly” incorporate energy efficiency into their planning. A Marine Corps headquarters official told us that it is difficult to address fuel demand at forward-deployed locations because, as an expeditionary force, the Marine Corps does not expect to maintain a long-term presence at forward-deployed locations. Our review of the Navy’s guidance on advanced basing also revealed no mention of fuel demand management.
A Navy official involved with equipment logistics told us that his service has not been directed to examine energy efficiency while outfitting or procuring products for forward-deployed locations and that the Navy often resides at locations operated by other military services. Similarly, we found a lack of attention to fuel demand in guidance, including an absence of fuel usage guidelines and metrics to evaluate the progress of reduction efforts, as forward-deployed locations are sustained over time. The guidance officials identified for us generally does not address fuel demand management in sustaining locations even after they have been in existence for a certain period of time. For example, while the Air Force has issued guidance on vehicle management, which includes a goal to replace 30 percent or more of all applicable light duty vehicles with more fuel-efficient, low-speed vehicles by fiscal year 2010, Air Force officials did not provide us with specific guidance on overall fuel demand management at forward-deployed locations. The general lack of military service guidance on this issue makes it difficult to ensure the continuity of fuel reduction efforts at individual locations. For example, during our visit to Camp Arifjan, an official told us that the camp’s public works department was considering efficiency ratings, when possible, during the installation of heating and cooling systems in new or upgraded facilities in Kuwait, but he was unaware of DOD guidance requiring forward-deployed locations to address fuel demand. Moreover, officials we spoke with at Camp Lemonier said that they intended to implement a Navy instruction on energy management at their location even though it only applies to non-nuclear ships, aircraft, vehicles, and shore installations. While we found that both camps we visited were pursuing efforts to reduce fuel demand, the efforts were driven largely by individual officers with short tours of duty (typically 12 months or less).
Without guidance and metrics that require forward-deployed locations to address fuel demand, DOD cannot ensure that fuel reduction actions taken at specific locations will be continued over time as personnel and mission requirements change.

In addition to construction and maintenance, the procurement of products for forward-deployed locations presents opportunities for DOD to consider making purchases that take into account fuel demand or energy efficiencies when practical. DOD’s guidance for its U.S. or permanent installations requires the selection of energy-efficient products when they are life-cycle cost effective, and the Under Secretary of Defense for Acquisition, Technology and Logistics has established the DOD Green Procurement Program, which strives to meet the requirements of federal green procurement preference programs. However, we did not find a similar emphasis on procuring energy-efficient products at DOD’s forward-deployed locations. Moreover, while the Energy Policy Act of 2005 requires federal agencies to procure Energy Star products or Federal Energy Management Program-designated products, except in cases when they are not life-cycle cost effective or reasonably available to meet agency requirements, the law does not apply to any energy-consuming product or system designed or procured for combat or combat-related missions. Officials from each of the military services indicated that they were unaware of efforts to procure energy-efficient products for forward-deployed locations. Instead, they told us that other factors, such as mission requirements, availability from local economies, or cost played a larger role in procurement decisions. Moreover, military services often gather available equipment for forward-deployed locations from prepackaged sets, units, or the region, with little attention to energy efficiency.
For example, Army Corps of Engineers officials told us that the Army often deploys with disparate generators that may be old and energy inefficient. While DOD guidance requires components to obtain approval to procure nonstandard generators, an official from DOD’s Project Manager-Mobile Electric Power office admitted that it is difficult to enforce the requirement at forward-deployed locations. He also noted that the military services’ lists of required equipment may not contain enough power generation and distribution equipment to support current operations. Thus, we found that the military services turn to a variety of sources to find enough generators and other products to meet mission requirements, often without regard to energy efficiency. There are some difficulties with procuring energy-efficient products for forward-deployed locations. From the Defense Logistics Agency’s perspective, for example, these include limited product availability, the logistics associated with transport to remote locations, and the compatibility of products with locally available energy sources; in the case of a solar-powered system, for instance, they also include the availability of on-site technical expertise to install such a system. However, given DOD’s high fuel demand for base support activities at its forward-deployed locations, without guidance in place to incorporate energy efficiency considerations into procurement decisions when practical, DOD may be missing opportunities to make significant reductions in demand without affecting operational capabilities.

In a separate effort, the Joint Staff is in the process of developing common living standards (referred to as “joint standards of life support”) for military servicemembers at forward-deployed locations, which could provide another opportunity to make decisions that take into account fuel demand considerations.
Joint Staff officials said that the effort is intended to create a consistent level of habitability for servicemembers through the establishment of standard square footage requirements for living space, duration of showers, and so forth that would be applied at forward-deployed locations after 45 days of establishment. The officials described the effort as a long-term initiative intended to inform acquisitions. Once initial standards are approved by the department, a DOD memorandum would require the acquisition of common items, such as military tents, based on the standards. At the time we completed our audit work, the Joint Staff had proposed an initial set of six standards—pertaining to field billeting, showers, laundry facilities, latrines, ice, and feeding—and had requested that a Senior Warfighting Forum be convened within DOD to review the initial standards. While officials told us that the Joint Staff has not included fuel demand considerations to date, we found that the types of standards the Joint Staff is developing have implications for fuel demand. For example, the duration of showers relates to how much fuel is required for hot water heaters. Moreover, the officials said that there may be opportunities in the future to develop standards for items such as military tents, generators, and other equipment used at forward-deployed locations. Thus, the effort provides an opportunity to integrate fuel demand considerations that could lead to long-term, departmentwide energy efficiencies.

DOD has not established incentives or a viable funding mechanism for fuel reduction projects at its forward-deployed locations, leaving commanders with little reason to make fuel demand management a priority. DOD does not provide incentives to commanders to encourage fuel demand reduction at forward-deployed locations. By contrast, DOD emphasizes and encourages energy reduction efforts at its U.S. installations.
A November 2005 instruction issued by the Under Secretary of Defense for Acquisition, Technology and Logistics requires the heads of DOD components to develop internal energy awareness programs to publicize energy conservation goals, disseminate information on energy matters and energy conservation techniques, emphasize energy conservation at all command levels and relate energy conservation to operational readiness, and promote energy efficiency awards and recognition through the use of incentives. The instruction also requires training and education for achieving and sustaining energy-efficient operations at the installation level through venues such as technical courses, seminars, conferences, software, videos, and certifications. The Navy has also established an energy conservation program, which has an award component, to encourage ships to reduce energy consumption. Awards are given quarterly to ships that use less than the Navy’s established baseline amount of fuel, and fuel savings achieved during the quarter are reallocated to ships for the purchase of items such as paint, coveralls, and firefighting gear. We previously reported that the ship energy conservation program receives $4 million in funding annually, and Navy officials told us that they achieved $124.6 million in cost avoidance in fiscal year 2006. They said that some other benefits of the ship energy conservation program include more available steaming hours, additional training for ships, improved ship performance, reduced ship maintenance, and conservation of resources. However, neither the Navy nor the other military services have established similar incentive programs for their forward-deployed locations. Instead, officials throughout the department consistently said that the amount of fuel forward-deployed locations consumed was related to mission requirements. We found that the lack of incentives tends to discourage commanders from pursuing projects that could reduce fuel demand. 
During our visit to Camp Lemonier, for example, we noted that officials had identified several projects that could reduce the camp’s fuel usage, including a proposal to right-size the air conditioning units in living quarters by replacing the current 2-ton units (24,000 BTUs) with 1-ton units and applying foam insulation to the rooftops of several buildings. However, camp officials questioned whether the upfront costs of these projects made them worth undertaking because there was no apparent “return on investment” for the camp, which would not see the associated savings to invest in other camp projects. Similarly, while officials at Camp Arifjan provided several examples of projects that could increase fuel efficiency at Kuwait locations, without incentives to pursue these projects, it is unclear whether they would take priority over other initiatives. While DOD is driven to reduce energy consumption at its U.S. installations largely by federal mandates and DOD guidance, encouraging fuel demand reduction at forward-deployed locations will likely require a culture change for DOD. According to OSD and DOD Power Surety Task Force officials, the department has viewed fuel as a commodity necessary to meet its mission requirements because, historically, fuel has been inexpensive and free flowing for the department. However, from the perspective of the Power Surety Task Force, if commanders could reallocate funds saved through fuel reduction to other initiatives, DOD could be in a position to significantly reduce fuel demand at those locations. Given DOD’s view of the Global War on Terrorism as a “longer war,” forward-deployed locations such as Camp Lemonier, Camp Arifjan, and others could remain in existence for the foreseeable future, and future conflicts will likely require the department to establish new locations.
DOD recognizes the risks associated with its heavy fuel burden, but without incentives, commanding officials at DOD’s forward-deployed locations are unlikely to identify fuel reduction as a priority in which to invest their resources. DOD also has not developed a viable funding mechanism for fuel reduction projects at its forward-deployed locations. This makes it difficult for commanders to pursue projects that would reduce fuel demand, even though such projects could lower costs and, in some cases, risks associated with fuel delivery. A lack of a viable funding mechanism is an obstacle for locations supporting current operations, which are largely dependent on supplemental congressional appropriations. Since September 2001, a large portion of funding for military operations in support of the Global War on Terrorism has come through supplemental appropriations, which are requested by the department and approved by Congress separately from DOD’s annual appropriation. At the time of our visit, Camp Lemonier relied completely on supplemental appropriations for its base support activities, and officials told us delays in receiving these funds presented challenges in covering existing costs, making it particularly difficult to pursue more expensive fuel demand reduction projects. Camp Arifjan was also heavily reliant on supplemental appropriations associated with the Global War on Terrorism. However, because about 40 percent of the camp was funded through a defense cooperative agreement with Kuwait, it also depended on the host country for resources. We have previously reported that past DOD emergency funding requests have generally been used to support the initial or unexpected costs of contingency operations. Once a limited and partial projection of costs could be made, past administrations have generally requested further funding in DOD’s base budget requests. 
We have encouraged DOD to include known or likely projected costs of ongoing operations related to the Global War on Terrorism in DOD’s base budget requests. However, current administration policy is that the costs of ongoing military operations in support of the Global War on Terrorism, such as Operation Enduring Freedom and Operation Iraqi Freedom, should be requested as emergency funding. A senior Air Force official noted that, from his perspective, forward-deployed locations dependent on this type of emergency funding do not have to worry about reducing energy costs as DOD’s permanent installations do because the commanders of forward-deployed locations know they will receive supplemental appropriations to cover their costs.

Our discussions with Army and DOD Power Surety Task Force officials about construction and maintenance of forward-deployed locations revealed that, from their perspectives, other funding restrictions also pose challenges in addressing fuel demand. DOD is appropriated funds for certain activities such as operation and maintenance, military construction, and other procurement. Operation and maintenance funds are used for minor construction spending, and such projects are limited to $750,000 or less. In using operation and maintenance funds, the department and military services are also restricted by law from purchasing any investment item that has a unit cost greater than $250,000. The officials told us that, from their perspective, this restriction can result in energy inefficiencies. For example, the Army typically deploys with several smaller, less expensive, and energy-inefficient generators. The officials said that, ideally, the Army would like to deploy with a larger, energy-efficient generator that exceeds the funding limit but could produce savings over the long term. The military services can seek approval for projects in excess of the limit, but such projects compete with other priorities.
An official with DOD’s Power Surety Task Force told us that officials at Camp Victory in Iraq recently requested funding to consolidate generators, which would result in greater fuel efficiency, but were denied due to other priorities. The department manages an Energy Conservation Investment Program that provides congressionally appropriated military construction funds for projects that save energy or reduce defense energy costs at its existing installations but has no similar program specifically for forward-deployed locations. Through the program, the military services and defense agencies may submit projects for funding consideration based on a 10-year or less savings payback. Funds accrued through project savings may be used on projects that have experienced cost growth, for the design of energy conservation investment program projects, to supplement the funding of future or prior-year program projects, or for additional program projects. In fiscal year 2007, the Energy Conservation Investment Program provided over $54 million for 48 projects. Projects at all but five locations—Naval Support Activity Souda Bay, Greece; Kadena Air Base, Japan; Fort Buchanan, Puerto Rico; Ramstein Air Base, Germany; and a Defense Commissary project in Guam—were located within the United States.

DOD also uses energy savings performance contracts (ESPC) at several of its U.S. installations. Under an ESPC, DOD enters into a long-term contract (up to 25 years) with a private energy services company whereby the company makes energy-efficiency improvements financed from private funds. DOD then repays the company over a specified period of time until the improvements have been completely paid off. We previously reported that DOD had undertaken 153 ESPCs to finance about $1.8 billion in costs at about 100 military installations from fiscal years 1999 through 2003.
Moreover, the Army Corps of Engineers reported that from 1998 to March 2008, its Huntsville Center had awarded ESPC contracts that resulted in $420 million in contractor-financed infrastructure improvements on Army installations and a total projected energy cost savings to the government of $100 million. At the time of our review, the DOD Power Surety Task Force was investigating the feasibility of establishing an energy dividend reinvestment program to fund DOD energy projects across the department. According to task force officials, the program would be structured similarly to an ESPC, whereby an installation commander or program manager could submit a project for funding consideration. If, after analysis and review, funding was provided to pursue the project, the installation or program would then repay the program using savings achieved by the resulting energy efficiencies. While an initial briefing prepared for DOD’s Energy Security Task Force and other department stakeholders noted that all energy projects within DOD could be eligible under the program, including those at forward-deployed locations, the officials told us that this is unlikely because, like ESPCs, the program would rely on long-term contracts. DOD’s forward-deployed locations might not be in existence for long periods of time, and therefore, the program might not be able to recoup savings for projects funded at these locations. Moreover, the officials expressed concern that if an installation or program incurred higher energy costs than anticipated, it might not achieve its projected savings and might not be in a position to repay DOD. DOD’s Power Surety Task Force found that the source of funding for large fuel demand reduction projects, such as foaming tents at forward-deployed locations in Afghanistan, has been a challenge for the department, noting that energy efficiency does not fit neatly into the military services’ budget processes.
While the military services’ budget processes allow them to budget for operation and maintenance costs, research and development efforts, and so forth, the processes prevent DOD from making quick, upfront investments in energy-efficiency projects. Conversely, the officials told us that DOD’s budget process in effect discourages commanders from generating savings by reducing their future budgets—a limitation also cited by officials during our visit to Camp Lemonier. In 2003, our work highlighted a similar funding problem concerning corrosion mitigation projects. We found that DOD and the military services gave corrosion mitigation projects, whose benefits may not be apparent for many years, a lower priority than other requirements that showed immediate results. In response to a subsequent Senate Armed Services Committee report, DOD established a specific, separate budget line for corrosion prevention activities to help ensure that sustained and adequate funding is available for the corrosion control projects that have the best potential to provide maximum benefit across the department. This could serve as one model for the department to consider in determining how best to fund fuel reduction projects at forward-deployed locations. Without establishing a viable funding mechanism for these projects, DOD is not well positioned to achieve fuel savings at its forward-deployed locations.

While DOD and the military services have efforts under way to reduce fuel demand at forward-deployed locations, DOD’s current organizational framework does not provide the department visibility or accountability over fuel demand issues at its forward-deployed locations. We found that fuel reduction efforts are not consistently shared among locations, military services, or across the department and that there is no one office or official specifically responsible for fuel demand management at forward-deployed locations.
Officials we spoke with from each of the military services told us that fuel demand reduction practices at forward-deployed locations were not consistently shared. For example, Army Corps of Engineers officials said that no formal system is in place specifically designed to share fuel demand reduction practices; informal conversations occur, though on an ad hoc basis. They acknowledged that forward-deployed locations often pursue different initiatives and that the department, other military services, or other Army forward-deployed locations are often unaware of these initiatives. Air Force officials also said that their service does not have visibility over fuel demand reduction practices that may occur at forward-deployed locations, noting that with joint operations and Air Force forces embedded with the Army, fuel consumption is not systematically recorded. Officials from the Navy and Marine Corps were also unable to provide examples where fuel demand reduction practices were shared across locations. Moreover, while DOD guidance sets forth principles and establishes doctrine for bulk petroleum and water in support of military operations, it does not designate any DOD office or official as being responsible for fuel demand management at forward-deployed locations. As table 1 shows, several different offices have responsibility for petroleum management, but none is specifically accountable for fuel demand management at forward-deployed locations. In addition, we could not identify anyone who is specifically accountable for fuel demand management through our interviews with various DOD and military service offices. While the DOD Power Surety Task Force has been serving as a liaison on energy issues between the combatant commands and military services, its temporary status and resources limit its effectiveness.
Moreover, because the Power Surety Task Force staff is made up of contractors, OSD recognizes that the Power Surety Task Force cannot represent the department. Defense Energy Support Center officials told us that DOD needs to create an energy office to oversee fuel demand reduction efforts and develop policy for the department. The Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 requires DOD to establish a Director of Operational Energy Plans and Programs, an operational energy strategy for DOD, and military department-level energy officials. The military departments have established senior energy officials, but DOD has not yet established a director or strategy for operational energy. In meeting these requirements, DOD has an opportunity to improve visibility and accountability by assigning responsibility for, and emphasizing, fuel demand management at forward-deployed locations at both the department and military service levels. An OSD official involved with the DOD Energy Security Task Force noted that operational energy options should be a high priority for the new director. Without establishing visibility and accountability over fuel demand management at forward-deployed locations, DOD is not well positioned to address the shortcomings we have identified in this report—including the lack of fuel reduction guidance, incentives, and a viable funding mechanism for initiatives to decrease demand. Thus, DOD cannot be assured that good fuel reduction practices are identified, shared, prioritized, resourced, implemented, and institutionalized across locations in order to reduce the costs and risks associated with high fuel demand.

DOD faces high costs, operational vulnerabilities, and logistical burdens in sustaining forward-deployed locations that depend heavily on fuel-based generators.
Moreover, current operations have resulted in DOD maintaining some forward-deployed locations for longer than initially anticipated and generally without regard to fuel demand. While DOD’s future operations may be unknown, the department’s goals to reduce its reliance on petroleum-based fuel and minimize its logistics footprint, coupled with its reexamination of its overseas posture to better respond to the changing threat environment, underscore the importance of increasing attention on fuel demand management at overseas locations where U.S. forces are stationed. Although base commanders must place their highest priority on meeting mission requirements and it may not be practical for DOD to decrease fuel usage at every forward-deployed location, particularly at those that might not be in existence for very long, fuel demand is likely to remain high until the department gives systematic consideration to incorporating fuel demand in construction, maintenance, procurement, and other policy decisions for forward-deployed locations. In addition, the department will not be in a position to effectively identify, share, prioritize, resource, implement, or institutionalize good fuel reduction practices across locations that may exist for longer periods of time. By placing a higher priority on fuel reduction at forward-deployed locations and developing a comprehensive and coordinated approach to managing fuel demand, one that includes specific guidelines, incentives, a viable funding mechanism, visibility, and accountability, DOD would be more likely to achieve its goals of reducing its reliance on petroleum-based fuel, the vulnerabilities and logistics burden associated with transporting large amounts of fuel to forward-deployed locations, and operational costs.
To establish an effective approach to managing fuel demand that would facilitate the widespread implementation of fuel reduction initiatives and sustained attention to fuel demand issues at its forward-deployed locations, we recommend that the Secretary of Defense take the following five actions.

1. Direct the combatant commanders, in consultation with their military service component commands, to establish requirements for managing fuel demand at forward-deployed locations within their areas of responsibility and provide specific guidelines as appropriate. Officials may wish to consider identifying a triggering mechanism in the guidance, such as a specific length of time after a location is established, when fuel demand management should become a consideration in forward-deployed location sustainability. In establishing requirements, the combatant commanders should coordinate their efforts with the new DOD Director of Operational Energy Plans and Programs to ensure departmentwide communication and consistency, where appropriate.

2. Direct the Secretaries of the Army, the Navy, and the Air Force and the Commandant of the Marine Corps to develop guidance that implements combatant command requirements for managing fuel demand at forward-deployed locations. The guidance should include specific guidelines that address energy-efficiency considerations in base construction, maintenance, procurement, and policies regarding fuel usage at a location. In establishing guidance, the military services should coordinate their efforts with the new DOD Director of Operational Energy Plans and Programs to ensure departmentwide communication and consistency, where appropriate.

3. Direct the Chairman, Joint Chiefs of Staff, to require that fuel demand considerations be incorporated into the Joint Staff’s initiative to develop joint standards of life support at DOD’s forward-deployed locations.

4. Designate the new, congressionally mandated DOD Director of Operational Energy Plans and Programs as the department’s lead proponent of fuel demand management at forward-deployed locations, and through this designation, require that the director develop action plans as part of the congressionally mandated DOD energy strategy. Specifically, the strategy should incorporate the department’s action plans for
- facilitating departmentwide communication and consistency, when appropriate, in the development or revision of combatant command and military service guidance that establishes requirements and provides guidelines for managing fuel demand at forward-deployed locations;
- establishing incentives for commanders of forward-deployed locations to promote fuel demand reduction at their locations, as well as identifying a viable funding mechanism for the department and commanders of forward-deployed locations to pursue fuel reduction initiatives;
- establishing visibility over fuel demand management at forward-deployed locations, including plans for sharing good fuel reduction practices and solutions to identified challenges; and
- establishing accountability for fuel demand management at appropriate levels across the department.

5. Direct the Departments of the Army, Navy, and Air Force to assign their senior energy officials, among their other duties, responsibility for overseeing fuel demand management at forward-deployed locations operated by their military department components. In carrying out this responsibility, the officials should identify and promote sharing of good fuel reduction practices and solutions to identified fuel demand challenges at their component’s forward-deployed locations and communicate those practices and solutions to the DOD Director of Operational Energy Plans and Programs for potential use across the department.

In its written comments on a draft of this report, DOD generally concurred with all of our recommendations.
Technical comments were provided separately and incorporated as appropriate. The department’s written comments are reprinted in appendix IV. In response to our recommendation that DOD direct the combatant commanders to establish requirements for managing fuel demand at forward-deployed locations in coordination with the new DOD Director of Operational Energy Plans and Programs, DOD partially concurred, stating that it believes the combatant commanders must be the decision authorities for when reduction efforts should begin to be tracked and what conservation measures are employed, in order to avoid distraction from tactical operations. While we agree that the combatant commanders should be responsible for establishing requirements for managing fuel demand at their forward-deployed locations, it is important that this effort be coordinated with the new DOD director of operational energy as well as with the service component commands. Our report recommends that DOD designate the new director of operational energy as the lead proponent of fuel demand management at forward-deployed locations, and through this designation, facilitate departmentwide communication and consistency of requirements and guidelines for managing fuel demand, as well as establish visibility and accountability for fuel demand management. DOD generally concurred with our recommendations pertaining to the new director’s responsibilities. In order to effectively carry out these responsibilities, attain visibility over fuel demand issues across the department, and serve as the DOD official accountable for such issues, the director of operational energy should be consulted by the combatant commanders in establishing fuel demand management requirements to ensure departmentwide communication and consistency occurs where appropriate. 
DOD concurred with our recommendation that the secretaries of the military services develop guidance that implements the combatant command requirements for managing fuel demand and include specific guidelines that address energy-efficiency considerations. In its response, the department stated that guidelines on policy will be general in nature and allow combatant commands flexibility. While we believe that forward-deployed locations within different regions could require different guidelines, our audit work revealed that current Central Command guidance, as well as Army guidance used in Central Command and European Command, contains only a general reference to energy efficiency (that semipermanent locations are to be designed and constructed with finishes, materials, and systems selected for moderate energy efficiency) and that this guidance is not effective in implementing fuel demand considerations at forward-deployed locations. Our report concludes that fuel demand is likely to remain high until DOD gives systematic consideration to incorporating fuel demand management into construction, maintenance, procurement, and other policy decisions for forward-deployed locations. Therefore, we continue to believe that the military service guidelines on fuel demand management should provide enough specificity to appropriately address these issues so that DOD can achieve its goals of reducing its fuel demand, logistics burden, and operational costs. As noted, DOD generally concurred with our recommendations on the responsibilities of the new director of operational energy. However, regarding the need to establish a viable funding mechanism for fuel reduction projects at forward-deployed locations, the department stated that it is not convinced that financial incentives represent the best fuel reduction strategy for forward-deployed locations.
We recognize that DOD has various options for providing incentives to commanders at forward-deployed locations to reduce fuel demand but continue to believe that, based on our audit work, the availability of funding for such projects is a concern that needs to be addressed. DOD concurred with our other recommendations that the Joint Staff incorporate fuel demand considerations into its initiative to develop joint standards for life support at DOD’s forward-deployed locations and that the military department senior operational energy officials be assigned responsibility for oversight of fuel demand management at forward-deployed locations operated by their military service component commands. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of Defense; the Deputy Secretary of Defense; the Chairman of the Joint Chiefs of Staff; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Secretaries of the Army, Navy, and Air Force; and the Commandant of the Marine Corps. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to the report are listed in appendix V. This appendix illustrates the Department of Defense’s (DOD) fuel demand at selected forward-deployed locations for a 1-month period during 2008.
We obtained and analyzed fuel receipts and consumption information, by fuel type (jet fuel, diesel, and mobility gasoline), for each day in June 2008 from five forward-deployed locations that were in Central Command’s area of responsibility and heavily reliant on fuel-consuming generators for power. The locations, which were selected in consultation with Central Command officials, were Camp Lemonier, Djibouti; Qayyarah West (Q-West) Air Base, Iraq; Camp Arifjan, Kuwait; Contingency Operating Base (COB) Adder, Iraq; and Bagram Air Field, Afghanistan. The information presents a snapshot in time and cannot be generalized to other time periods or forward-deployed locations. One limitation of the data involves how locations classified fuel consumed for either base support activities (defined as power, heating/cooling, facilities, and communications) or air and ground operations (the latter defined as vehicles). Although we provided the locations with examples of base support activities and air and ground operations, these categories can encompass a wide range of interrelated or overlapping activities. Therefore, we deferred to the discretion of location officials in how they classified their fuel use activities. Another limitation involves how the data were collected. Data collection procedures and systems varied by military service component and location; however, we found that each location used a quality assurance process to ensure that the data were accurate and complete. Therefore, we concluded that the data were sufficiently reliable for descriptive purposes. For more information on our scope and methodology, see appendix II. The locations we reviewed reported consuming a total of approximately 11.67 million gallons during June 2008 for base support activities, including power for heating/cooling units, machinery, and lighting; and for air and ground operations, including aircraft, armored vehicles, and other forms of transport.
Table 2 and figure 7 summarize these fuel consumption data. As shown in table 2, of the overall amount of fuel consumed by the five locations during this 1-month period, more than 4 million gallons (35 percent) were consumed for base support activities. In comparison, the same amount of fuel could be used to fill 71 Boeing 747 jet airliners. Base support activities accounted for over 70 percent of total fuel consumption for three of the locations in our review—Q-West Air Base, Camp Arifjan, and COB Adder. COB Adder consumed the largest amount of fuel for base support activities in June at 1.17 million gallons. Bagram Air Field reported that 13 percent of its fuel consumption in this month was for base support activities—a proportion that was lower than that of the other bases. However, the 6.16 million gallons that Bagram Air Field consumed for air and ground operations in June was more than four times the amount of fuel consumed for air and ground operations by the other four locations combined. Figures 8 through 12 provide a brief description of the mission, power structure, and June 2008 fuel demand for each of the five forward-deployed locations. Each profile also includes a chart showing the proportion of total fuel consumed for base support activities and air and ground operations. We conducted our work at the Office of the Secretary of Defense (OSD); the Joint Staff; the headquarters and select components of the Army, Air Force, Navy, and Marine Corps; the Defense Logistics Agency, including the Defense Energy Support Center; and the Power Surety Task Force. To review DOD efforts to reduce fuel demand at forward-deployed locations, we reviewed DOD component documents describing efforts and met with DOD and military service officials to identify and discuss the intent, scope, and status of these efforts.
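As a rough consistency check, the June 2008 consumption figures above can be reproduced with simple arithmetic. The gallon totals and percentage shares come from the report text; the Boeing 747 fuel capacity used below (about 57,285 U.S. gallons, a commonly cited 747-400 figure) is our own assumption and does not appear in the report.

```python
# Back-of-the-envelope check of the June 2008 fuel figures cited above.
# Totals and shares are from the report text; the 747 fuel capacity
# (~57,285 US gal, a 747-400 figure) is an outside assumption.

total_gal = 11.67e6               # all five locations, June 2008
base_support = total_gal * 0.35   # reported 35 percent share
print(f"base support: {base_support / 1e6:.2f}M gal")  # ~4.08M ("more than 4 million")

b747_capacity_gal = 57_285        # assumed 747-400 fuel capacity
print(f"747 equivalents: {round(base_support / b747_capacity_gal)}")  # ~71

# Bagram's air/ground consumption vs. the other four locations combined
bagram_air_ground = 6.16e6
others_air_ground = (total_gal - base_support) - bagram_air_ground
print(f"ratio: {bagram_air_ground / others_air_ground:.1f}x")  # more than 4x
```

Under the assumed 747 capacity, the implied base support total (about 4.08 million gallons) fills roughly 71 aircraft, matching the comparison in the text.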
Our review focused on forward-deployed locations—installations or base camps situated outside of the United States that support current operations—that rely primarily on fuel-based generators, as opposed to local power grids. We supplemented our analysis with visits to Camp Arifjan, Kuwait, and Camp Lemonier, Djibouti, where we observed efforts made at the locations and discussed them with cognizant officials. After consultation with Central Command officials, we selected and visited these two forward-deployed locations to gain a firsthand understanding of fuel demand issues at these locations. We chose to visit these locations because servicemembers at each location relied heavily on fuel-based generators, as opposed to local power grids, to carry out very different missions—the former directly supported operations in Iraq while the latter provided diplomatic, development, and counterterrorism support within the Horn of Africa. We also chose these locations because officials told us that the camps were pursuing fuel demand reduction efforts; for example, Camp Lemonier had applied foam insulation to a facility to reduce fuel demand. We treated these two locations as illustrative case studies in our report, and information obtained from these locations is not generalizable to other forward-deployed locations. To review DOD’s approach to managing fuel demand at forward-deployed locations, we analyzed department documents and held discussions with DOD and military service officials to gain their perspectives on issues including forward-deployed location construction and maintenance; procurement; funding procedures; and applicable DOD guidance and laws related to energy reduction, procurement, and military construction.
To provide context for understanding the challenges associated with managing fuel demand at forward-deployed locations, we obtained information on fuel distribution and delivery processes and challenges in Iraq, Afghanistan, and for the two forward-deployed locations we visited. For comparison purposes, we reviewed policies and programs related to energy awareness and reduction for DOD’s permanent or U.S. facilities. In identifying opportunities for DOD to increase visibility and accountability of fuel demand management at its forward-deployed locations, we reviewed sections of the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 that require DOD to, among other things, establish a Director of Operational Energy Plans and Programs and an operational energy strategy and that require the secretaries of the military departments to designate senior officials for operational energy. To provide information on DOD’s fuel demand at selected forward-deployed locations (app. I), we asked Central Command officials to identify and collect fuel receipts and consumption data from June 1 through 30, 2008, at several forward-deployed locations in Iraq and in Afghanistan that rely heavily on generators, as opposed to commercial power supplied by the host country. The data collected only represent a snapshot in time of fuel demand at selected locations and cannot be generalized to other time periods or other forward-deployed locations. In total, Central Command collected fuel receipts and consumption data for us on two locations in Iraq (Q-West and Contingency Operating Base Adder) and one location in Afghanistan (Bagram Air Field). In addition, we collected fuel receipts and consumption data for the same time period at Camp Arifjan, Kuwait, and Camp Lemonier, Djibouti, the two locations from our case-study analysis. The missions of these locations ranged from providing logistics support to U.S. forces to supporting development and diplomacy within the region.
Central Command officials told us that different military services and locations have different methods for collecting and reporting data. Therefore, to collect data that were as consistent as possible across the various locations, we agreed that Central Command would develop a standard data collection spreadsheet for the locations to record the following information by day in June 2008: the quantity of fuel in gallons received by fuel type (JP8 jet fuel, diesel, or mobility gasoline); the quantity of fuel in gallons consumed for base support (defined as power, heating/cooling, facilities, or communications); the quantity of fuel in gallons consumed for air mobility; the quantity of fuel in gallons consumed for ground mobility (vehicles); and the largest consumer of base support fuel (for example, heating/cooling) and the largest consumer of ground mobility fuel by day (for example, Mine Resistant Ambush Protected vehicle). This spreadsheet was used to collect fuel receipt and consumption data from all five of the locations we reviewed. We agreed with Central Command officials to use this spreadsheet to increase the likelihood that the locations would categorize fuel consumed for base support activities and ground and air operations similarly; however, some of the locations categorized fuel used for aerospace ground equipment differently. In an attempt to reconcile these differences, we subsequently requested that officials provide us separate data pertaining to aerospace ground equipment, but officials stated that the data were not collected in a way that would enable them to do so. Therefore, in appendix I we have noted this difference in the data illustrating fuel used by the locations for base support and ground and air operations.
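For illustration only, the fields captured by the standard daily spreadsheet described above could be modeled as a simple record. The field names and sample values below are our own shorthand, not an actual DOD artifact; only the categories recorded come from the report.

```python
# Hypothetical sketch of one daily row in the standard data collection
# spreadsheet described above; field names and values are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class DailyFuelRecord:
    day: date
    received_gal: dict[str, float]  # gallons received, by fuel type
    base_support_gal: float         # power, heating/cooling, facilities, communications
    air_mobility_gal: float
    ground_mobility_gal: float      # vehicles
    top_base_support_user: str      # e.g., "heating/cooling"
    top_ground_user: str            # e.g., "MRAP vehicle"

row = DailyFuelRecord(
    day=date(2008, 6, 1),
    received_gal={"JP8": 30_000.0, "diesel": 12_000.0, "mogas": 1_500.0},
    base_support_gal=25_000.0,
    air_mobility_gal=8_000.0,
    ground_mobility_gal=4_500.0,
    top_base_support_user="heating/cooling",
    top_ground_user="MRAP vehicle",
)
consumed = row.base_support_gal + row.air_mobility_gal + row.ground_mobility_gal
print(f"total consumed on {row.day}: {consumed:,.0f} gal")
```

A uniform record like this is what lets consumption be compared across locations; the report notes that even with the shared template, locations still classified some items (such as aerospace ground equipment) differently.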
To determine whether the data were reliable and valid, we sent follow-up questionnaires to each of the locations reviewed, asking how the locations recorded and maintained the data provided to us and what quality assurance process they used to ensure that the data were accurate and complete. Although data collection procedures and systems varied by military service component and location, we found that the data underwent a quality review. Therefore, we concluded that the data were sufficiently reliable for descriptive purposes. We conducted our review from March 2008 through February 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides an overview of selected initiatives of the DOD Power Surety Task Force that are aimed at reducing fuel demand at forward-deployed locations. It also provides information on the status of each initiative at the time we conducted our review. In addition to the contact named above, Thomas Gosling, Assistant Director; Karyn Angulo; Alissa Czyz; Gilbert Kim; James Lawson; Marie Mak; and Ryan Olden made major contributions to this report.
The Department of Defense (DOD) relies heavily on petroleum-based fuel to sustain its forward-deployed locations--particularly those that are not connected to local power grids. While weapon platforms require large amounts of fuel, DOD reports that the single largest battlefield fuel consumer is generators, which provide power for base support activities such as cooling, heating, and lighting. Transporting fuel to forward-deployed locations presents an enormous logistics burden and risk, including exposing fuel truck convoys to attack. GAO was asked to address DOD's (1) efforts to reduce fuel demand at forward-deployed locations and (2) approach to managing fuel demand at these locations. This review focused on locations within Central Command's area of responsibility. GAO visited DOD locations in Kuwait and Djibouti to learn about fuel reduction efforts and challenges facing these locations. DOD components have some efforts under way or planned to reduce fuel demand at forward-deployed locations. Many of these efforts are in a research and development phase, and the extent to which they will be fielded and under what time frame is uncertain. Notable efforts include the application of foam insulation to tent structures, the development of more fuel-efficient generators and environmental control units, and research on alternative and renewable energy sources for potential use at forward-deployed locations. In addition, during visits to Kuwait and Djibouti, GAO met with officials about local camp efforts aimed at reducing fuel demand. DOD lacks an effective approach for implementing fuel reduction initiatives and maintaining sustained attention to fuel demand management at its forward-deployed locations. 
Moreover, DOD faces difficulty achieving its goals to reduce dependence on petroleum-based fuel and its logistics "footprint," as well as operating costs associated with high fuel usage, because managing fuel demand at forward-deployed locations has not been a departmental priority and fuel reduction efforts have not been well coordinated or comprehensive. GAO found that DOD's current approach to managing fuel demand lacks (1) guidance directing locations to address fuel demand, (2) incentives and a viable funding mechanism to invest in fuel reduction projects, and (3) visibility and accountability for achieving fuel reduction. Although it may not be practical for DOD to decrease fuel usage at every forward-deployed location and base commanders must place their highest priority on meeting mission requirements, fuel demand is likely to remain high until the department gives systematic consideration to incorporating fuel demand in construction, maintenance, procurement, and other policy decisions for forward-deployed locations. The 2009 defense authorization act requires DOD to establish a director of operational energy and an energy strategy, providing the department with an opportunity to increase attention on improving fuel demand management.
Federal agencies’ contracts with private businesses, whether made in the normal course of agency operations or specifically related to a natural disaster declaration, are subject to certain goals intended to increase participation by various types of small businesses. The Small Business Act, as amended, defines a small business generally as one that is “independently owned and operated and that is not dominant in its field of operation.” In addition, a business must meet the size standards published by SBA to be considered “small.” The act sets a governmentwide goal for small business participation of not less than 23 percent of the total value of all prime contract awards—contracts that are awarded directly by an agency—for each fiscal year. The Small Business Act also sets annual prime contract dollar goals for participation by specific types of small businesses: small disadvantaged businesses (5 percent); women-owned and service-disabled veteran-owned businesses (5 and 3 percent, respectively); and businesses located in historically underutilized business zones (HUBZones) (3 percent). In August 2007, SBA issued its fiscal year 2006 Goaling Report. The Goaling Report includes data on the extent to which federal agencies met their goals for awarding contracts to various types of small businesses. According to this report, federal agencies awarded 22.8 percent of their prime contracting dollars to small businesses, just short of the 23 percent statutory goal. In addition, while federal agencies collectively exceeded the goals for awarding prime contracting dollars to small disadvantaged businesses, they did not meet the goals for awarding prime contracting dollars to women-owned, HUBZone, or service-disabled veteran-owned businesses. Of the agencies we reviewed in our March 2007 report, all exceeded their agency-specific goals for awarding prime contracting dollars to small disadvantaged businesses, a subset of which are Section 8(a) firms.
Generally, in order to be certified under SBA’s 8(a) program, a firm must satisfy SBA’s applicable size standards, be owned and controlled by one or more socially and economically disadvantaged individuals who are citizens of the United States, and demonstrate potential for success. Black Americans, Hispanic Americans, Native Americans, and Asian Pacific Americans are presumptively socially disadvantaged for purposes of eligibility. The personal net worth of an individual claiming economic disadvantage must be less than $250,000 at the time of initial eligibility and less than $750,000 thereafter. The general rules governing procurement are set out in federal procurement statutes and in the Federal Acquisition Regulation (FAR). Among other things, these rules require that any business receiving a prime contract for more than the simplified acquisition threshold must agree to give small business the “maximum practicable opportunity” to participate in the contract. Additionally, for contracts (or modifications to contracts) that (1) are individually expected to exceed $550,000 ($1 million for construction contracts) and (2) have subcontracting possibilities, the prime contractor generally must have in place a subcontracting plan. This plan must identify the types of work the prime contractor believes it is likely to award as subcontracts as well as the percentage of subcontracting dollars it expects to direct to the specific categories of small businesses for which the Small Business Act sets specific goals. When they award contracts, federal agencies collect and store procurement data in their own internal systems—typically called contract writing systems. 
The FAR requires federal agencies to report the information about procurements directly to the Federal Procurement Data System–Next Generation (FPDS-NG), GSA’s governmentwide contracting database, which collects, processes, and disseminates official statistical data on all federal contracting activities of more than $3,000. Congress has enacted several laws designed to foster small business participation in federal procurement. One of these laws, Public Law 95-507, enacted in 1978, amended section 15 of the Small Business Act (15 U.S.C. § 644) to require that all federal agencies with procurement authority establish an Office of Small and Disadvantaged Business Utilization. This office is responsible for helping oversee the agency’s functions and duties related to the awarding of contracts and subcontracts to small and disadvantaged businesses. Finally, the Stafford Act sets forth requirements for the federal response to presidentially declared disasters. It requires federal agencies to give contracting preferences, to the extent feasible and practicable, to organizations, firms, and individuals residing or doing business primarily in the area affected by a major disaster or emergency. Our March 2007 report identified the extent to which DHS, GSA, DOD, and the Corps awarded contracts directly to small businesses; the extent to which different types of small businesses received contracts; and the extent to which small businesses located in Alabama, Mississippi, and Louisiana received contracts for Katrina-related projects. Our report also noted that information on small business subcontracting plans was not consistently available for the four agencies. We found that small businesses received 28 percent of the $11 billion that DHS, GSA, DOD, and the Corps awarded directly for Katrina-related projects, but the percentages varied among the four agencies (see fig. 1). 
We assessed the agencies individually and found that DHS had awarded the highest dollar amount to small businesses—about $1.6 billion—and that GSA had awarded the highest percentage of its dollars to small businesses—72 percent of about $658 million. Among categories of small businesses, small disadvantaged businesses received 7 percent of the approximately $11 billion that the four agencies awarded to both large and small businesses. Other categories of small businesses, including women- and veteran-owned businesses and businesses located in HUBZones, received from 2 to 4 percent (see fig. 2). Contracting dollars awarded directly to businesses can be counted in more than one category, so the dollars awarded to various types of small businesses are not mutually exclusive. Small businesses in Alabama, Mississippi, and Louisiana received 66 percent of the $1.9 billion in Katrina-related contracting dollars awarded to local businesses by the four agencies we reviewed. Among the three states, the proportion of Katrina-related contracting dollars awarded to small businesses was largest in Mississippi (75 percent), followed by Alabama and Louisiana at 65 percent and 62 percent, respectively, of the dollars awarded (table 1). In general, these small local businesses received contracting dollars directly from the four agencies to provide trailers, administrative and service buildings, restoration activities, and other supportive services. In two respects, key information on small business subcontracting plans was not consistently available in official procurement data systems for the four agencies. First, primarily with respect to DHS and GSA contract actions, the official procurement data system had no information at all on whether the agencies required subcontracting plans for 70 percent or more of their contracting funds. This database should have contained information on whether the agencies required subcontracting plans in these instances.
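The award shares cited above imply the following rough dollar amounts. All inputs are the rounded figures reported in the text, so the results are approximations, not precise award totals.

```python
# Rough dollar amounts implied by the award percentages cited above;
# all inputs are the rounded figures from the report text.

total_katrina_awards = 11.0e9      # four agencies, Katrina-related awards
small_biz = total_katrina_awards * 0.28
print(f"small businesses overall: ${small_biz / 1e9:.1f}B")  # ~$3.1B

gsa_awards = 658e6                 # GSA total; 72 percent went to small businesses
print(f"GSA small business awards: ${gsa_awards * 0.72 / 1e6:.0f}M")  # ~$474M

local_awards = 1.9e9               # awarded to local businesses in AL, MS, LA
print(f"local small businesses: ${local_awards * 0.66 / 1e9:.2f}B")  # ~$1.25B
```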
For DOD and the Corps, their system lacked information on whether they required subcontracting plans for 1 percent of their contracting funds. Table 2 shows the total amounts each agency awarded to large businesses for contracts valued over $500,000 (column 2) and the extent to which no information was available in the official procurement data system on whether the agencies required subcontracting plans for those contracts (column 6). Second, the procurement data systems showed that the agencies had determined that subcontracting plans were not required for contracts representing 12 to 77 percent of the dollars they awarded to large businesses for Katrina-related projects. Agencies are required to document their reasons for these determinations. However, information on the four agencies’ reasons for not requiring these plans, which should have been readily available, was incomplete. Overall, procurement officials from the four agencies were able to explain some of the missing or incomplete information on subcontracting plans by, for example, identifying data entry errors or providing evidence of the agencies’ reasons for not requiring the plans. For example, DHS officials determined that $545 million of the DHS contracting funds the procurement data system showed as not requiring a plan had been miscoded and should have been entered in the procurement system under a different category that listed the contracts as having “no subcontracting possibilities.” In another instance, GSA officials did not require a subcontracting plan for a $26 million contract for ice because they believed that the urgency of the situation required buying and shipping the ice faster than normal procedures would allow. Nonetheless, at the time we issued our report contracting dollars remained for each agency with incomplete subcontracting plan information that agency officials had not been able to explain. 
These amounts ranged from $3.3 million for DOD (excluding the Corps) to $861 million for DHS. In our report, we concluded there was little doubt that Hurricane Katrina posed challenges to the agencies, which had to award contracts quickly while still following government procurement rules, especially those regarding subcontracting plans. Certain choices, such as documenting compliance with these requirements at a later date (something GSA and DOD officials indicated was the case), might have been understandable. Nonetheless, more than a year after the hurricane, we reported that a substantial amount of information about the four agencies’ subcontracting requirements remained incomplete. Conclusively demonstrating compliance with the rules about subcontracting plans is important for reasons beyond just documentation. First, in requiring these plans agencies commit prime contractors to specific goals for providing opportunities to small businesses. Second, the agencies have tools— incentives as well as sanctions—that they can use to ensure that the contractors engage in good faith efforts to meet their small business subcontracting goals. In doing so, the agencies ensure compliance with federal procurement regulations and help guarantee that small businesses have all of the practical opportunities to participate in federal contracts that they are supposed to have. Because so much key information about subcontracting plans was incomplete in federal procurement data systems and, at the conclusion of our review, remained unresolved, we cannot tell the extent to which the agencies are complying with the regulations. 
Furthermore, the lack of transparency surrounding much of the agencies’ subcontracting data—missing information on plans when contracts appear to meet the criteria for having them—may lead to unwarranted perceptions about how the federal procurement system is working, particularly in terms of the government’s stated preference for contracting with small businesses. To ensure compliance with federal contracting regulations and more transparently disclose the availability of subcontracting opportunities for small businesses, we recommended that the Secretaries of Homeland Security and Defense and the Administrator of General Services issue guidance reinforcing, among other things, the necessity for documenting in publicly available sources the agencies’ contracting decisions, particularly in instances when the agencies decided not to require subcontracting plans. Moreover, we recommended that the agencies consider asking their respective Inspectors General to conduct a review to ensure that this guidance and related requirements were being followed. The agencies generally agreed with our recommendations, and GSA has already implemented them. Specifically, in March 2007, GSA issued guidance to its contracting officers reminding them of the importance both of the subcontracting plan requirements and of documenting key decisions affecting acquisitions, including any decisions impacting subcontracting plan requirements. In addition, GSA will include a review of compliance with subcontracting plan requirements in its annual internal procurement management reviews. DOD and DHS officials have stated that they are working on implementing these recommendations. For example, Corps officials indicated they are developing a new training module on the requirements regarding subcontracting plans and plan to deliver it to their contracting officers. 
SBA has governmentwide responsibilities for advocating that federal agencies use small businesses as prime contractors, and that prime contractors give small businesses opportunities to participate as subcontractors in federal contracts awarded to large businesses. To meet its responsibilities, SBA negotiates annual procurement goals with federal executive agencies to achieve the 23 percent governmentwide goal for contract dollars awarded directly by federal agencies. In addition, SBA is responsible for assigning Procurement Center Representatives (PCRs) to major contracting offices to implement small business policies and programs. Responsibilities of PCRs include reviewing proposed acquisitions and recommending various types of small business sources; recommending contracting methods to increase small business prime contracting opportunities; conducting reviews of the contracting office to ensure compliance with small business policies; and working to ensure that small business participation is maximized through subcontracting opportunities. Each federal agency that has procurement authority is required to have an OSDBU. The OSDBU is responsible for helping to oversee the agency’s functions and duties related to the awarding of contracts and subcontracts to small and disadvantaged businesses. For example, the office must report annually on the extent to which small businesses are receiving their fair share of federal procurements, including contract opportunities under programs administered under the Small Business Act. The Small Business Act requires that OSDBU directors be responsible to and report only to agency heads or their deputy. By providing immediate access to top decision-makers, Congress intended to enhance the directors’ ability to advocate effectively for small and disadvantaged businesses. However, in 2003 we reported that 11 of the 24 federal agencies we reviewed were not in compliance with this provision. 
As of our most recent follow-up work, nine of the agencies reviewed were out of compliance (the Departments of Agriculture, Commerce, Education, Health and Human Services, Justice, State, the Interior, and the Treasury; and the Social Security Administration). The Environmental Protection Agency has complied, and the Federal Emergency Management Agency has been subsumed into the Department of Homeland Security, which has an OSDBU with a director reporting to the highest agency levels. Most of the agencies that provided comments on this work disagreed with our conclusion that the reporting relationships did not comply with this provision of the Small Business Act. However, none of the legal arguments that the agencies raised caused us to revise our conclusions or recommendations. For example, the Departments of Agriculture and Treasury had delegated OSDBU responsibilities to lower level officials and argued in their comments to us that because the Small Business Act does not explicitly prohibit such a delegation, their reporting relationships complied with this provision. However, we noted that the lack of an express prohibition on such a delegation does not necessarily mean that it is thereby permitted and cited case history supporting our belief that the delegation of authority may be withheld by implication, which we believe this section of the Small Business Act does. Because the OSDBU directors at agencies that do not comply with this provision of the Act do not have a direct reporting relationship with their agencies’ head or deputy, the reporting relationships potentially limit their role as effective advocates for small and disadvantaged businesses. At your request, we have ongoing work evaluating the efforts of SBA and, to some extent, OSDBUs within federal agencies, to advocate on behalf of small disadvantaged businesses and those in SBA’s 8(a) business development program. 
As you are aware, both SBA and agencies’ OSDBUs play important roles in advocating federal contracting opportunities for small disadvantaged businesses and 8(a) firms. SBA certifies the firms’ eligibility for one or both designations and, as I noted earlier, has a governmentwide advocacy role for all types of small businesses. OSDBUs, in turn, advocate for contracting opportunities within each agency by, for example, reviewing proposed contracts and making recommendations to contracting officials about those they believe could be awarded to a small business, including disadvantaged businesses. The Small Business Act authorizes SBA’s 8(a) Business Development Program as one of the federal government’s vehicles to help small disadvantaged businesses compete in and access the federal procurement market. To be eligible for the program, a firm must, among other things, meet SBA’s applicable size standards for small businesses and be owned and controlled by one or more socially and economically disadvantaged individuals who are U.S. citizens who demonstrate the potential for success. Firms receiving 8(a) certification are eligible for contracts that federal agencies set aside for them. To qualify for small disadvantaged business (SDB) certification, a firm must be owned or controlled by one or more socially and economically disadvantaged individuals or a designated community development organization. Section 8(a) firms automatically qualify as SDBs, but other firms may apply for SDB-only certification. Mr. Chairman, you recently wrote to us expressing concern about whether SBA was taking an appropriate, proactive approach to advocate that small disadvantaged businesses—those in SBA’s 8(a) and SDB programs—have access to federal government contracts. As you know, procurement decisions—who gets each federal contract—ultimately rest with the agencies’ contracting offices, not with their OSDBUs and not with SBA. Neither SBA nor the OSDBUs can force contracting officials to give a contract to a small business. 
However, as language in the Small Business Act suggests, they do have an important role to play in advocating that small businesses have the “maximum practicable opportunity” to participate. Consequently, our evaluation will focus on the advocacy role that SBA and OSDBUs play regarding these opportunities for small businesses. Specifically, it will include assessment of the actions SBA takes to encourage that prime contracting goals for small disadvantaged businesses are met; the extent to which such goals have been met; whether federal agencies are having difficulty awarding contracts to 8(a) firms; and SBA’s efforts to advocate that small disadvantaged businesses have the maximum practicable opportunity to participate as subcontractors for prime federal contracts. In our evaluation, we also plan to assess actions by selected agency OSDBUs in serving as advocates for 8(a) firms. Our evaluations of contracting in the aftermath of Hurricane Katrina and agency OSDBUs provide useful perspectives as we move forward in our examination of the important advocacy roles undertaken by SBA and the OSDBUs. When we complete the design phase of this work, we will reach agreement with you on our reporting objectives and the anticipated issuance date. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or shearw@gao.gov. Individuals making key contributions to this testimony included Bill MacBlane, Assistant Director; Emily Chalmers; Nancy Eibeck; Julia Kennon; Tarek Mahmassani; Lisa Moore; Paul Thompson; Myra Watts-Butler; and Bill Woods. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government's long-standing policy has been to use its buying power to maximize procurement opportunities for various types of small businesses. GAO initiated work and completed a report in March 2007 under the Comptroller General's authority describing the extent to which small businesses participated in contracting opportunities related to Hurricane Katrina. This testimony discusses (1) results from the March 2007 GAO report, including the amounts that small and local businesses received directly from federal agencies from contracts related to Hurricane Katrina and the lack of required information in official procurement data systems on subcontracting plans, (2) information from two previous GAO reports regarding the small business advocacy responsibilities of the Small Business Administration (SBA) and federal agencies that award contracts, and (3) GAO work on SBA's efforts to advocate for small disadvantaged businesses, and similar efforts by entities within selected agencies. In conducting the studies discussed in this testimony, GAO analyzed agency contract data, reviewed federal acquisition regulations, and interviewed agency procurement officials; GAO also sent a questionnaire to agency officials regarding Office of Small and Disadvantaged Business Utilization (OSDBU) reporting relationships; reviewed organizational charts and other pertinent information; analyzed relevant laws, legislative history, and court cases; and updated information on agency actions on our recommendations. Small businesses received 28 percent of the $11 billion in contracts that the Department of Homeland Security (DHS), the General Services Administration (GSA), the Department of Defense (DOD), and the Army Corps of Engineers (Corps) awarded directly for Katrina-related projects. 
Information on whether DHS and GSA required subcontracting plans was generally not available in the federal government's official procurement database for 70 percent or more of the contracting dollars each agency awarded for activities related to Hurricane Katrina. This database should have contained information on whether or not the agencies required subcontracting plans in these instances. The lack of transparency surrounding much of the agencies' subcontracting data may lead to unwarranted perceptions about how the federal procurement system is working, particularly in terms of the government's stated preference for contracting with small businesses. GAO recommended in its March 2007 report that DHS, GSA, and DOD take steps designed to ensure compliance with federal contracting regulations and more transparently disclose the extent to which subcontracting opportunities are available to small businesses. These agencies generally agreed with GAO's recommendations. GSA has implemented them, while DOD and DHS indicate they are in the process of doing so. SBA has governmentwide responsibilities for advocating that federal agencies use small businesses as prime contractors for federal contracts and set goals for and encourage the use of small businesses as subcontractors to large businesses receiving federal contracts. Similarly, within each federal agency there is an OSDBU that plays an advocacy role by overseeing the agency's duties related to contracts and subcontracts with small and disadvantaged businesses. The Small Business Act requires that the OSDBU director be responsible to and report only to agency heads or their deputies. In 2003, GAO reported that 11 of 24 agencies reviewed did not comply with this provision. While most of the agencies disagreed with GAO's conclusion, none of the legal arguments that they raised changed GAO's recommendations. 
Because the OSDBU directors at these agencies do not have a direct reporting relationship with their agencies' heads or deputies, the reporting relationships potentially limit their role as effective advocates for small and disadvantaged businesses. GAO is presently evaluating SBA's and agency OSDBUs' advocacy efforts. This evaluation includes an assessment of the actions SBA takes to advocate that small disadvantaged businesses receive opportunities to participate as subcontractors under federal prime contracts and encourage that prime contracting goals for these businesses are met. Also, the evaluation addresses selected OSDBUs' actions to advocate for certain small business firms.
The Medicaid drug rebate program provides savings to state Medicaid programs through rebates for outpatient prescription drugs that are based on two prices per drug that manufacturers report to the Centers for Medicare & Medicaid Services (CMS): best price and average manufacturer price (AMP). These manufacturer-reported prices are based on the prices that manufacturers receive for their drugs in the private market and are required to reflect certain financial concessions such as discounts. Pharmaceutical manufacturers sell their products directly to a variety of purchasers, including wholesalers, retailers such as chain pharmacies, and health care providers such as hospitals that dispense drugs directly to patients. The prices that manufacturers charge vary across purchasers. The private market also includes pharmacy benefit managers (PBMs), which manage prescription drug benefits for third-party payers such as employer-sponsored health plans and other health insurers. PBMs may negotiate payments from manufacturers to help reduce third-party payers’ costs for prescription drugs; those payments may be based on the volume of drugs purchased by the payers’ enrollees. PBMs also may operate mail-order pharmacies, purchasing drugs from manufacturers and delivering them to their clients’ enrollees. The amount a manufacturer actually realizes for a drug is not always the same as the price that is paid to the manufacturer at the time of sale. Manufacturers may offer purchasers rebates or discounts that may be realized after the initial sale, such as those based on the volume of drugs the purchasers buy during a specified period or the timeliness of their payment. In some cases, purchasers negotiate a price with the manufacturer that is below what a wholesaler pays the manufacturer for a given drug. In such a circumstance, a wholesaler may sell the drug to the purchaser at the lower negotiated price and then request from the manufacturer a “chargeback”—the difference between the price the wholesaler paid for the drug and the purchaser’s negotiated price. 
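The chargeback arithmetic described above can be illustrated with a short sketch. The function name and the dollar figures below are hypothetical, chosen only to show the mechanism; they are not drawn from any actual contract.

```python
def chargeback(wholesaler_price: float, negotiated_price: float) -> float:
    """Amount a wholesaler requests from the manufacturer after selling a
    drug at a purchaser's lower negotiated price: the difference between
    what the wholesaler paid the manufacturer and that negotiated price.
    Illustrative sketch of the mechanism described in the text."""
    return max(wholesaler_price - negotiated_price, 0.0)

# Hypothetical example: the wholesaler paid the manufacturer $10.00 per
# unit, but a purchaser negotiated $8.50 directly with the manufacturer.
# The wholesaler sells at $8.50 and recovers the $1.50 difference.
print(chargeback(10.00, 8.50))  # -> 1.5
```

The chargeback thus lets the wholesaler honor the purchaser's negotiated price without absorbing the difference itself, which is why, as the text notes, the price a manufacturer actually realizes can be lower than the price paid at the time of sale.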
The statute governing the Medicaid drug rebate program and the standard rebate agreement that CMS signs with each manufacturer define best price and AMP and specify how those prices are to be used to determine the rebates due to states. In the absence of program regulations, CMS has issued program memoranda in order to provide further guidance to manufacturers regarding how to determine best price and AMP, some of which were in response to questions that arose regarding the methods that manufacturers were using to determine those prices. The rebate agreement states that in the absence of specific guidance on the determination of best price and AMP, manufacturers may make “reasonable assumptions” as long as those assumptions are consistent with the “intent” of the law, regulations, and the rebate agreement. As a result, price determination methods may vary across manufacturers, particularly with respect to which transactions they consider when determining best price and AMP. Under the rebate statute, best price is the lowest price available from the manufacturer to any wholesaler, retailer, provider, health maintenance organization (HMO), or nonprofit or government entity, with some exceptions. Best price is required to be reduced to account for cash discounts, free goods that are contingent on purchase requirements, volume discounts and rebates (other than rebates under this program), as well as—according to the rebate agreement and a CMS program memorandum—cumulative discounts and any other arrangements that subsequently adjust the price actually realized. Prices charged to certain federal purchasers, eligible state pharmaceutical assistance programs and state-run nursing homes for veterans, and certain health care facilities— including those in underserved areas or serving poorer populations—are not considered when determining best price. 
Prices available under endorsed Medicare discount card programs, as well as those negotiated by Medicare prescription drug plans or certain retiree prescription drug plans, are similarly excluded from best price. Nominal prices—prices that are less than 10 percent of AMP—also are excluded from best price. AMP is defined by statute as the average price paid to a manufacturer for the drug by wholesalers for drugs distributed to the retail pharmacy class of trade. The transactions used to calculate AMP are to reflect cash discounts and other reductions in the actual price paid, as well as any other price adjustments that affect the price actually realized, according to the rebate agreement and a CMS program memorandum. Under the rebate agreement, AMP does not include prices to government purchasers based on the Federal Supply Schedule, prices from direct sales to hospitals or HMOs, or prices to wholesalers when they relabel drugs they purchase under their own label. The relationship between best price and AMP determines the unit rebate amount and, thus, the size of the rebate that states receive for a brand name drug. The basic unit rebate amount is the larger of two values: the difference between best price and AMP, or 15.1 percent of AMP. The closer best price is to AMP, the more likely the rebate for a drug will be based on the minimum amount—15.1 percent of AMP—rather than the difference between the two values. A state’s rebate for a drug is the product of the unit rebate amount and the number of units of the drug paid for by the state’s Medicaid program. In 2000, rebates were based on the minimum amount for about half of the brand name drugs covered under the rebate program; for the remaining drugs, rebates were based on the difference between best price and AMP. Manufacturers pay rebates to states on a quarterly basis. They are required to report best price and AMP for each drug to CMS within 30 days of the end of each calendar quarter. 
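The basic rebate calculation just described can be sketched in a few lines. This is an illustrative reading of the basic unit rebate amount only, using hypothetical prices; the function names are ours.

```python
def basic_unit_rebate(amp: float, best_price: float) -> float:
    """Basic unit rebate amount for a brand name drug: the larger of
    (AMP - best price) or 15.1 percent of AMP, as described above."""
    return max(amp - best_price, 0.151 * amp)

def state_rebate(amp: float, best_price: float, units_paid: int) -> float:
    """A state's rebate for a drug: the unit rebate amount times the
    number of units the state's Medicaid program paid for."""
    return basic_unit_rebate(amp, best_price) * units_paid

# Hypothetical prices. When best price is close to AMP, the 15.1 percent
# minimum governs; when best price is well below AMP, the difference
# between the two values governs instead.
print(basic_unit_rebate(1.00, 0.95))     # minimum applies: 15.1% of AMP
print(basic_unit_rebate(1.00, 0.60))     # difference applies: AMP - best price
print(state_rebate(1.00, 0.60, 10_000))  # rebate on 10,000 units
```

The first case illustrates why, as noted above, rebates for about half of the covered brand name drugs in 2000 were based on the 15.1 percent minimum: whenever best price comes within 15.1 percent of AMP, the floor determines the rebate.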
Once CMS receives this information, the agency uses the rebate formula to calculate the unit rebate amount for the smallest unit of each drug, such as a tablet, capsule, or ounce of liquid. CMS then provides the unit rebate amount to the states. Each state determines its Medicaid utilization for each covered drug—as measured by the total number of the smallest units of each dosage form, strength, and package size the state paid for in the quarter—and reports this information to the manufacturer within 60 days of the end of the quarter. The manufacturer then must compute and pay the rebate amount to each state within 30 days of receiving the utilization information. Manufacturers are required to report price adjustments to CMS when there is a change in the prices they reported for a prior quarter. These adjustments may result from rebates, discounts, or other price changes that occur after the manufacturers submit prices to CMS. Manufacturers also may request that CMS recalculate the unit rebate amounts using revised prices if they determine that their initially reported prices were incorrect due to, for example, improper inclusion or exclusion of certain transactions. In 2003, CMS issued a final rule that, effective January 1, 2004, limits the time for manufacturers to report any price adjustments to 3 years after the quarter for which the original price was reported. The minimal oversight by CMS and the Department of Health and Human Services’ Office of Inspector General (OIG) of manufacturer-reported prices and price determination methods does not ensure that those prices or methods are consistent with program criteria, as specified in the rebate statute, rebate agreement, and CMS program memoranda. CMS conducts limited reviews of prices and only reviews price determination methods when manufacturers request recalculations of prior rebates. OIG has issued four reports on audits of manufacturer-reported prices since the program’s inception in 1991. 
OIG reported that, in the course of its work, its efforts were hampered both by unclear CMS guidance on determining AMP and by a lack of manufacturer documentation. In some instances, OIG found problems with manufacturers’ price determination methods and reported prices. However, CMS has not followed up with manufacturers to make sure that the identified problems with prices and price determination methods have been resolved. As part of the agency’s administration of the Medicaid drug rebate program, CMS reviews drug prices submitted by approximately 550 manufacturers that participate in the program. Each quarter, CMS conducts automated data edit checks on the best prices and AMPs for about 25,000 drugs to identify reporting errors. These checks are intended to allow CMS to make sure all drugs for which manufacturers report prices are in its database of Medicaid-covered drugs and to ensure that those prices are submitted in the correct format. The agency’s automated data checks also are intended to ensure that the correct price is used when there are multiple prices for the same drug. When data checks indicate a potential reporting error, CMS sends an edit report to the manufacturer asking for corrected drug prices. However, CMS does not have a mechanism in place to track whether, in fact, manufacturers submit corrected prices. CMS sometimes identifies other price reporting errors when it calculates the unit rebate amount for a drug, but the agency does not follow up with manufacturers to verify that errors have been corrected. CMS will notify a manufacturer of any missing price data for drugs in its database or any large deviations from previous unit rebate amounts. For example, CMS would send a report to a manufacturer that had a unit rebate amount for a drug that deviated from that of the prior quarter by more than 50 percent. It would be up to that manufacturer to indicate whether or not the underlying reported prices were, in fact, correct. 
If a manufacturer determined that there were problems with the reported price for a drug— such as incorrect unit pricing or typographical errors like misplaced decimals—it would send corrected data to CMS prior to or with future price submissions. In this situation, the manufacturer also would recalculate the unit rebate amount and, once invoiced by the states with total utilization for the drug paid for by Medicaid, would send the rebate payment to those states based on the recalculated unit rebate amount. If a manufacturer did not send revised pricing data to CMS, then the unit rebate amount would remain the same. In 2000, CMS generated approximately 150 reports detailing these 50 percent deviations, according to an agency official. The agency did not track how many unit rebate amounts were changed as a result or any effect on rebates. CMS does not generally review the methods and underlying assumptions that manufacturers use to determine best price and AMP, even though these methods and assumptions can have a substantial effect on rebates. While the rebate agreement requires manufacturers to maintain documentation of the assumptions underlying their price determination methods, CMS does not verify that such documentation is kept and rarely requests it. Furthermore, CMS does not generally check to ensure that manufacturers’ assumptions and price determination methods are consistent with the rebate statute and rebate agreement. CMS reviews the methodologies employed to determine best price and AMP only when manufacturers request recalculations of prior rebates. A manufacturer may request a recalculation of a prior rebate any time it changes the methods it uses to determine best price or AMP. 
CMS requires the manufacturer to submit both its original and its revised methods for determining those prices when requesting a recalculation of prior rebates, so that it can evaluate whether the revised methods are consistent with the rebate statute, rebate agreement, and program memoranda. Six approved recalculations, for which we could obtain data, reduced prior rebates to states by a total of more than $220 million. An additional approved recalculation required the manufacturer to pay states an additional $388,000. OIG has issued four reports on audits of manufacturer-reported prices since the program’s inception in 1991. Three of the four OIG reports documented limitations to OIG’s ability to verify drug prices. OIG reported that its efforts were hampered by unclear CMS guidance on determining AMP, by a lack of manufacturer documentation, or by both. In particular, OIG found that a lack of specificity on how the “retail pharmacy class of trade” was defined limited its efforts to verify AMP. Both the rebate statute and rebate agreement define AMP as the average price paid by wholesalers for drugs distributed to the retail pharmacy class of trade, with some exceptions. OIG officials told us that program memoranda issued by CMS have not provided sufficient guidance on AMP to allow OIG to audit manufacturers’ methods for determining AMP. Despite these limitations, in some instances OIG was able to identify some problems with the accuracy of manufacturers’ reported prices; however, CMS has not followed up with manufacturers to make sure that these problems with prices and price determination methods have been resolved. In its first review of manufacturer-reported prices in 1992, OIG found that it could not verify the AMPs reported by the four manufacturers it reviewed. 
OIG found it could not evaluate the methods these manufacturers used to determine AMP because neither the rebate statute nor CMS had provided sufficiently detailed instructions on methods for calculating AMP. OIG therefore advised CMS that it planned no future AMP data audits until CMS developed a specific written policy on how AMP was to be calculated. CMS disagreed, saying that the rebate statute and rebate agreement had already established a methodology for computing AMP, and stressed that this methodology was clarified, at manufacturer request, on an as-needed basis through conversations with individual manufacturers. In its second review of manufacturer-reported prices, OIG, in 1995, attempted to verify one manufacturer’s recalculation request. While the OIG reported that it could not complete its analysis because of inadequate manufacturer documentation, it was able to identify some manufacturer errors in determining AMP. In its review, OIG found that the manufacturer had miscalculated its revised AMP because it included “free goods” specifically excluded in the rebate agreement, miscalculated cash discounts, and improperly included sales rebates applicable to a period other than the quarter being audited. OIG recommended that CMS have the manufacturer revise its AMP data. Although CMS agreed with OIG’s recommendations, as of October 2004, it had not required any such revision of the audited manufacturer’s AMP determinations. In its third review, conducted in 1997, OIG attempted to review a manufacturer’s recalculation request but again reported that it was unable to complete its evaluation because of a lack of specific guidance on determining AMP and a lack of manufacturer documentation supporting its revised AMP. In the absence of guidance from CMS, OIG defined retail pharmacy class of trade for this audit to include only independent and chain pharmacies that sold drugs directly to the public. 
Therefore, OIG recommended that CMS ask the manufacturer to exclude from the calculation of AMP transactions that OIG determined were to nonretail entities such as mail-order pharmacies, nursing home pharmacies, independent practice associations, and clinics. OIG also found that the manufacturer used a flawed methodology to identify certain sales that it had included in the retail class of trade, and thus AMP. As a result, OIG recommended that CMS ask the manufacturer to exclude those sales from AMP unless the manufacturer could provide additional documentation to support the inclusion of those sales in AMP. Although CMS did not agree with OIG’s definition of retail pharmacy class of trade, CMS concurred with OIG’s recommendation to ask the manufacturer to recalculate AMP. As of October 2004, CMS had not required any revision of this manufacturer’s AMP determinations. In its fourth review of manufacturer-reported prices issued in 2001, OIG investigated how manufacturers were treating repackagers—entities such as HMOs that repackage or relabel drugs under their own names—in their best price determinations. The work followed up on previous work OIG conducted in response to a congressional inquiry in 1999. The rebate statute states that HMO sales are required to be included in best price determinations. CMS’s June 1997 program memorandum stated that sales to other manufacturers that repackage the drugs are to be excluded from best price determinations. However, the rebate statute, rebate agreement, and CMS program memoranda did not address how HMOs should be treated when they act as repackagers. In a letter issued in response to the 1999 congressional request, OIG reported that excluding drug sales to two HMOs that acted as repackagers from best price determinations lowered state rebate amounts by $27.8 million in fiscal year 1998. 
In July 2000, CMS issued an additional program memorandum to manufacturers stating that sales to an HMO should be considered in best price determinations regardless of whether the HMO was a repackager. In 2001, OIG issued its fourth review, reporting that states lost $80.7 million in rebates in fiscal year 1999 due to improperly excluded drug sales to HMO repackagers. In September 2004, a CMS official told us that CMS planned to release a program memorandum instructing manufacturers to revise prior rebates for which they had excluded sales to HMOs from best price. However, CMS does not have a mechanism in place to track that manufacturers have made these rebate adjustments and therefore cannot verify that manufacturers have made or will make these adjustments. OIG officials told us that, despite the program releases issued by CMS, they remain unable to evaluate AMP because of the lack of clear CMS guidance, particularly related to the retail pharmacy class of trade and treatment of PBM transactions. In October 2004, OIG officials told us that they were working with CMS to review four manufacturers’ recalculation requests and as part of this work were evaluating the methods manufacturers have used to determine prices. OIG officials also told us that they may conduct additional audits because of the number of recent manufacturer recalculation requests—18 requests received between September and December of 2003—and the significant financial impact the potential rebate adjustments would have on state Medicaid programs. However, in light of OIG’s remaining concerns about CMS guidance, OIG officials told us that their current audits—and any future audits—likely would be limited to descriptions of how inclusion and exclusion of certain sales in price determinations would affect rebates. We found considerable variation in the methods that manufacturers used to determine best price and AMP. 
Manufacturers are allowed to make reasonable assumptions when determining best price and AMP, as long as those assumptions are consistent with the law and the rebate agreement. The assumptions often pertain to the transactions, including discounts or other price reductions, that are considered in determining best price and AMP. We found that in some cases manufacturers’ assumptions could have led to lower rebates and in other cases to higher rebates. Manufacturers can later revise their assumptions and request recalculations of previously paid rebates, which can result in states repaying any excess rebates. We found that manufacturers made varying assumptions about which sales to include and exclude from their calculations of AMP. For example, some included sales to a broad range of facilities in AMP, excluding only transactions involving facilities explicitly excluded by the law, rebate agreement, or CMS program memoranda. In contrast, others included sales to a narrower range of purchasers—only those purchasers explicitly included in AMP by the law, rebate agreement, or CMS program memoranda. Manufacturers also differed in how they treated certain types of health care providers that are not explicitly addressed by the law, rebate agreement, or CMS program memoranda. For example, some manufacturers included sales to physician groups in AMP, while others did not. These assumptions can affect the reported prices and, in turn, the size of rebates paid to states. Some manufacturers did not account for certain “administrative fees” paid to PBMs when determining best price or AMP. The statute and rebate agreement require that best price incorporate volume-based discounts. Further, according to the rebate agreement and a CMS program memorandum, both best price and AMP are to account for cumulative discounts or other arrangements that subsequently adjust the prices actually realized. 
While CMS has acknowledged that not all PBM arrangements will affect best price and AMP, the agency has advised manufacturers that administrative fees, incentives, promotional fees and chargebacks, as well as all discounts and rebates provided to purchasers, should be considered in determinations of best price and AMP when they are associated with sales that are to be considered in those prices. When a PBM acts as a mail-order pharmacy and takes possession of drugs, it is a purchaser. We found that while the basis for the administrative fees paid to PBMs varied among the manufacturers we reviewed, the fees often were based on a utilization measure, such as the sales volume of drugs used by the enrollees of the PBM's clients. To the extent that PBMs' purchases for their mail-order pharmacies contributed to the utilization measures used to determine their administrative fees, the fees for the mail-order portion of their business resemble a volume-based discount that adjusts the price actually realized. Some manufacturers told us that they accounted for the portion of administrative fees paid to PBMs associated with the PBMs' mail-order pharmacies in their determinations of best price or AMP. In contrast, others said they did not incorporate this portion of any administrative fees paid to PBMs in either best price or AMP. Some of those manufacturers characterized these fees as payments for services rather than adjustments to prices. Excluding administrative fees from the determination of best price or AMP could have reduced rebates below what they would have been had the manufacturers included them when determining those prices.
For one manufacturer, for example, if administrative fees paid to PBMs associated with their mail-order pharmacy purchases had been included in the manufacturer's determination of best price and AMP, rebates for 11 drugs would have been up to 16 percent higher in the third quarter of 2000 and up to 12 percent higher in the fourth quarter of 2000. The ultimate impact on rebates to states depends on how many manufacturers excluded these fees from reported prices, the volume of those manufacturers' sales to PBM mail-order pharmacies, as well as the prices and utilization of the relevant drugs. Manufacturers also differed in how they accounted for certain transactions involving prompt payment discounts. Both the rebate agreement and an applicable CMS program memorandum specify that best price and AMP are to reflect cumulative discounts or other arrangements that subsequently adjust the prices actually realized. In examining manufacturers' practices, we found that they generally provided a prompt payment discount of 2 percent of the purchase price to wholesalers and others that purchased drugs from them directly, when they paid within a specified period. In most cases, when the manufacturers we reviewed sold a drug directly to a purchaser, they reduced the purchaser's price by any applicable prompt payment discount when determining best price and AMP. When the transaction also involved a chargeback arrangement, manufacturers' methods differed. In a chargeback arrangement, a drug passes from the manufacturer through a wholesaler to a purchaser that has negotiated a lower price with the manufacturer; the manufacturer then credits the wholesaler for the difference between the wholesaler's purchase price and the purchaser's contract price. The chargeback amount and the prompt payment discount together thus affect the amount the manufacturer actually realizes for the drug. (See fig. 1.) Some manufacturers calculated the net price as their price to the wholesaler, reduced by both the prompt payment discount and the chargeback amount for those drugs, when determining best price and AMP.
Other manufacturers, however, considered any prompt payment discount given to the wholesaler separately from any chargeback amount and thus did not incorporate the effect of both price reductions when determining best price and AMP. Some of these manufacturers indicated that they did not combine these price reductions because the price reductions occurred in two unrelated transactions to two separate purchasers. In some cases, not accounting for the effect of both price reductions—the prompt payment discount and the chargeback—in the determination of best price and AMP reduced rebates below what they otherwise would have been. For example, rebates for three drugs in our sample would have been 3 to 5 percent higher had the manufacturers considered the effects of both price reductions when determining the best prices and AMPs; for seven other drugs, rebates would not have changed. The ultimate impact on rebates to states depends on how many manufacturers adopted this approach as well as the sales prices and utilization of the relevant drugs. When determining best price and AMP, some manufacturers adopted methods that could have raised rebates. For example, although the rebate agreement excludes from AMP sales through the Federal Supply Schedule and direct sales to hospitals and HMOs, which often involve relatively low prices, one manufacturer included these sales in its calculations. However, the manufacturer used list prices in the calculation of AMP instead of the actual prices associated with the sales that were to be excluded from the calculation. This approach, which diverged from the rebate agreement and applicable CMS program memoranda, could have resulted in artificially high AMPs, which in turn could have raised rebates. In addition, some manufacturers included in the determination of best price the contract prices they had negotiated with purchasers, even if they made no sales at those prices during the reporting quarter. 
This practice resulted in a lower best price in some cases, which may have increased rebates to states. One manufacturer, however, indicated that it later might revise this practice and request recalculations to recoup any excess rebates it had already paid. Manufacturers have up to 3 years to make such revisions. The rebates that manufacturers pay to states are based on a range of prices and financial concessions that manufacturers make available to entities that purchase their drugs, but may not reflect certain financial concessions manufacturers offer to other entities in today’s complex market. In particular, the rebate program does not clearly address certain concessions that are negotiated by PBMs on behalf of third-party payers. The rebate program did not initially address these types of concessions, which are relatively new to the market. CMS’s subsequent guidance to manufacturers has not clearly stated how manufacturers should treat these concessions in their determinations of best price and AMP. Certain manufacturer financial concessions that are negotiated by PBMs on behalf of their third-party payer clients, such as employer-sponsored health plans and other health insurers, are not clearly reflected in best price or AMP. PBMs, in one of the roles they play in the market, may negotiate payments from manufacturers to help reduce their third-party payer clients’ costs for prescription drugs. (In these circumstances, the third-party payer does not purchase drugs directly from the manufacturer but instead covers a portion of the cost when its enrollees purchase drugs from pharmacies.) The basis of these PBM-negotiated manufacturer payments varies. For example, manufacturers may make a payment for each unit of a drug that is purchased by third-party payer enrollees or may vary payment depending on a PBM’s ability to increase the utilization, or expand the market share, of a drug. 
The payment may be related to a specific drug or a range of drugs offered by the manufacturer. The amount of financial gain PBMs receive from these negotiated payments also varies. A PBM may pass on part or all of a manufacturer’s payment to a client, depending on the terms of their contractual relationship. When a PBM passes on the entire manufacturer payment, the manufacturer may pay the PBM a fee to cover the costs of administering the program under which the payments are made. A PBM also may negotiate a manufacturer payment for each unit of the drug purchased that includes a fee, and the PBM may retain a part of that payment as compensation. Some PBM clients may receive smaller discounts on drug prices at the pharmacy in exchange for receiving all or a larger share of the manufacturer payments, while other clients may receive greater discounts on drug prices in exchange for the PBM retaining a larger share of the manufacturer payment. Manufacturers may not be parties to the contracts that PBMs have with their clients and so may not know the financial arrangements between the PBMs and their clients. These types of financial arrangements between manufacturers and PBMs are a relatively new development in the market. When the program began in 1991, PBMs played a smaller role in the market, managing fewer covered lives and providing a more limited range of services—such as claims processing—for their clients. Since then, PBMs’ role has grown substantially, contributing to a market that is much more complex, particularly with respect to the types of financial arrangements involving manufacturers. PBMs now commonly negotiate with manufacturers for payments on behalf of their clients, in addition to providing other services. 
Although complete data on the prevalence and magnitude of PBM-negotiated manufacturer payments are not readily available, PBM officials and industry experts have said that these and other manufacturer payments to PBMs are a large portion of PBMs’ earnings; further, recent public financial information suggests that manufacturer payments to PBMs as a whole are substantial and key to PBMs’ profitability. CMS has acknowledged the complexity that arrangements between manufacturers and PBMs introduce into the rebate program but has not clearly addressed how these arrangements should be reflected in manufacturer-reported prices. In 1997, CMS issued program memoranda that noted new types of arrangements involving manufacturer payments to PBMs and attempted to clarify whether those arrangements should be reflected in best price and AMP. However, in a program memorandum issued shortly thereafter, CMS stated that there had been confusion concerning the intent of the previous program memoranda and that the agency had “intended no change” to program requirements. At the time, CMS said that staff were reexamining the issue and planned to shortly clarify the agency’s position. As of January 2005, CMS had not issued such clarifying guidance. When we asked how PBM-negotiated manufacturer payments should be reflected in best price and AMP when PBMs have negotiated on behalf of third parties, CMS officials with responsibility for issuing program memoranda advised us that they could comment only on specific situations. They stated that financial arrangements among entities in the market are complex and always changing; in their view, the market is too complicated for them to issue general policy guidance that could cover all possible cases. Rather, these officials told us that they make determinations about PBM payments on a case-by-case basis, but only when manufacturers contact them regarding this issue. 
Within the current structure of the rebate formula, additional guidance on how to account for manufacturer payments to PBMs could affect the rebates paid to states, although whether rebates would increase or decrease as a result, and by how much, is uncertain. Because of the structure of the rebate formula, any change in the determination of best price and AMP could raise or lower rebates for any given drug, depending on how the change affects the relationship between those prices. Incorporating PBM-negotiated manufacturer payments into the rebate determination could decrease the unit rebate amount for a drug if, for example, it reduced AMP but had no effect on best price. Alternatively, if such a change increased the difference between AMP and best price for a drug, the unit rebate amount could increase. The importance of Medicaid rebates to states has grown as Medicaid spending on prescription drugs has risen. To determine the level of rebates that manufacturers pay to states, the rebate program relies on manufacturer-reported prices, which are based on the prices and financial concessions available in the private pharmaceutical market. CMS, however, has not provided clear program guidance for manufacturers to follow when determining those prices. This has hampered OIG’s efforts to audit manufacturers’ methods and reported prices. Furthermore, as the private market has continued to evolve, the rebate program has not adequately addressed how more recent financial arrangements, such as those between manufacturers and PBMs, should be accounted for in manufacturers’ reported prices. In addition, oversight by CMS and OIG has been inadequate to ensure that manufacturer-reported prices and methods are consistent with the law, rebate agreement, and CMS program memoranda. Because rebates rely on manufacturer-reported prices, adequate program oversight is particularly important to ensure that states receive the rebates to which they are entitled. 
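The price-determination choices and rebate-formula dynamics described above can be sketched with illustrative arithmetic. The dollar amounts below are hypothetical, and the 15.1 percent minimum rebate for brand-name drugs is our assumption of the statutory floor then in effect, not a figure taken from this report:

```python
# Illustrative sketch only: all dollar amounts are hypothetical, and the
# 15.1 percent minimum rebate for brand-name drugs is an assumption about
# the statutory floor, not a figure from this report.

LIST_PRICE = 100.00         # manufacturer's price to the wholesaler
PROMPT_PAY_DISCOUNT = 0.02  # 2 percent discount for paying within the period
CHARGEBACK = 20.00          # credit to the wholesaler for reselling at the
                            # purchaser's lower contract price

def net_realized_price(combine_reductions: bool) -> float:
    """Price the manufacturer actually realizes on the sale.

    combine_reductions=True mirrors manufacturers that netted out both the
    prompt payment discount and the chargeback; False mirrors those that
    treated the two price reductions as unrelated transactions.
    """
    contract_price = LIST_PRICE - CHARGEBACK  # 80.00
    if combine_reductions:
        return contract_price - LIST_PRICE * PROMPT_PAY_DISCOUNT  # 78.00
    return contract_price

def unit_rebate(amp: float, best_price: float, floor_pct: float = 0.151) -> float:
    """Brand-drug unit rebate: the greater of (AMP - best price)
    or a minimum percentage of AMP."""
    return max(amp - best_price, floor_pct * amp)

AMP = 95.00  # hypothetical average manufacturer price
# Treating the reductions separately yields a higher best price and,
# in this example, a smaller per-unit rebate:
rebate_combined = unit_rebate(AMP, net_realized_price(True))   # 95 - 78 = 17.00
rebate_separate = unit_rebate(AMP, net_realized_price(False))  # 95 - 80 = 15.00
```

The same functions show the formula's two-sided sensitivity: a change that lowers AMP while leaving best price unchanged shrinks the unit rebate, while a change that widens the gap between AMP and best price enlarges it.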
To help ensure that the Medicaid drug rebate program is achieving its objective of controlling states' Medicaid drug spending, we recommend that the Administrator of CMS take the following two actions:

Issue clear guidance on manufacturer price determination methods and the definitions of best price and AMP, and update such guidance as additional issues arise.

Implement, in consultation with OIG, systematic oversight of the price determination methods employed by pharmaceutical manufacturers and a plan to ensure the accuracy of manufacturer-reported prices and rebates paid to states.

We received written comments on a draft of this report from HHS, which incorporated comments from CMS and OIG. (See app. I.) HHS concurred, in part, with our recommendation that CMS issue clear guidance on price determination methods, noting agreement that such guidance would help manufacturers, particularly with regard to accounting for sales to PBMs. HHS stated that those issues would be examined and an assessment made about where more guidance was needed. HHS noted that effort had been devoted to providing guidance and that CMS would examine the resources allocated to its review capabilities. In responding to our discussion of the changing pharmaceutical market, however, the comments noted that guidance could not address all current and potential arrangements in the pharmaceutical market and therefore case-by-case guidance would continue to be necessary to address specific situations. In responding to our discussion of manufacturers' price determination methods, the comments stated that a response to our conclusion that some manufacturers' practices could lower or raise rebates was not possible because we did not provide sufficient information on manufacturers' practices. We believe that accurate and timely guidance could reduce the need for case-by-case determinations.
Although we cannot present the detailed assumptions that various manufacturers made in interpreting and implementing program guidance, because that information is proprietary, we did provide examples of the different price determination methods and assumptions that can affect best price and AMP and, therefore, rebates. HHS concurred, in part, with our recommendation that CMS should implement systematic oversight of manufacturers' price determination methods and a plan to ensure the accuracy of reported prices and rebates. While the comments noted that requests from manufacturers to revise their price determination methods were reviewed for adherence to current policies, the comments disagreed with our conclusion that current oversight does not ensure that prices or methods are consistent with program criteria. The comments stated that CMS subjects manufacturer-supplied data to systematic edits, that CMS has increased its referrals to OIG to examine recalculation requests, and that a regulation limiting the time frames for recalculations and recordkeeping has been published. The comments also referred to previous OIG reviews of manufacturer practices and the plans to continue such reviews. In our draft, we noted the data edits that CMS conducts, which help ensure the completeness of the data. The systematic edits, however, do not ensure the accuracy of the data. Specifically, while the edits address, for example, whether price data are submitted in the correct format, they do not ensure that prices are consistent with program criteria or that corrected prices are submitted when necessary. We also noted OIG's ongoing work on the Medicaid drug rebate program. However, CMS's referrals to OIG are made only when a manufacturer requests that its rebates be recalculated, so there is no ongoing review of the methods used by manufacturers.
Finally, we also noted in the draft the recently issued regulation, which did not address all aspects of the program, such as determinations of best price and AMP. The actions cited in the HHS comments do not constitute adequate oversight of a program that relies on manufacturer-submitted data to determine substantial rebates owed to state Medicaid programs. Representatives from all the manufacturers that supplied us data were invited to review and provide oral comments on portions of the draft report, including the background and our discussion of manufacturers’ price determination methods. Representatives from five of the manufacturers indicated that administrative fees that manufacturers pay to PBMs do not necessarily need to be considered in the determination of best price and AMP. Some argued that the fee is a payment for services rendered and not a discount or rebate that would affect prices. Some manufacturers also noted that we did not address payments to PBMs when they are not acting as mail-order pharmacies. Others noted that CMS’s guidance with respect to payments to PBMs is particularly unclear and that CMS’s guidance has not addressed recent changes in the pharmaceutical market. Six of the manufacturers took issue with our discussion of the treatment of prompt payment discounts involving a chargeback arrangement. Several stated that CMS has not indicated that the prompt payment discount must be accounted for in the manner we described. Some manufacturers noted that they treat the situation we highlighted as two unrelated transactions to two separate purchasers, so they do not need to combine both price reductions when determining best price and AMP. Finally, six commented on the lack of clear guidance on various aspects of determining best price and AMP. Some manufacturers stated that program memoranda, which are a common CMS method of issuing guidance for the rebate program, do not have to be followed because they are not regulations. 
In response to manufacturers' comments, we clarified our discussion of administrative fees paid to PBMs when they act as mail-order pharmacies. We state that administrative fees may resemble volume-based discounts when PBMs take possession of drugs. The manufacturers did not have the opportunity to review our discussion of the changing pharmaceutical market, which addresses the broader role of PBMs in negotiating for third-party payers. With respect to our discussion of prompt payment discounts involving a chargeback arrangement, we observed in the draft that manufacturers differed in how they accounted for price reductions when determining best price and AMP, and we have clarified and expanded that discussion based on the comments we received. Both HHS and the manufacturers also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. We will then send copies of this report to the Secretary of Health and Human Services, the Administrator of CMS, the Acting Inspector General of Health and Human Services, and other interested parties. We will also provide copies to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call Marjorie Kanof at (202) 512-7114. Major contributors to this report are listed in appendix II. Major contributors to this report were Robin Burke, Martha Kelly, Ann Tynan, Helen Desaulniers, Julian Klazkin, and Jennie Apter.
To help control Medicaid spending on drugs, states receive rebates from pharmaceutical manufacturers through the Medicaid drug rebate program. Rebates are based on two prices--best price and average manufacturer price (AMP)--reported by manufacturers. Both reflect manufacturers' prices to various entities, accounting for certain financial concessions like discounts. Concerns have been raised about rising Medicaid drug spending. GAO studied (1) federal oversight of manufacturer-reported best prices and AMPs and the methods used to determine them, (2) how manufacturers' determinations of those prices could have affected rebates, and (3) how the rebate program reflects financial concessions in the private market. Current rebate program oversight does not ensure that manufacturer-reported prices or price determination methods are consistent with program criteria specified in the rebate statute, rebate agreement, and Centers for Medicare & Medicaid Services (CMS) program memoranda. In administering the program, CMS conducts only limited checks for reporting errors in manufacturer-reported drug prices. In addition, CMS only reviews the price determination methods when manufacturers request recalculations of prior rebates. In four reports issued from 1992 to 2001, the Department of Health and Human Services' (HHS) Office of Inspector General (OIG) identified several factors that limited its ability to verify the accuracy of drug prices reported by manufacturers, including a lack of clear guidance on how AMP should be calculated. In some cases, OIG found problems with manufacturers' price determination methods and reported prices. However, CMS has not followed up with manufacturers to make sure that the identified problems with prices and methods have been resolved. There was considerable variation in the methods that manufacturers used to determine best price and AMP, and some methods could have reduced the rebates state Medicaid programs received. 
Manufacturers are allowed to make assumptions when determining best price and AMP, as long as they are consistent with the law and the rebate agreement. The assumptions often involve the treatment of discounts and other price reductions in best price and AMP. Some manufacturers combined price reductions associated with particular sales in their price determination methods, while others accounted for the reductions separately. Separate treatment of the reductions resulted in rebates to states that in some cases were lower than they would have been had the reductions been considered together. Some manufacturers made assumptions that diverged from the rebate agreement and CMS program memoranda that could have raised rebates. States could have to repay any excess rebates if manufacturers revise their assumptions and request recalculations of prior rebates. The rebates that manufacturers pay to states are based on prices and financial concessions manufacturers make available to entities that purchase their drugs but may not reflect certain financial concessions they offer to other entities. In particular, the rebate program does not clearly address certain manufacturer payments that are negotiated by pharmacy benefit managers (PBM) on behalf of third-party payers such as employer-sponsored health plans and other health insurers. These types of financial arrangements are relatively new to the market. CMS's guidance to manufacturers has not clearly stated how manufacturers should treat these payments in their determinations of best price and AMP. Within the current structure of the rebate formula, additional guidance on how to account for these payments to PBMs could affect the rebates paid to states, although whether rebates would increase or decrease as a result, and by how much, is uncertain.
The complexity of the environment in which CMS operates the Medicare program cannot be overstated. It is an agency within the Department of Health and Human Services (HHS) but has responsibilities over expenditures that are larger than those of most other federal departments. Medicare alone ranks second only to Social Security in federal expenditures for a single program. Medicare is expected to spend nearly $240 billion in fiscal year 2001; covers about 40 million beneficiaries; enrolls and pays claims from nearly 1 million providers and health plans; and has contractors that annually process about 900 million claims. Among numerous and wide-ranging activities associated with the Medicare program, CMS must monitor the roughly 50 claims administration contractors that pay claims and establish local medical coverage policies; set tens of thousands of payment rates for Medicare-covered services from different providers, including physicians, hospitals, outpatient and nursing facilities, home health agencies, and medical equipment suppliers; and administer consumer information and beneficiary protection activities for the traditional program component and the managed care program component (Medicare+Choice plans). The providers billing Medicare—hospitals, general and specialty physicians, and other practitioners—along with program beneficiaries and taxpayers, create a vast universe of stakeholders whose interests vary widely. Not surprisingly, then, the responsibility to be fiscally prudent has made the agency that runs Medicare a lightning rod for those discontented with program policies. For example, the agency's administrative pricing of services has often been contentious, even though a viable alternative is not easily identifiable. It is impractical for the agency to rely on competition to determine prices.
The reason is that when Medicare is the dominant payer for services or products, the agency cannot use market prices to determine appropriate payment amounts, because Medicare's share of payments distorts the market. Moreover, Medicare is generally precluded from excluding some providers in order to do business with others that offer better prices. In addition, Medicare's public sector status means that changing program regulations requires obtaining public input. The solicitation of public comments is necessary to ensure transparency in decision-making. However, the trade-off to seeking and responding to public interests is that it is generally a time-consuming process and can thwart efficient program management. For example, in the late 1990s, HCFA averaged nearly 2 years between its publication of proposed and final rules. Consensus is widespread among health policy experts regarding the growing and unrelenting nature of the Medicare agency's work. The Balanced Budget Act of 1997 (BBA) alone had a substantial impact on HCFA's workload, requiring, among other things, that the agency develop within a short time frame new payment methods for different post-acute and ambulatory services. It also required HCFA to preside over an expanded managed care component that entailed coordinating a never-before-run information campaign for millions of beneficiaries across the nation and developing methods to adjust plan payments based partially on enrollees' health status. The future is likely to hold new statutory responsibilities for CMS. For example, some reform proposals call for expanding Medicare's benefit package to include a prescription drug benefit. As we have previously reported, the addition of a drug benefit would entail numerous implementation challenges, including the potential for the annual claims processing workload to double to about 1.8 billion a year. Tasked with administering this highly complex program, HCFA has earned mixed reviews in managing Medicare.
On one hand, the agency presided over a program that is very popular with beneficiaries and the general public. It implemented payment methods that have helped constrain program cost growth and ensured that claims were paid quickly at little administrative cost. On the other hand, HCFA had difficulty making needed refinements to payment methods. It also fell short in its efforts to ensure accurate claims payments and oversee its Medicare claims administration contractors. In recent years, HCFA took steps to achieve greater success in these areas. However, the agency now faces criticism from the provider community for, in the providers’ view, a program that is unduly complex and has burdensome requirements. HCFA was successful in developing payment methods that have helped contain Medicare cost growth. Generally, over the last 2 decades, the Congress required HCFA to move Medicare away from reimbursing providers based on their costs or charges for every service provided and to use payment methods that seek to control spending by rewarding provider efficiency and discouraging excessive service use. Payment development efforts have been largely successful, but making needed refinements to payment methods remains a challenge. For example, Medicare’s hospital inpatient prospective payment system (PPS), developed in the 1980s, is a method that pays providers fixed, predetermined amounts that vary according to patient need. This PPS succeeded in slowing the growth of Medicare’s inpatient hospital expenditures. Medicare’s fee schedule for physicians, phased in during the 1990s, redistributed payments for services based on the relative resources used by physicians to provide different types of care and has been adopted by many private insurers. 
More recently, as required by the BBA, HCFA worked to develop separate prospective payment methods for post-acute care services—services provided by skilled nursing facilities, home health agencies, and inpatient rehabilitation facilities—and for hospital outpatient departments. Prospective payment methods can help constrain the overall growth of Medicare payments. But as new payment systems affected provider revenues, HCFA often received criticism about the appropriateness and fairness of its payment rates. HCFA had mixed success in marshaling the evidence to assess the validity of these criticisms and in making appropriate refinements to these payment methods to ensure that Medicare was paying appropriately and adequately. HCFA also had success in paying most claims within mandated time frames and at little administrative cost to the taxpayer. Medicare contractors process over 90 percent of the claims electronically and pay “clean” claims on average within 17 days after receipt. In contrast, commercial insurers generally take longer to pay provider claims. Under its tight administrative budget, HCFA kept processing costs to roughly $1 to $2 per claim—as compared to the $6 to $10 or more per claim for private insurers, or the $7.50 per claim paid by TRICARE—the Department of Defense’s managed health care program. Costs for processing Medicare claims, however, while significantly lower than other payers, are not a straightforward indicator of success. We and others have reported that HCFA’s administrative budget was too low to adequately safeguard the program. Estimates by the HHS Inspector General of payments made in error amounted to $11.9 billion in fiscal year 2000, which, in effect, raises the net cost per claim considerably. At the same time, HCFA estimated that, in fiscal year 2000, program safeguard expenditures saved the Medicare program more than $16 for each dollar spent. 
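The arithmetic behind this point can be sketched as follows. This is our illustrative back-of-the-envelope calculation, not a figure from the testimony; it uses the claims volume, per-claim processing cost range, and Inspector General error estimate cited above, and takes the midpoint of the $1 to $2 processing cost range as an assumption:

```python
# Illustrative calculation of how improper payments change Medicare's
# effective cost per claim. All inputs are figures cited in the
# testimony; the $1.50 processing cost is an assumed midpoint.

claims_per_year = 900_000_000        # roughly 900 million claims annually
processing_cost_per_claim = 1.50     # assumed midpoint of the $1-$2 range
improper_payments = 11_900_000_000   # HHS IG estimate, fiscal year 2000

# Spreading the erroneous payments across all claims adds roughly
# $13 to the effective cost of each claim processed.
error_cost_per_claim = improper_payments / claims_per_year
effective_cost_per_claim = processing_cost_per_claim + error_cost_per_claim

print(f"Error cost per claim:     ${error_cost_per_claim:.2f}")
print(f"Effective cost per claim: ${effective_cost_per_claim:.2f}")
```

Viewed this way, the roughly $13 per claim in erroneous payments dwarfs the $1 to $2 spent on processing, which is the sense in which low administrative cost is not a straightforward indicator of success.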
Taken together, these findings indicate that increasing the investment in CMS’ administrative functions is a cost that can ultimately save program dollars. However, HCFA’s payment safeguard activities have raised concerns among providers about the clarity of billing rules and the efforts providers must make to remain in compliance. To fulfill the program’s stewardship responsibilities, claims administration contractors conduct medical reviews of claims and audits of providers whose previous billings have been questionable. These targeted reviews have been a cost-effective approach in identifying overpayments. Providers whose claims are in dispute, however, have complained about the burden of reviews and audits and about the fairness of some specific steps the contractors follow. Their concerns about fairness may also emanate from the actions of other agencies involved in overseeing health care—such as the HHS Office of Inspector General and the Department of Justice—which, in the last several years, have become more aggressive in pursuing health care fraud and abuse. CMS faces a difficult task in finding an appropriate balance between ensuring that Medicare pays only for services allowed by law and making it as simple as possible for providers to treat Medicare beneficiaries and bill the program. While an intensive claims review is undoubtedly vexing for the provider involved, very few providers actually undergo such reviews. In fiscal year 2000, Medicare contractors conducted complex medical claims reviews of only 3/10 of 1 percent of physicians—1,891 out of a total of more than 600,000 physicians who billed Medicare that year. We are currently reviewing several aspects of the contractors’ auditing and review procedures for physician claims to assess how they might be improved to better serve the program and providers. Congressional concern has recently heightened regarding the regulatory requirements that practitioners serving Medicare beneficiaries must meet. 
Of the several studies we have under way to examine the regulatory environment in which Medicare providers operate, one study, conducted at the request of this Committee, examines ways in which explanations of Medicare rules and other provider communications could be improved. The preliminary results of our review of several information sources from selected carriers—the contractors that process physicians’ claims—indicate a disappointing performance record. In particular:

Bulletins. Contractor bulletins, which are newsletters from carriers to physicians outlining changes in national and local Medicare policy, are viewed as the primary source of communication between the agency and providers. However, providers have complained that the information in these bulletins is often difficult to interpret, incomplete, and untimely. We reviewed the bulletins issued since February 2001 by nine carriers to determine, among other things, whether they included notices about four new billing procedures that were going into effect in early July 2001. The bulletins of five carriers either did not contain notices about the billing procedures until after the procedures had gone into effect or had not published this information as of mid-July. We also found that many of the bulletins contained lengthy discussions with significant technical and legalistic language.

Telephone call centers. Call centers are intended to serve as another important information source for providers on a variety of matters, including clarification of Medicare’s billing rules. Contractors maintain these call centers to respond to the roughly 80,000 provider inquiries made each day. We placed about 60 calls to 5 carrier call centers to obtain answers to common questions (those found on the “Frequently Asked Questions” Web pages at various carriers’ Web sites). 
For 85 percent of the calls placed, the answers that call center representatives provided were either incomplete (53 percent) or inaccurate (32 percent).

Web sites. A third source of information for Medicare providers is the Internet. The agency imposes minimum requirements on carriers to maintain Web sites. Of 10 carrier Web sites we examined, 8 did not meet all of the Web site requirements, which include, among others, the inclusion of a frequently-asked-questions Web page and the capability for providers to send e-mail inquiries to customer service. These 8 also lacked the required links to both the CMS and Medicare Web sites. Many lacked user-friendly features: 7 did not have “site maps,” which list the Web site’s contents, and although 6 sites had search functions, only 4 worked as intended. Five sites contained outdated information.

Although these results cannot be generalized to all carriers, the carriers we reviewed serve tens of thousands of physicians and the results are consistent with some of the concerns recently expressed by physicians in the Medical Group Management Association. Our study, to be issued this fall, seeks to identify the actions CMS can take to ensure that carriers improve the consistency and accuracy of their communications with providers; it will also assess the adequacy of carriers’ budgets to conduct these activities.

CMS faces several limitations in its efforts to manage Medicare effectively. These include divided management focus, limited capacity, lack of a performance-based management approach, and constraints impeding the agency’s ability to hold Medicare contractors accountable. CMS’ management focus is divided across multiple programs and responsibilities. Despite Medicare’s estimated $240-billion price tag and far-reaching public policy significance, there is no official whose sole responsibility it is to run the Medicare program. 
In addition to Medicare, the CMS Administrator and senior management are responsible for oversight of Medicaid and the State Children’s Health Insurance Program. They also are responsible for individual and group insurance plans’ compliance with standards in the Health Insurance Portability and Accountability Act of 1996 in states that have not adopted conforming legislation. Finally, they must oversee compliance with federal quality standards for hospitals, nursing homes, home health agencies, and managed care plans that participate in Medicare and Medicaid, as well as all of the nation’s clinical laboratories. The Administrator is involved in the major decisions relating to all of these activities; therefore, time and attention that would otherwise be spent meeting the demands of the Medicare program are diverted. A restructuring of the agency in July 1997 inadvertently furthered the diffusion of responsibility across organizational units. The intent of the reorganization was to better reflect a beneficiary-centered orientation throughout the agency by dispersing program activities across newly established centers. However, after the reorganization, many stakeholders claimed that they could no longer obtain reliable or timely information. In addition, HCFA’s responsiveness was slowed by the requirement that approval was needed from several people across the agency before a decision was final. The recent change from HCFA to CMS reflects more than a new name. It consolidates major program activities: the Center for Medicare Management will be responsible for the traditional fee-for-service program; the Center for Beneficiary Choices will administer Medicare’s managed care program. We believe that this new structure is consistent with the desire to be more responsive to program stakeholders. As we and others have consistently noted, the agency’s capacity is limited relative to its multiple, complex responsibilities. 
Human capital limitations and inadequate information systems hobble the agency’s ability to carry out the volume of claims administration, payment, and pricing activities demanded of it. Staff shortages—in terms of skills and numbers—beset the agency that runs Medicare. These shortages were brought into sharp focus as HCFA struggled to handle the number and complexity of BBA requirements. When the BBA expanded the health plan options in which Medicare beneficiaries could enroll, HCFA’s staff had little previous experience overseeing these diverse entities, such as preferred provider organizations, private fee-for-service plans, and medical savings accounts. Few staff had experience in dealing with the existing managed care option—health maintenance organizations. Half of HCFA’s regional offices lacked managed care staff with clinical backgrounds—important in assessing quality of care issues—and few managed care staff had training or experience in data analysis—key to assessing plan performance against local and national norms and monitoring trends in plan performance over time. At the same time, CMS faces the potential loss of a significant number of staff with valuable institutional knowledge. In February 2000, the HCFA Administrator testified that more than a third of the agency’s current workforce was eligible to retire within the next 5 years and that HCFA was seeking to increase “its ability to hire the right skill mix for its mission.” As we and others have reported, too great a mismatch between the agency’s administrative capacity and its designated mandate could have left HCFA, and now CMS, unprepared to handle Medicare’s future population growth and medical technology advances. To assess its needs systematically, CMS is conducting a four-phase workforce planning process that includes identifying current and future expertise and skills needed to carry out the agency’s mission. 
HCFA initiated this process using outside assistance to develop a comprehensive database documenting the agency’s employee positions, skills, and functions. Once its future workforce needs are identified, CMS faces the challenge of attracting highly qualified employees with specialized skills. Due to the rapid rate of change in the health care system and CMS’ expanding mission, the agency’s existing staff may not possess the needed expertise. Another constraint on agency effectiveness has been inadequate information systems for running the Medicare program. Ideally, program managers should be able to rely on their information systems to monitor performance, develop policies for improvement, and track the effects of newly implemented policies. In reality, most of the information technology HCFA relied on was too outdated to routinely produce such management information. As a result, HCFA could not easily query its information systems to obtain prompt answers to basic management questions. Using its current systems, CMS is not in a position to report promptly to the Congress on the effects of new payment methods on beneficiaries’ access to services and on the adequacy of payments to providers. It cannot expeditiously determine the status of debt owed the program due to uncollected overpayments. To encourage a greater focus on results and improve federal management, the Congress enacted the Government Performance and Results Act of 1993 (GPRA)—a results-oriented framework that encourages improved decision-making, maximum performance, and strengthened accountability. Managing for results is fundamental to an agency’s ability to set meaningful goals for performance, to measure performance against those goals, and to hold managers accountable for their results. As late as January 1998, we reported that HCFA lacked an approach consistent with GPRA to develop a strategic plan for its full range of program objectives. 
Since then, the agency developed a plan, but it did not tie global objectives to management performance. Last month, we reported on the results of our survey of federal managers at 28 departments and agencies on strategic management issues. The proportion of HCFA managers who reported having output, efficiency, customer service, quality, and outcome measures was significantly below that of other government managers for each of the performance measures. HCFA was the lowest-ranking agency for each measure—except for customer service, in which it ranked second from the lowest. In addition, the percentage of HCFA managers who responded that they were held accountable for results to a great or very great extent—42 percent—was significantly lower than the 63 percent reported by the rest of the government. Constraints on the agency’s flexibility to contract for claims administration services have also frustrated efforts to manage Medicare effectively. Under these constraints, the agency is at a disadvantage in selecting the best performers to carry out Medicare’s claims administration and customer service functions. At Medicare’s inception in the mid-1960s, the Congress provided for the government to use existing health insurers to process and pay physicians’ claims and permitted professional associations of hospitals and certain other institutional providers to “nominate” their claims administration contractors on behalf of their members. At that time, the American Hospital Association nominated the national Blue Cross Association to serve as its fiscal intermediary. Currently, the Association is one of Medicare’s five intermediaries and serves as a prime contractor for member plans that process over 85 percent of all benefits paid by fiscal intermediaries. Under the prime contract, when one of the local Blue plans declined to renew its Medicare contract, the Association—rather than HCFA—chose the replacement contractor. 
This process effectively limited HCFA’s flexibility to choose the contractors it considered most effective. HCFA also considered itself constrained from contracting with non-health insurers for the various functions involved in claims administration because it did not have clear statutory authority to do so. As noted, the Congress gave HCFA specific authority to contract separately for payment safeguard activities, but for a number of years the agency has sought more general authority for “functional contracting,” that is, using separate contractors to perform functions such as printing and mailing and answering beneficiary inquiries that might be handled more economically and efficiently under one or a few contracts. HCFA sought other Medicare contracting reforms, such as express authority for the agency to pay Medicare contractors on an other-than-cost basis, to provide incentives that would encourage better performance. Although the health care industry has grown and transformed significantly since Medicare’s inception, neither the program nor the agency that runs it has kept pace. Nevertheless, CMS is expected to make Medicare a prudent purchaser of services using private sector techniques and improve its customer relations. Private insurance has evolved over the last 40 years and employs management techniques designed to improve the quality and efficiency of services purchased. In a recent study, an expert panel convened by the National Academy of Social Insurance (NASI) suggested that Medicare test private insurers’ practices designed to improve the quality and efficiency of care and determine whether these practices could be adapted for Medicare. Private insurers have taken steps to influence utilization and patterns of service delivery through efforts such as beneficiary education, preferred provider networks, and coordination of services. They are able to undertake these efforts, in part, because they have wide latitude in how they run their businesses. 
In contrast, federal statutory requirements and the basic obligation to be publicly accountable have hampered agency efforts to incorporate private sector innovations. Medicare’s effort to encourage the use of preferred providers is a case in point. The Medicare statute generally allows any qualified provider to participate in the program. This is significant in light of HCFA’s experiment related to coronary artery bypass graft surgery, in which certain hospitals—identified as those with the best outcomes for these surgeries—were designated to receive bundled payments covering both hospital and physician services for these expensive procedures. The experiment cut program costs by 10 percent for the 10,000 coronary artery bypass surgeries performed and saved money for beneficiaries through reduced coinsurance payments. HCFA began a similar experiment at selected acute-care hospitals, which involves bundling payments for hospital, physician, and other health care professionals’ services provided during a beneficiary’s hospital stay for selected cardiovascular and orthopedic procedures. However, more wide-scale Medicare implementation of such hospital and physician partnership arrangements may be difficult. Providers have raised concerns about government promotion of certain providers at the expense of others, thus creating a barrier to this and other types of preferred provider arrangements. Efforts to facilitate disease management provide another example of the potential limitations of adapting private sector management strategies to Medicare. HCFA was able to implement broad-based education efforts to encourage the use of Medicare-covered preventive services, but the agency could be deterred in approaches targeting individual beneficiaries most likely to need the help. 
For example, the agency has overseen the dissemination of more than 23,000 posters with tear-off sheets that beneficiaries can hand to physicians to facilitate discussions of colon cancer screening that otherwise might be avoided because of unfamiliar terms and sensitive issues. It has also been involved in a multifaceted effort to increase flu vaccinations and mammography use. However, the agency may be less able to undertake the more targeted approaches of some private insurers, such as mailing reminders to identified enrollees about the need to obtain a certain service. Because targeting information would require using personal medical information from claims data, CMS could encounter opposition from those who would perceive such identification to be government intrusion. Providers might also object to a government insurance program advocating certain medical services for their patients. In its study, NASI concluded that these and other innovations could have potential value for Medicare but would need to be tested to determine their effects as well as how they might be adapted to reflect the uniqueness of Medicare as both a public program and the largest single purchaser of health care. In addition, CMS would likely need new statutory authority to broadly implement many of the innovations identified in the NASI study. Congressional concern has heightened recently regarding the regulatory burden on the practitioners that serve Medicare beneficiaries. In his testimony before the Senate Committee on Finance, the Secretary of HHS emphasized the importance of communication between CMS and providers, stating, “When physicians call us…we need to respond quickly, thoroughly and accurately.” Under the spotlight held by both the Congress and the Administration, CMS is expected to improve its customer service to the provider community. Concern about regulatory burden is not limited to providers in Medicare’s traditional fee-for-service program. 
Policymakers are also concerned about the regulatory burden on health plans that participate in the Medicare+Choice program. During each of the last 3 years, substantial numbers of health plans reduced the geographic areas they served or terminated their Medicare participation altogether. Cumulatively, these withdrawals affected more than 1.6 million beneficiaries who either had to return to the fee-for-service program or switch to a different health plan. Industry representatives have attributed the withdrawals, in part, to Medicare+Choice requirements that they characterize as overly burdensome. HCFA took steps to address plans’ regulatory concerns by modifying some requirements or delaying their implementation. It also launched an initiative designed to help the agency better understand plans’ concerns, assess them, and recommend appropriate regulatory changes. At the request of the House Ways and Means Subcommittee on Health, we are evaluating Medicare+Choice requirements. Our study will compare Medicare+Choice requirements with the requirements of private accrediting organizations and those of the Office of Personnel Management for plans that participate in the Federal Employees Health Benefits Program. The study’s objective is to document differences in these sets of requirements and determine whether these differences are necessary because of the unique nature of the Medicare program and the individuals it serves. CMS is also expected to improve communications with beneficiaries, particularly as the information pertains to Medicare+Choice health plan options. The agency has made significant progress in this regard but continues to face challenges in meeting the sometimes divergent needs of plans and beneficiaries. As required by the BBA, HCFA began a new National Medicare Education Program (NMEP). For 3 years the agency has worked to educate beneficiaries and improve their access to Medicare information. 
It added summary health plan information to the Medicare handbook and increased the frequency of its distribution from every few years to each year. It also established a telephone help line and an Internet Web site with comparative information on health plans, Medigap policies, and nursing homes and sponsored local education programs. Beginning this fall, it will become more important for beneficiaries to be aware that Medicare+Choice health plan alternatives to the traditional fee-for-service program may be available in their area and to understand each option and its implications. As required by the BBA, Medicare will now have an annual open enrollment period each November when beneficiaries must select either the fee-for-service program or a specific Medicare+Choice plan for the following calendar year. Beneficiaries will have strictly limited opportunities for changing their selection outside of the open enrollment period, a provision known as “lock-in.” CMS recently announced that it would fund a $35 million advertising campaign this fall to help beneficiaries learn about Medicare’s new features—such as the proposed discount prescription drug card program, coverage for preventive services and medical screening examinations, and the annual enrollment and lock-in provisions—and provide general information about Medicare+Choice plans and the availability of Medicare’s Web site and telephone help line. The agency will also extend the operating hours of the help line and add an interactive feature to the Web site designed to help beneficiaries select the Medicare option that best fits their preferences. CMS has made other decisions about the fall information campaign that illustrate the sometimes difficult trade-off between accommodating plans and serving beneficiaries. To encourage health plan participation in the Medicare+Choice program, CMS has allowed plans additional time to prepare their 2002 benefit proposals. 
This extension will hamper the ability of CMS and health plans to disseminate information before the BBA-established November open enrollment period. CMS will not, for example, include any information about specific health plans in the annual handbook mailed to Medicare households. To reduce the potentially adverse effects of an abbreviated fall information campaign, the agency will allow health plans to distribute marketing materials with proposed benefit package information marked “pending Federal approval.” CMS will also extend the open enrollment period through the end of December. Medicare is a popular program that millions of Americans depend on to cover their essential health needs. However, the management of the program is not always responsive to beneficiary, provider, and taxpayer expectations. CMS, while making improvements in certain areas, may not be able to meet these expectations effectively without further congressional attention to the agency’s multiple missions, limited capacity, and constraints on program flexibility. The agency will also need to do its part by implementing a performance-based management approach that holds managers accountable for accomplishing program goals. These efforts will be critical in preparing the agency to meet the management challenges of administering a growing program and implementing future Medicare reforms. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or other Committee Members may have. For more information regarding this testimony, please contact me at (202) 512-7114, Leslie G. Aronovitz at (312) 220-7600, or Laura Dummit at (202) 512-7119. Under the direction of James Cosgrove and Geraldine Redican-Bigott, contributors to this statement were Susan T. Anthony, Carolyn Manuel-Barkin, Hannah Fein, William Hadley, Don Kittler, Christie Turner, and Margaret Weber.

Medicare Management: Current and Future Challenges (GAO-01-878T, June 19, 2001). 
Medicare Reform: Modernization Requires Comprehensive Program View (GAO-01-862T, June 14, 2001).
Managing for Results: Federal Managers’ Views on Key Management Issues Vary Widely Across Agencies (GAO-01-592, May 25, 2001).
Medicare: Opportunities and Challenges in Contracting for Program Safeguards (GAO-01-616, May 18, 2001).
Medicare Fraud and Abuse: DOJ Has Improved Oversight of False Claims Act Guidance (GAO-01-506, Mar. 30, 2001).
Medicare: Higher Expected Spending and Call for New Benefit Underscore Need for Meaningful Reform (GAO-01-539T, Mar. 22, 2001).
Major Management Challenges and Program Risks: Department of Health and Human Services (GAO-01-247, Jan. 2001).
High Risk: An Update (GAO-01-263, Jan. 2001).
Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives (GAO/HEHS-00-197, Sept. 28, 2000).
Medicare: Refinements Should Continue to Improve Appropriateness of Provider Payments (GAO/T-HEHS-00-160, July 19, 2000).
Medicare: 21st Century Challenges Prompt Fresh Thinking About Program’s Administrative Structure (GAO/T-HEHS-00-108, May 4, 2000).
Medicare Contractors: Further Improvement Needed in Headquarters and Regional Office Oversight (GAO/HEHS-00-46, Mar. 23, 2000).
Medicare Contractors: Despite Its Efforts, HCFA Cannot Ensure Their Effectiveness or Integrity (GAO/HEHS-99-115, July 14, 1999).
Management of Medicare has come under increasing scrutiny. The Health Care Financing Administration (HCFA) has had mixed success running the program. The agency has developed payment methods that have contained cost growth, and HCFA has paid fee-for-service claims quickly and at low administrative cost. However, HCFA has had difficulty ensuring that it paid claims appropriately. In addition, Medicare claims administration contractors have done a poor job of communicating with Medicare providers. HCFA has taken important steps to address some of these shortcomings, including strengthening payment safeguards, but several factors have hampered its efforts. Despite its growing responsibilities, HCFA suffers from staffing shortages. The agency also continues to rely on archaic computer systems. At the same time, HCFA has faltered in its attempts to adopt a results-based approach to agency management. Constraints on the agency's contracting authority have limited its use of full and open competition to select claims administration contractors and assign administrative tasks. Rising expectations among Medicare beneficiaries and providers are putting pressure on the Centers for Medicare and Medicaid Services to modernize and improve agency operations. Such improvements will require the agency to adopt a performance-based management approach that holds managers accountable for achieving program goals. Congressional attention also appears warranted if Medicare is to meet the challenges of the 21st century.
In 2003, HSPD-7 established a national policy for critical infrastructure and key resources. HSPD-7 designated DHS as the agency responsible for coordinating the nation’s efforts to protect critical infrastructure. The Office of Infrastructure Protection (IP) within DHS fulfills the functions associated with managing and coordinating the national protection efforts. In June 2006, DHS issued the first NIPP as required by HSPD-7. The NIPP provides a risk management framework and sector partnership model for developing, implementing, and maintaining a coordinated national effort to manage the risks to critical infrastructure. FPS is the lead agency for the government facilities sector (the sector), and assumes multiple roles and responsibilities for the sector, which is comprised of a wide variety of facilities and assets owned or leased by federal, state, local, tribal, or territorial governments, located both domestically and overseas. Under the NIPP risk management framework, FPS is responsible for leading and coordinating six major elements sector-wide to identify, prioritize, and measure progress towards protecting critical infrastructure. See figure 1. 
Additionally, the NIPP sector partnership model calls on FPS to form and chair a government coordinating council comprised of representatives from different levels of government to share activities, policy, and communications. FPS also participates in or interacts with the following cross-sector councils, which facilitate relationships within and among the 18 sectors: the State, Local, Tribal, and Territorial Government Coordinating Council (SLTTGCC), which coordinates with non-federal government organizations across all 18 sectors; the Regional Consortium Coordinating Council, which represents a variety of distinct collaborative efforts between state, local, and private sector partners focused on critical infrastructure found in multistate regions or within a given city; the NIPP Federal Senior Leadership Council, a DHS-chaired council that consists of federal department and agency representatives from lead agencies named in HSPD-7; and the Critical Infrastructure Partnership Advisory Council, a partnership between government and private sector owners and operators of critical infrastructure to effectively coordinate federal protective programs. The NIPP requires each lead agency to develop and revise a sector-specific plan that addresses critical infrastructure protection. FPS has responsibility for updating the plan to adequately represent the sector and involve Council members. Every 4 years, FPS must: identify gaps between the plan and guidance from IP, policy changes, and best practices; identify and develop a consolidated list of actions required to close gaps; obtain and incorporate input from sector partners and the Council in revising the plan; and obtain final approval from IP and release the plan to sector partners and the Council. FPS and DHS issued the first sector-specific plan in 2007 and an update to the plan in 2010, in which they identified goals and objectives for the sector, shown in table 1. 
HSPD-7 also requires FPS, as the lead agency, to provide the Secretary of Homeland Security with annual reports that assess progress and effectively prioritize sector-specific activities and gaps, among other things. This process involves consulting the Council, similar to the 2010 update to the plan. FPS’s role as the lead agency for the sector is an additional duty beyond its traditional role of protecting over 9,000 owned or leased facilities under the custody and control of GSA. As part of its mission, FPS conducts risk assessments, recommends countermeasures, and performs law enforcement activities, such as incident response. FPS’s activities are funded by security fees collected from tenant federal agencies. As such, FPS charges each tenant agency a basic security fee per square foot of space occupied in a GSA facility, among other fees. The Interagency Security Committee (ISC), which was established in 1995, develops policies and standards and facilitates security-related information exchanges. While domestic non-military federal facilities—whether federally owned, leased, or managed—are required to adhere to the ISC standards, these standards do not apply to state, local, tribal, and territorial government facilities. ISC membership consists of over 100 senior executives from 51 federal agencies and departments, including FPS. DHS is responsible for chairing the ISC and is authorized to monitor federal agencies’ adherence to ISC standards. FPS’s leadership has not resulted in implementation of a risk management approach for the sector, as called for under the NIPP framework. Specifically, a lack of facilities data, risk assessments, and effective metrics and performance data undermines the implementation of a risk management approach. Under FPS’s leadership, effective partnerships have also not developed. FPS faces challenges in leading the sector, linked to the sector’s size and diversity and to FPS’s fee-based revenue structure. 
These challenges are compounded by the lack of an action plan. Lack of facilities data: Asset identification is a crucial element for risk management as outlined by both the NIPP framework and the 2010 plan. According to the 2010 plan, the sector’s assets and systems must be identified to determine which of these, if damaged, would result in significant consequences for national economic security, public health or safety, morale, or governance. The 2010 plan also states that identifying and obtaining appropriate data on government facilities located domestically and overseas is a sector objective. However, FPS officials said that they have not identified or obtained data on federal, state, local, tribal and territorial government facilities for the sector. According to FPS officials, developing sector-wide data may be untenable and unwarranted, because most federal, state, local, tribal, and territorial government facilities do not meet the threshold established by IP for the most critical infrastructure and government facilities generally remain the same year after year. Yet, the 2010 plan states that several circumstances may require frequent updates to data on government facilities, including changes in threat levels, large-scale facility renovations, or the identification of a facility as supporting a nationally critical function or critical asset. Moreover, the 2011 annual report states that functions carried out in one government facility often directly support the functions under way in many other government facilities. Thus, an incident at one facility could have cascading impact across a range of functions essential to governance. Without appropriate data on government facilities, FPS has limited awareness of the potentially evolving universe of government facilities as well as the interdependencies that may exist in the sector. 
As a result, FPS may be overlooking facilities whose failure or degradation could pose significant harm to the nation’s security, health, economy, or morale. While FPS officials said that they have neither identified nor obtained data on the sector, FPS has contributed to the development of a database maintained by IP, the IP Gateway / Infrastructure Information Collection System. IP uses this database to identify critical infrastructure assets and systems. According to FPS officials, they periodically review and cross reference the information contained within the database against the dataset that FPS uses as part of its role of protecting federal facilities. However, FPS’s data do not encompass the full spectrum of sector facilities, in particular non-federal facilities. In addition, we have previously identified problems with FPS’s data, such as a lack of data on building jurisdictional authorities. Consequently, FPS’s efforts to corroborate the data contained within the IP Gateway / Infrastructure Information Collection System are undermined by the limited scope and quality of its data. To the extent that the IP Gateway / Infrastructure Information Collection System is used to prioritize critical infrastructure, this effort may also be detrimentally affected by weaknesses in FPS’s data. No sector-wide risk assessments: FPS is not currently positioned to assess risk across the sector. Assessing risks and coordinating risk assessment programs are another key element of the NIPP framework and a sector objective. The plan and annual reports provide information about the principles of threat, vulnerability, and consequence as well as discuss different types of risks and threats faced by government facilities, but no standardized tool for performing risk assessments exists at the federal level, much less the state, local, tribal, and territorial levels. 
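To illustrate what a standardized sector-wide assessment tool might compute, consider the NIPP's framing of risk as a function of consequence, vulnerability, and threat; one common simplification multiplies relative scores for each factor. The sketch below is purely hypothetical (the facility names, scales, and multiplicative formula are illustrative assumptions, not FPS's or DHS's actual methodology):

```python
# Hypothetical illustration of NIPP-style relative risk scoring.
# Risk is often modeled as a function of consequence (C),
# vulnerability (V), and threat (T); here R = C * V * T on
# 0-10 factor scales. This is NOT FPS's actual methodology.

def risk_score(consequence, vulnerability, threat):
    """Return a relative risk score (0-1000) from 0-10 factor scores."""
    for factor in (consequence, vulnerability, threat):
        if not 0 <= factor <= 10:
            raise ValueError("factor scores must be between 0 and 10")
    return consequence * vulnerability * threat

# A sector-wide assessment would score each facility consistently,
# then rank facilities so protective resources can be prioritized.
# Facility names and scores below are invented for illustration.
facilities = {
    "federal courthouse":    (9, 4, 7),
    "county records office": (3, 6, 2),
    "state data center":     (8, 5, 5),
}

ranked = sorted(facilities.items(),
                key=lambda item: risk_score(*item[1]),
                reverse=True)
for name, factors in ranked:
    print(f"{name}: {risk_score(*factors)}")
```

A uniform scoring rule of this kind is what allows assessments performed by different federal, state, local, tribal, and territorial partners to be compared and prioritized on one list; without a standardized tool, each partner's scores are not comparable.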
FPS promoted its Risk Assessment and Management Program (RAMP) as a risk assessment tool in the 2010 plan and sector annual reports, as well as in past Council meetings. However, the scope of RAMP was not originally intended to address non-federal facilities and has never become fully operational. Therefore, its usefulness as a sector-wide risk assessment tool is not clear. In fact, RAMP has been terminated according to a senior FPS official, and FPS is working on developing a replacement. According to this official, a new risk assessment tool and methodology will be released for use by sector partners at a future, unspecified date. FPS officials acknowledged the absence of a sector- wide risk assessment. Without this, FPS cannot prioritize facilities or implement protective programs, both activities predicated on effective risk assessment. No effective metrics and performance data: FPS has not established effective metrics and performance data, which hampers its ability to monitor the sector’s progress toward the sector goal of implementing a long-term government facility risk management program as described in the 2010 plan. An effective metric is one that can adequately indicate progress toward a goal and that is objective, measureable, and quantifiable. Data to track metrics need to be sufficiently timely, complete, accurate, and consistent. Further, DHS has established guidance on metrics to assess improvements in the protection and resiliency of critical infrastructure, which lead agencies can use to guide these efforts in their respective sectors. We have reported that without effective performance data, decision makers may not have sufficient information to evaluate whether investments have improved security and reduced a facility’s vulnerability, or to determine funding priorities within and across agencies. In 2000, FPS transitioned all alarm-monitoring and dispatching capabilities from several regional control centers to four MegaCenters. 
Currently, each MegaCenter monitors multiple types of alarm systems, closed circuit television, and wireless dispatch communications within federal facilities throughout the nation. These centers—located in Michigan, Colorado, Pennsylvania, and Maryland—are equipped with state-of-the-art communication systems and operate continuously. However, FPS has not established performance data that are adequate in terms of timeliness, completeness, accuracy, or consistency. Until it establishes quantifiable metrics and performance data, FPS will be unable to gauge progress toward implementing a risk management approach, specifically, and the protection or resiliency of critical government facilities, overall. To effectively implement the NIPP and achieve the goals of the sector, partnerships are essential. As previously discussed, the NIPP sector partnership model integrates partners into the planning and operational activities for a collaborative approach to the protection of critical infrastructure. Likewise, the 2010 plan places a significant emphasis on the role of partnerships. However, of the 16 Council members we contacted, 13 indicated that they had little or no involvement in developing the sector plan and annual reports, and for at least 8 agencies these documents were of negligible value. To offset the low Council member response, FPS officials reported relying on open source information (e.g., the annual federal budget) to develop the annual report. Relying primarily on open source information does not fully or effectively leverage the knowledge and experience of Council members, potentially undermining the value of the plan as a means to promote collaboration in critical infrastructure protection. Consequently, this key coordination goal of the 2010 plan has not been met, and as a result, FPS is limited in its ability, as lead agency for the sector, to productively contribute to the larger DHS effort to prioritize and safeguard the nation’s most critical infrastructure. 
FPS’s role as lead agency for the government facilities sector is particularly critical because, according to the 2011 annual report, government facilities have been the most frequently attacked sector since 1968 and the sector involves a very dynamic threat environment. FPS’s compilation of reports that hold little value for sector partners leaves FPS and its sector partners less able to engage in a comprehensive risk management framework that addresses this threat environment. Furthermore, while FPS chairs the Council, the principal mechanism for engaging partners, FPS has not involved the full spectrum of sector partners. FPS officials said that they use an informal process to manage the Council membership and have repeatedly reported that they actively seek to add members to expand state and local representation. Of the Council members identified by FPS, 21 of the 26 are federal agencies, 3 are state or local agencies, and 2 are non-governmental organizations. Officials from all 5 state and local government and non-governmental organizations told us that they were either unaware of, or did not consider themselves to be, members of the Council. Furthermore, the Council currently has no representation from tribal and territorial governments. Having active representation from state, local, tribal, and territorial governments on the Council would be particularly helpful, given that FPS’s interaction with the cross-sector councils that represent these perspectives has been limited or non-existent. As previously discussed, the SLTTGCC provides all 18 sectors a mechanism to coordinate with non-federal government organizations. According to the 2010 plan, the SLTTGCC had liaisons who were fully integrated into the Council. However, both SLTTGCC officials and FPS officials indicated that there has been limited interaction. During our review, FPS reached out to the SLTTGCC to discuss opportunities to increase partnering activities. 
FPS officials reported having never worked with the Regional Consortium Coordinating Council, which includes state and local government representatives. With limited representation on the Council and little or no interaction with certain cross-sector councils, the sector is missing opportunities to engage and integrate the experience, knowledge, and priorities of state, local, tribal, and territorial partners into the plan to help ensure buy-in for protecting critical infrastructure across all levels of government. Moreover, the Council has become progressively less active over the years. According to the 2011 annual report, the lead agency convenes Council meetings quarterly and communicates information about threats, incidents, and effective protection-related practices to sector partners. However, Council members indicated that the frequency of meetings has steadily declined over the years. In 2011, FPS held only one meeting, in January; its next meeting was held in May 2012. No working groups or other activities occurred in the interim. A total of four non-DHS Council members attended the 2011 meeting. FPS’s May 2012 Council meeting may have reflected increased interest, with 14 agencies other than DHS in attendance. However, only one attendee represented state, local, tribal, or territorial governments; all other attendees were from federal agencies. FPS officials acknowledged that participation of Council members has been decreasing every year. Most Council members representing federal agencies said that interaction with the sector had not been helpful since their agencies actively participate in the ISC, which provides the guidance their agencies need to meet federal physical security standards. Nevertheless, some Council members said that the sector was valuable as a resource for coordinating security activities and potentially developing a uniform risk assessment tool. 
Since the sector covers a larger and broader set of government facilities than the ISC—such as military, state, local, tribal, and territorial facilities—the potential benefits of collaboration, as discussed earlier, could lead to a more comprehensive approach to protecting critical government facilities. FPS has identified its limited resources as a significant challenge to leading a sector as large and diverse as the government facilities sector. The 2010 plan states that the sector includes more than 900,000 federal assets, as well as assets from 56 states and territories; more than 3,000 counties; 17,000 local governments; and 564 federally recognized tribal nations. In addition, these facilities represent a wide variety of uses, both domestically and overseas, ranging from office buildings and courthouses to storage facilities and correctional facilities. FPS officials indicated that they have very limited staffing and no dedicated line of funding for activities related to leading the sector, and it was unclear if FPS’s security fees could be used to cover the costs of serving as the lead agency for the sector. Because of limited resources, FPS officials said that they could only meet the NIPP’s minimum reporting requirements and did not engage in other activities that could address the issues discussed earlier. For example, FPS officials said that, because of resource constraints, they abandoned efforts related to strategic communications and marketing, described in the 2010 annual report, that were aimed at increasing awareness and participation across the sector. FPS reported in 2010 that it did not have the capability to plan for any sector-specific agency investments. In 2011, FPS had less than one full-time equivalent employee engaged in sector-specific agency activities, which represents a decline from prior years when FPS had a full-time equivalent employee and several contract employees assisting with its sector responsibilities. 
As discussed above, FPS is funded using a fee-based structure in which it collects funds from federal tenant agencies for security and protective services. We have previously reported that FPS’s involvement in homeland security activities not directly related to facility protection is inconsistent with a requirement in the Homeland Security Act of 2002 that FPS use funding from the fees it collects solely for the protection of federal government buildings and grounds. We recommended to DHS that if FPS continues its involvement in activities not directly related to the protection of federal buildings and grounds, a funding process would be needed that is consistent with the requirement regarding the use of funds from agency rents and fees. Notwithstanding issues related to how its fees may be used, FPS has not fully assessed the resource requirements for serving as the lead sector agency, because it has not completed an action plan or cost estimate for carrying out the 2010 plan. The 2010 plan states that determining the sector’s priorities, program requirements, and funding needs for government facility protection is a sector objective. FPS previously reported it was developing an action plan to guide its implementation of the 2010 plan, but according to FPS officials, they are no longer pursuing this, because identifying steps FPS can and will take is difficult without knowing what funding or resources are available. FPS officials also told us that they originally estimated the cost of serving as the lead agency to be around $1 million, but did not provide us with the analysis to support this estimate. According to DHS officials, HSPD-7 is in the process of being updated to reassess how the NIPP and the sectors are overseeing the protection of critical infrastructure, which may result in the sector being restructured. 
For example, according to DHS, GSA, and Department of the Interior officials, GSA will become a co-lead agency, the monuments and icons sector will be subsumed within the government facilities sector, and an executive committee that includes the ISC may be formed to help advise the sector. Such changes may affect FPS’s workload and resources as the lead agency. An action plan could help FPS and DHS refocus efforts in the sector. We have recommended that agencies leading intergovernmental efforts use an action plan to establish priorities, provide a rationale for resources, and propose strategies for addressing challenges. Such a plan could enable FPS and DHS to manage change by prioritizing the activities required of the sector’s lead agency and identifying those activities that can be feasibly carried out by FPS given its current resource constraints. An action plan may also be useful to FPS for justifying additional resources, which may help address the challenge posed by its fee-based revenue structure. FPS is responsible for leading efforts to identify, prioritize, and protect critical government facilities across all levels of government under the NIPP. The loss of critical government facilities and the people who work in them because of terrorism, natural hazards, or other causes could lead to catastrophic consequences. The lack of facility information, the absence of sector-wide risk assessments, and ineffective metrics and data undermine the implementation of a risk management approach as outlined by the NIPP risk management framework and envisioned in the 2010 plan. In addition, FPS has not effectively employed the NIPP sector partnership model to engage the Council and represent the depth, breadth, and interests of the sector, particularly non-federal partners. 
Consequently, key goals of the 2010 plan have not been met, and FPS is limited in its ability to productively contribute to the larger DHS effort to prioritize and safeguard the nation’s most critical infrastructure. According to DHS officials, structural changes to the sector may already be under way. Yet, FPS and DHS do not have an informed understanding of the priorities and resources needed to fulfill the lead agency responsibilities, and structural changes may affect these priorities and available resources. An action plan could serve as a valuable tool for FPS and DHS to identify, in tandem with any structural changes, priorities that can be feasibly achieved and the associated resource requirements given FPS’s fee-based revenue structure. This may, in turn, help address the overall limited progress made to date in the sector with implementing a risk management approach and developing effective partnerships. To enhance the effectiveness of the government facilities sector, we recommend that the Secretary of DHS direct FPS, in partnership with IP and Council members, to develop and publish an action plan that identifies sector priorities and the resources required to carry out these priorities. With consideration of FPS’s resource constraints, this plan should address FPS’s limited progress with implementing a risk management approach and developing effective partnerships within the sector. The plan should address, at a minimum, steps needed to: 1. develop appropriate data on critical government facilities; 2. develop or coordinate a sector-wide risk assessment; 3. identify effective metrics and performance data to track progress toward the sector’s strategic goals; and 4. increase the participation of and define the roles of non-federal Council members. 
We provided a draft report to DHS, GSA, Department of Education, Department of Health and Human Services, Department of State, National Archives and Records Administration, National Aeronautics and Space Administration, National Institute of Standards and Technology, Department of the Interior, Environmental Protection Agency, and Department of Justice. DHS concurred with our recommendation to develop and publish an action plan for the sector. DHS’s full comments are reprinted in appendix II. The National Archives and Records Administration also agreed with our findings. DHS, GSA, and the National Institute of Standards and Technology provided technical comments, which we considered and incorporated, where appropriate. The other agencies did not provide comments on our draft report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions on this report, please contact me at (202) 512-2834 or GoldsteinM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and key contributors to the report are listed in appendix III. To assess the Federal Protective Service’s (FPS) leadership of the government facilities sector, we reviewed Homeland Security Presidential Directive 7, Department of Homeland Security’s (DHS) National Infrastructure Protection Plan (NIPP), and the 2010 Government Facilities Sector-Specific Plan (the 2010 plan). 
Based on these documents, we identified the implementation of a risk management approach and development of effective partnerships as two key activities of the NIPP and the 2010 plan that lead agencies are responsible for. These activities form the foundation for identifying, prioritizing, and protecting critical infrastructure. We reviewed the outcomes reported in the 2010 and 2011 sector annual reports to determine FPS’s actions, and identified gaps between these actions and the goals and activities in the 2010 plan. We reviewed prior GAO reports and DHS Office of Inspector General reports on critical infrastructure to identify any challenges that FPS faces in leading the implementation of the 2010 plan and key practices on establishing performance metrics and interagency collaboration. In addition, we interviewed FPS officials in Washington, D.C., about the 2010 plan, its sector-related activities as the lead agency, and any challenges to implementing the plan. We interviewed DHS officials from the Office of Infrastructure Protection and Interagency Security Committee about their role as sector partners and their interaction with FPS as the lead agency. We also interviewed members from the sector’s Council about their role and participation in the Council and their interaction with FPS. We selected 16 of the 26 members of the Council based on several criteria, including their level of activity as determined by contributions to the 2010 plan and sector annual reports, or participation in the 2011 Council meeting, and all 5 of the state and local government members, and non-governmental organization members. Among federal members of the Council, we also selected federal agencies that served as the lead agencies for the monuments and icons sector, water sector, commercial facilities sector, and education subsector, and federal executive branch agencies with expertise in law enforcement or physical security applicable to the protection of government facilities. 
We conducted this performance audit from December 2011 to August 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, David Sausville, Assistant Director; Friendly Vang-Johnson; Jennifer DuBord; Delwen Jones; Steven Putansu; and Kathleen Gilhooly made key contributions to this report. Federal Protective Service: Better Data on Facility Jurisdictions Needed to Enhance Collaboration with State and Local Law Enforcement. GAO-12-434. Washington, D.C.: March 27, 2012. Federal Protective Service: Actions Needed to Resolve Delays and Inadequate Oversight Issues with FPS’s Risk Assessment and Management Program. GAO-11-705R. Washington, D.C.: July 15, 2011. Homeland Security: Protecting Federal Facilities Remains a Challenge for the Department of Homeland Security’s Federal Protective Service. GAO-11-813T. Washington, D.C.: July 13, 2011. Budget Issues: Better Fee Design Would Improve Federal Protective Service’s and Federal Agencies’ Planning and Budgeting for Security. GAO-11-492. Washington, D.C.: May 20, 2011. Homeland Security: Ongoing Challenges Impact the Federal Protective Service’s Ability to Protect Federal Facilities. GAO-10-506T. Washington, D.C.: March 16, 2010. Homeland Security: Greater Attention to Key Practices Would Improve the Federal Protective Service’s Approach to Facility Protection. GAO-10-142. Washington, D.C.: October 23, 2009. Homeland Security: Federal Protective Service Has Taken Some Initial Steps to Address Its Challenges, but Vulnerabilities Still Exist. GAO-09-1047T. Washington, D.C.: September 23, 2009. 
Homeland Security: The Federal Protective Service Faces Several Challenges That Hamper Its Ability to Protect Federal Facilities. GAO-08-683. Washington, D.C.: June 11, 2008. Homeland Security: Preliminary Observations on the Federal Protective Service’s Efforts to Protect Federal Property. GAO-08-476T. Washington, D.C.: February 8, 2008. Homeland Security: Guidance and Standards Are Needed for Measuring the Effectiveness of Agencies’ Facility Protection Efforts. GAO-06-612. Washington, D.C.: May 31, 2006. Homeland Security: Further Actions Needed to Coordinate Federal Agencies’ Facility Protection Efforts and Promote Key Practices. GAO-05-49. Washington, D.C.: November 30, 2004. Homeland Security: Transformation Strategy Needed to Address Challenges Facing the Federal Protective Service. GAO-04-537. Washington, D.C.: July 14, 2004. Critical Infrastructure Protection: DHS Has Taken Action Designed to Identify and Address Overlaps and Gaps in Critical Infrastructure Security Activities. GAO-11-537R. Washington, D.C.: May 19, 2011. Critical Infrastructure Protection: DHS Efforts to Assess and Promote Resiliency Are Evolving but Program Management Could Be Strengthened. GAO-10-772. Washington, D.C.: September 23, 2010. Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010. The Department of Homeland Security’s (DHS) Critical Infrastructure Protection Cost-Benefit Report. GAO-09-654R. Washington, D.C.: June 26, 2009. Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007. Critical Infrastructure: Sector Plans Complete and Sector Councils Evolving. GAO-07-1075T. Washington, D.C.: July 12, 2007. Critical Infrastructure Protection: Sector Plans and Sector Councils Continue to Evolve. GAO-07-706R. 
Washington, D.C.: July 10, 2007. Critical Infrastructure: Challenges Remain in Protecting Key Sectors. GAO-07-626T. Washington, D.C.: March 20, 2007. Critical Infrastructure Protection: Progress Coordinating Government and Private Sector Efforts Varies by Sectors’ Characteristics. GAO-07-39. Washington, D.C.: October 16, 2006. Information Sharing: DHS Should Take Steps to Encourage More Widespread Use of Its Program to Protect and Share Critical Infrastructure Information. GAO-06-383. Washington, D.C.: April 17, 2006.
U.S. government facilities have been the target of foreign and domestic terrorists. Government facilities are one of 18 critical infrastructure sectors designated under Homeland Security Presidential Directive 7 (HSPD-7). The Department of Homeland Security (DHS) is responsible for identifying, prioritizing, and coordinating the protection of critical infrastructure that, if destroyed, could have a debilitating impact on governance, the economy, national morale, or public health and safety. DHS defines critical infrastructure sector responsibilities in the National Infrastructure Protection Plan (NIPP) and the Federal Protective Service (FPS) is the lead agency for the government facilities sector. As such, FPS is to develop and implement a government facilities sector-specific plan, which was first issued in 2007 and updated in 2010, in coordination with governmental partners. In this report, GAO assesses FPS’s efforts as the lead agency for the government facilities sector. To do this, GAO reviewed HSPD-7, the NIPP, the 2010 plan and other related documents to compare FPS’s actions and the goals for the sector. GAO also interviewed DHS agency officials and 16 selected sector partners about activities for, and coordination with, the sector. The Federal Protective Service (FPS) has not been effective as the lead agency for the government facilities sector, which includes facilities at the federal, state, local, tribal and territorial level. Under the National Infrastructure Protection Plan (NIPP) and the 2010 sector-specific plan, FPS is responsible for establishing a risk management approach and developing effective partnerships for the sector. However, FPS has not implemented a risk management approach. According to FPS, it has not identified or obtained data on facilities at the federal, state, local, tribal and territorial level, which are fundamental for employing a risk management approach. 
In addition, despite providing information on the principles of threat, vulnerability, and consequence, FPS has not coordinated or assessed risk across government facilities, another key element of risk management. FPS also lacks effective metrics and performance data to track progress toward implementing a risk management approach and toward the overall resilience or protection of government facilities. Consequently, FPS does not have a risk management approach for prioritizing and safeguarding critical government facilities. Furthermore, FPS has not built effective partnerships across different levels of government. While FPS chairs the Government Coordinating Council (the Council)—a mechanism intended to help share activities and policy across different levels of government—the Council’s membership lacks a full spectrum of sector partners, particularly non-federal ones. All five state and local government and non-governmental members of the Council that GAO contacted were unaware of, or did not consider themselves to be part of, the Council. FPS also has not leveraged the State, Local, Tribal and Territorial Government Coordinating Council, an existing mechanism to coordinate with non-federal government organizations, although FPS officials reported recent efforts aimed at enhancing this partnership. As the lead agency for the sector, FPS faces challenges associated with funding and its lack of an action plan. According to FPS officials, FPS has no dedicated line of funding for its activities as the lead agency, and resource constraints hinder FPS’s capacity to lead this large and diverse sector, which comprises more than 900,000 federal assets, as well as assets from 56 states and territories; over 3,000 counties; 17,000 local governments; and 564 federally recognized tribal nations. FPS’s use of fee-based revenue to perform homeland security activities not directly related to federal facility protection is inconsistent with the Homeland Security Act of 2002.
FPS does not have a full understanding of the resource requirements for serving as the lead agency, because it has not completed a cost estimate or an action plan to guide implementation of the 2010 plan. According to DHS officials, HSPD-7 will be updated, which may result in structural changes to the sector that could affect the lead agency’s responsibilities and available resources. An action plan could serve as a valuable tool for FPS and DHS to identify priorities that can be feasibly achieved and the resources required, in tandem with any potential structural changes. GAO recommends that the Secretary of DHS direct FPS, in partnership with others, to develop and publish an action plan that identifies sector priorities and resource requirements, and addresses steps needed to implement a risk management approach and develop effective partnerships. DHS concurred with the recommendation.
Uranium is a hardrock mineral, and most U.S. uranium deposits are located in the western half of the United States, specifically in the states of Arizona, Colorado, New Mexico, Texas, Utah, and Wyoming. In the United States, uranium has been primarily used as a fuel for electric power generation and for nuclear weapons. In 2010, U.S. uranium mines extracted 4.2 million pounds of uranium, 2 percent more than in 2009, according to DOE’s Energy Information Administration (EIA). However, domestic production of uranium is not sufficient to meet domestic demand, and the United States imports over 90 percent of its uranium from countries such as Australia, Canada, and Russia. Uranium is extracted from federal, state, and private land. The material left after the minerals are extracted—waste rock or tailings (a combination of fluid and rock particles)—is then disposed of, often in a nearby pile or tailings pond. As described earlier, reclamation activities can include reshaping and revegetating disturbed areas; measures to control erosion; and measures to isolate, remove, or control toxic materials. While uranium mining operations are similar to other hardrock mining operations in environmental concerns, the wastes produced require additional environmental controls. Of particular concern is the presence of the natural by-products of uranium radioactive decay, most notably radium and the radioactive gas radon, as well as heavy metals, such as arsenic. All of these by-products can pose a serious risk to human health or the environment, especially if they migrate to surface or ground water, or enter the environment after transforming into dust. Uranium is extracted using one of three processes—underground mining, open pit mining, or ISR. Open pit and underground mining are generally considered conventional uranium extraction processes.
In these processes, uranium ore is removed from the ground and is sent to an off-site processing facility, called a mill, where extracted uranium is concentrated into a product called yellowcake (U3O8). The optimum extraction process is determined by the size, grade, depth, and geology of an ore body. Open pit mining is generally used for ore deposits relatively close to the surface, while underground mining is generally used for deeper deposits, as shown in figure 1. Open pit mining generally involves more surface disturbance than underground mining, and the amount of waste rock removed to reach the mineral is greater. From the early 1960s until recently, most uranium has been extracted by using conventional extraction processes. Unlike conventional extraction processes, ISR, a mining technique established in the 1970s and anticipated to become more widely used by the industry in the future, aims to extract uranium with less surface disturbance. ISR extracts uranium by injecting oxygenated water and carbon dioxide or sodium bicarbonate hundreds of feet underground to dissolve uranium located in a subsurface ore body contained within a layer of sedimentary rock. Once dissolved, the water and uranium mixture is pumped to the surface, where the uranium is captured on ion exchange resins, which are taken to a central facility to be processed into yellowcake. (See fig. 2.) ISR operations typically involve several wellfields, which are composed of many injection and production wells, and these wellfields can spread over hundreds or thousands of acres, with monitoring wells at periodic intervals above, below, and surrounding the aquifer to monitor for groundwater contamination outside the aquifer. According to industry and government documents, ISR is gaining favor as the approach to extract uranium because it is a more cost-efficient method for recovering uranium ore that causes less surface disturbance and is safer for worker health.
The primary risk associated with ISR operations is the potential for contamination of nearby groundwater. When ISR operations cease, the groundwater is restored by removing and stabilizing hazardous metals, such as arsenic and selenium, which may have been disturbed by the operations, and all the wells are plugged. Experts currently do not agree on how long it will take to restore a wellfield after production ceases, or if full restoration is achievable. In a 2009 report on groundwater restoration efforts for 22 ISR wellfields on private land in Texas, the U.S. Geological Survey (USGS) found that it was difficult for these operations to restore groundwater to baseline values for heavy metals, such as uranium and selenium. Specifically, USGS reported that measured levels of uranium and selenium increased following restoration efforts in the majority of the wellfields when compared with baseline values. Three federal agencies play key roles in overseeing uranium operations on federal land: BLM, the Forest Service, and DOE. In addition, NRC, EPA, and the states are responsible for some aspects of uranium operations on federal, state, and private land. BLM. BLM manages more than 260 million acres of public lands located primarily in the western half of the United States. Under the General Mining Act of 1872 (Mining Act), an individual or corporation can establish a claim to any hardrock mineral on public land and may remove all hardrock minerals from the site. Under the Federal Land Policy and Management Act of 1976, BLM has developed and revised regulations and issued policies to prevent unnecessary or undue degradation of BLM land from hardrock operations. BLM issued regulations that took effect in 1981 that classified hardrock operations into three categories—casual use, notice-level operations, and plan-level operations—and required reclamation of the sites at the earliest feasible time.
BLM issued revised regulations that took effect in 2001, to strengthen financial assurance requirements and modify the reclamation requirements, among other things. BLM delegates primary responsibility for oversight of hardrock operations to its state and local field offices. The Forest Service. The Forest Service manages approximately 193 million acres of national forests and grasslands throughout the United States. Forest Service regulations, promulgated under its Organic Act of 1897, among other laws, establish rules and procedures intended to ensure that hardrock mining operations minimize adverse environmental impacts on National Forest System surface resources. Since 1974, the Forest Service has required financial assurances for mining operations on National Forest System land. The Forest Service manages hardrock operations through its headquarters, 9 regions, 155 national forests and grasslands, and more than 600 ranger districts. DOE. DOE manages a uranium leasing program on 31 lease tracts, of which 29 are currently leased, under the authority of the Atomic Energy Act of 1954 (as amended). These lease tracts cover about 25,000 acres of land located within the Uravan Mineral Belt in southwestern Colorado. These leases generally cover a period of 10 years, and DOE offers these leases through a competitive public bid solicitation, which specifies the lease terms, including the minimum annual royalties to be collected. DOE awards these leases to the operators who offer to pay the highest royalty rate; these operators are known as lessees. This program began in 1948, when BLM withdrew certain uranium-rich lands from the public domain and reserved them for the use of DOE’s predecessor agency, the Atomic Energy Commission, to secure and develop a supply of domestic uranium for the nation’s defense needs. DOE manages mining activities, including exploration and extraction, associated with uranium and vanadium mining on these lands.
In 2005, DOE considered an expansion of the program in the face of increased demand for uranium, and initiated an environmental assessment of the program under the National Environmental Policy Act of 1969 (NEPA). DOE subsequently issued a finding that the expansion would have no significant impact on the environment. Environmental groups challenged this finding, and in 2011 a federal court prohibited further work on the leases as well as the issuance of new leases pending completion of a new environmental analysis. DOE is in the process of developing a draft Programmatic Environmental Impact Statement that is expected to be released for public comment in late 2012. According to DOE documents, the lease program has approximately 13.5 million pounds of uranium left to mine. NRC. NRC is responsible for overseeing uranium milling operations, which produce yellowcake from uranium ore. ISR is considered a uranium milling operation by NRC because it produces yellowcake. NRC reviews ISR license applications, conducts environmental analyses and inspections, reviews decommissioning plans and activities, and oversees site reclamation and groundwater treatment. NRC can relinquish its regulatory authority to a state if the state and NRC determine that the state has a program that is adequate to protect public health and safety. NRC licenses and oversees ISR operations in Nebraska, New Mexico, and Wyoming, while the other states with major uranium deposits—Colorado, Texas, and Utah— license and oversee operations in their states. EPA and the states. EPA and the states also have a role in overseeing some aspects of uranium operations. Under the Clean Water Act, for example, EPA or the states issue permits to control pollutants that are discharged into the waters of the United States. 
Under the Safe Drinking Water Act, the Underground Injection Control (UIC) program is designed to protect underground sources of drinking water by prohibiting the injection of fluids beneath the surface without a permit. Specifically, ISR operations require a class III UIC permit for wells because they inject fluids to dissolve and extract uranium. Class III wells must be constructed of appropriate materials to handle the fluid being injected and must be monitored during operations. When injection activities are complete, the injection wells must be plugged. In addition, under the Superfund program, established by the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980, EPA, or, in some instances, other federal agencies if the contamination is on their land, has the authority to compel parties responsible for contaminating sites to clean them up or to clean the sites up itself and seek reimbursement. EPA places some of the most contaminated sites on the National Priority List, and resources from a federal trust fund, the Superfund, are available to pay for long-term cleanup at these sites. In addition, under the Uranium Mill Tailings Radiation Control Act, EPA has established standards for control of radioactive contamination to soil, air, and groundwater at certain uranium processing sites. NRC regulations make EPA’s groundwater protection standards generally applicable to uranium milling sites, including ISR operations. States may play additional roles in regulating uranium operations on federal land. In general, states may have their own requirements governing the review of mining plans, environmental performance standards, reclamation, financial assurances, and inspection. For example, many states with uranium deposits require that an operator provide a financial assurance for the full cost of reclamation for a mining site. 
Memorandums of understanding among the federal and state agencies aim to encourage coordination between states and federal agencies in overseeing mining operations. Federal agencies must also comply with NEPA. NEPA requires federal agencies to analyze the likely environmental effects of proposed projects, which may include uranium mines, using an environmental assessment or, if the projects would likely significantly affect the environment, a more detailed environmental impact statement evaluating the proposed project and alternatives. An environmental impact statement results in a record of decision that lays out how anticipated environmental impacts will be mitigated. BLM, the Forest Service, and DOE all oversee uranium exploration and extraction operations on the federal land they manage, but we identified three areas where their processes differ: (1) notification of exploration or extraction operations, (2) oversight of financial assurances, and (3) royalties and rents earned. BLM, the Forest Service, and DOE require uranium operators to provide notification of their intent to undertake either uranium exploration or extraction activities on federal land, but their notification processes differ slightly. Under regulations for proposed activities on BLM land, “casual use”—generally defined as activities ordinarily resulting in no or negligible disturbance to the public lands or resources—is allowed without any notice. For operations that are greater than casual use but that will disturb 5 acres or less of land, operators are required to file a notice with the local BLM field office 15 days before commencing operations. Under the regulations, BLM has 15 days to review the notice for completeness. To be complete, a notice must contain specified operator information, a sufficient description and schedule of the activity, a reclamation plan, and a reclamation cost estimate, among other information. 
Once a financial assurance is in place, the operator may begin operations when BLM confirms that the notice is complete or if it receives no word from BLM within 15 days. According to BLM guidance, the agency does not approve a notice and therefore is not required to perform an environmental review under NEPA for a notice. Operators whose activities will cause more than notice-level surface disturbance must submit a plan of operations to the local BLM field office for review and approval, according to BLM regulations. A plan of operations must include, among other information, specific operator information, a description and schedule of operations, a reclamation plan, a monitoring plan, and a reclamation cost estimate. BLM will review the plan within 30 days and then inform the operator that the plan is complete, that more information is required, or that additional steps must be completed. Upon completion of BLM’s review of the plan, including analysis under NEPA and public comment, BLM will notify the operator that it approves the plan, approves the plan subject to additional changes or conditions, or that it disapproves or withholds approval of the plan. Since 2001, BLM has been working on a draft handbook to guide its state and local field offices when reviewing notices and plans of operations. In the interim, BLM has issued a series of Instruction Memorandums to its field staff as guidance. Like BLM, the Forest Service requires operators to provide notification of uranium operations, but the Forest Service differs in the activities it will allow under a notice of intent and plan of operations.
Under Forest Service regulations, no notice is required for certain activity, such as collection of mineral specimens using hand tools, but a notice of intent is required for operations that might cause significant disturbance of surface resources, and a plan of operations is required for operations that will likely cause such a disturbance, such as use of mechanized equipment like a backhoe. These standards apply regardless of the acreage involved. Forest Service officials told us that district forest rangers take the lead in reviewing and approving notice- and plan-level operations on Forest Service lands. The Forest Service does not perform environmental analysis under NEPA for projects that are not likely to cause significant disturbance, such as under a notice of intent. A NEPA environmental analysis is initiated only for plan-level operations, because they are more likely to cause significant disturbance. DOE’s notification requirements for its lease tracts differ from BLM’s and the Forest Service’s. DOE officials told us that the majority of its requirements for uranium operations are contained in its bid solicitation and in the terms of the lease, which incorporate relevant sections of DOE regulations. DOE notification requirements for exploration and extraction on its lease tracts are not contained in federal regulations. Instead, our review of two DOE lease documents showed that they contained a section specifying that the operator submit an exploration plan before beginning any surface disturbance to explore, test, or prospect for minerals. Furthermore, the leases specify that before developing a mine, a lessee must submit a separate mining plan to DOE for approval. DOE officials told us that because they oversee operations through a lease, they consider their role to be more like that of a landlord than a regulator. 
Under a DOE-BLM memorandum of understanding executed in April 2010, DOE has sole authority over the selection of lessees and the negotiation, issuance, management, and termination of leases. However, BLM has jurisdictional authority over all other surface and subsurface uses of the lease tracts and will review and provide comments on lessee plans as they relate to compliance with BLM regulations. According to DOE, it assesses specific tracts through the use of an environmental checklist; however, a more detailed environmental assessment may also take place. DOE reviews mining plans for consistency with its 2007 programmatic environmental assessment and existing environmental regulations. Table 1 describes some of the differences in notification requirements among BLM, the Forest Service, and DOE. BLM, the Forest Service, and DOE require operators to have financial assurances in place to cover the full estimated cost of reclaiming areas disturbed by operations; however, the agencies differ in who is responsible for initial calculation of these assurances, how frequently they conduct their review, how the review is documented, and how soon reclamation must begin after operations cease. (See table 2 for a summary of financial assurance requirements for the three agencies.) The full estimated cost to reclaim a site is typically defined as the sum sufficient for a third-party contractor to perform all necessary work, including measures to save topsoil for later reuse, control erosion, recontour the area disturbed, and revegetate or reseed the disturbed land. The estimate may also include agency administrative costs. BLM regulations require operators to reclaim land disturbed by uranium operations. To ensure that this work is performed, since 2001, BLM has required the operator to provide a financial assurance. Operators must develop an estimate of the amount of financial assurance needed, which BLM reviews and adjusts as necessary. 
BLM does not have a minimum sum for a financial assurance. BLM uses its Bond Review Report to determine if the estimated costs of reclamation are adequate for ongoing operations, to take action to increase or decrease the financial assurance accordingly, and to certify that financial assurances are adequate to cover estimated reclamation costs. The Bond Review Report aggregates data from BLM’s LR2000 database and includes data on the amount of financial assurances and when they were last reviewed. A BLM instruction memorandum directs local field offices to review financial assurances for adequacy every 2 years for notices and every 3 years for plans of operations. In addition, by December 1 of each year, state BLM offices must review the Bond Review Report to determine if reclamation cost estimates for notices and plans of operations within their states are adequate and were reviewed within appropriate time frames. If the Bond Review Report indicates that a financial assurance is not adequate to cover estimated reclamation costs at a site or has not been reviewed within the appropriate time frame, then the state director must develop a corrective action plan to address the deficiencies. Following the end of operations at a site or when a notice expires, BLM regulations require reclamation of a notice to begin promptly, and reclamation of a plan of operations to begin at the earliest feasible time. Because BLM does not have an official definition for these time frames, BLM officials told us that local field offices have flexibility in determining whether operators are in compliance. Before a financial assurance is released back to the operator, the state agency responsible for mine permitting and the BLM local field office will inspect the site to verify that reclamation is complete. In some cases, reclamation can take several years, and a financial assurance may be reduced periodically before being released fully. 
Because many operations may involve a mix of federal, state, county, and private lands, BLM regulations provide the option of joint bonding with the state. In these cases, the state holds the financial assurance, but it is also redeemable by BLM. The Forest Service also directs operators to provide a financial assurance for the full cost of reclamation. However, in contrast to BLM, the Forest Service relies on its technical staff at the district, forest, or regional level, not the operator, to calculate the estimated reclamation costs. It uses formal agency guidance issued in 2004 to calculate the estimated reclamation costs and proposes the amount of the financial assurance to cover those costs to the operator. The Forest Service does not have a required minimum for financial assurances on its lands. According to Forest Service guidance, an operator’s financial assurances should be reviewed annually for adequacy, but a Forest Service official told us that agency staff do not prepare an annual report documenting these reviews. Forest Service regulations require that site reclamation begin upon exhaustion of the mineral deposit, at the earliest practicable time during operations, or within 1 year of the conclusion of operations, unless a longer time is allowed by the Forest Service. Forest Service and state officials will inspect a site to ensure that reclamation is complete before releasing the financial assurance. A financial assurance may also be released in increments as reclamation progresses. In most cases, the Forest Service holds the financial assurances for mining operations on its land, although a Forest Service official told us that the financial assurance could be jointly held with the state for larger operations. DOE also directs its personnel to ensure that the financial assurance provided by an operator is adequate to cover the estimated cost of reclamation. 
Sample lease agreements that we reviewed set a minimum financial assurance amount and state that DOE personnel will take into account estimated reclamation costs in setting the financial assurance. Similar to the Forest Service, DOE generally calculates this as the estimated amount for a third-party contractor to perform the reclamation work. The current minimum sum for DOE financial assurances is $5,000, according to DOE officials. Generally, DOE will perform a financial assurance assessment whenever the lessee puts forth new plans for a mining operation. The financial assurance review is filed in the case file as part of the approval package. Upon expiration of the lease, or early relinquishment or cancellation of the lease, current DOE lease terms require lessees to return the site to a condition satisfactory to DOE within 180 days, or a term otherwise agreed to by DOE and the lessee. DOE guidance states that DOE will release the financial assurance once the lessee’s reclamation effort is deemed acceptable. Financial assurances are usually held by DOE, except in cases where disturbance to a DOE lease tract is minimal as part of a larger project undertaken on private or state lands. Under existing statutory authorities, BLM and the Forest Service cannot collect rents for the use of federal land or charge royalties on hardrock minerals, including uranium, extracted from that land. BLM does charge claimants an initial $34 location fee, a $15 processing fee, and an annual $140 maintenance fee per claim, and also collects these fees for claims on Forest Service land. In contrast, under the Atomic Energy Act, DOE may collect royalties and rents for uranium extraction operations on its lease tracts. DOE establishes the royalties and terms of payment with the lessee in the lease; typically potential lessees will offer to pay higher production royalties for lease tracts known to contain higher grades of uranium. 
DOE has collected approximately $64 million in royalties since the beginning of the lease program in the 1940s. Specifically: From the first round of leasing, 1949 through 1962, the program generated $5.9 million in royalties to the federal government from 1.2 million pounds of uranium and 6.8 million pounds of vanadium. From the second round of leasing, 1974 through 1994, the program generated $53 million in royalties for the federal government from production of approximately 6.5 million pounds of uranium and 33.4 million pounds of vanadium. From the third round of production, 2003 through 2005, the program generated $4.77 million in royalties for the federal government from production of approximately 390,000 pounds of uranium and 1.4 million pounds of vanadium. In addition, current DOE leases require lessees to pay an annual rent. According to the program’s annual status report, five companies collectively paid an annual rent of $387,040 in fiscal year 2010. Each lessee pays an amount according to the size and value of its lease tract. In lieu of paying this rent, DOE also allows lessees to perform reclamation work on previously abandoned mine sites. In fiscal year 2010, three companies negotiated with DOE to perform reclamation work in lieu of paying rent valued at a total of $101,860. As of January 2012, a total of 221 uranium operations were on federally managed land, but only 7 of these operations were actively extracting uranium and these were all on BLM land. An additional 29 uranium operations were awaiting federal approval. Most of the operations—202— were on BLM land; another 3 were on Forest Service land, and the remaining 16 were on DOE lease tracts. Of the 221 uranium operations on federal land, 202, or 91 percent, were on land managed by BLM, according to our analysis of agency data. 
Of these 202 operations, BLM’s LR2000 database identified 144 as authorized, which means BLM has acknowledged an operator’s notice or has approved its plan of operations and has approved a financial assurance. These 144 operations included 111 notices and 33 plans of operations, covering about 13,400 acres, and were primarily located in Arizona, Colorado, Utah, and Wyoming. The remaining 58 operations on BLM land were expired notices—that is, operations have ceased except for reclamation and the financial assurance is held until BLM determines that reclamation is complete. According to our analysis of LR2000 data, we also identified 28 uranium operations (11 notices and 17 plans of operations) that were awaiting BLM’s authorization. Collectively, these pending operations could involve disturbing up to 24,300 acres of BLM-managed land. We surveyed BLM staff in 25 field offices across eight states for additional information on the status of the uranium operations on BLM-managed land. As shown in table 3, we asked them to provide information on how many operations were in each of eight possible status categories. (For a more detailed description of the status categories that we used in our survey, please see app. I.) Specifically, on the basis of our survey responses, we determined the following: Of the 144 authorized operations, 7 operations are actively extracting uranium—3 mines in Utah, 3 in Wyoming, and 1 in Arizona. In addition, 60 operations are engaged in exploration, 51 operations are engaged in reclamation, and 22 are on standby—that is, they are not actively exploring or extracting uranium. Of the 58 expired operations, 40 are engaged in reclamation, and BLM staff did not know the status for 12 operations, in part because several of these operations had last been inspected in 2002. Most of the remaining 6 are either in standby or closed status.
Of the 28 operations identified in LR2000 as pending, field staff reported a status for 12 operations that is inconsistent with BLM’s definition of “pending.” For example, staff reported 2 pending operations in exploration status, 4 pending operations in reclamation status, 3 pending operations in standby status, and 3 that were closed. Seventeen operations listed as pending in LR2000 were reported by field staff to be in a status that is consistent with the definition of pending, specifically exploration permitting or extraction permitting. In addition, our review of documents for 110 of these operations confirmed that some of the reported status levels in LR2000 were inaccurate. For example, we found one notice that was denied in March 2007 that was still listed as pending in LR2000 as of January 2012. In another instance, a notice was authorized in October 2011 but was still listed in LR2000 as pending. There were other instances where the documentation that staff provided to us, such as inspection reports, had not been entered into LR2000. BLM guidance requires that field staff update LR2000 within 5 working days of a change in the status of the operation. Such delays in entering information affect the ability of LR2000 to serve as an effective management tool to track operations. According to the standards for internal control in the federal government, agencies are to promptly record transactions and events to maintain their relevance to management in controlling operations and making decisions. Of the 7 operations actively extracting uranium on BLM-managed land, 4 are underground mines and 3 are ISR operations. See table 4 for more information on these operations. BLM officials told us the agency did not have data on how much uranium these operations were extracting because it is not authorized to collect this information on uranium or other hardrock minerals.
We identified three uranium operations on land managed by the Forest Service in the Manti La Sal National Forest in Utah. Two of these operations involve uranium exploration, while the third involves the installation of vent holes for the Pandora underground mine, whose entrance is located on BLM-managed land. Collectively, these operations have been authorized to disturb up to 7 acres of land. However, the Forest Service is currently reviewing a plan to authorize the Canyon Mine in the Kaibab National Forest in Arizona. This mine's plan of operations was initially approved in the mid-1980s, and the Forest Service is determining whether additional, more current environmental analysis must be undertaken to authorize this operation. As part of its uranium leasing program, DOE oversees 31 lease tracts, which are in a variety of statuses. Eight tracts have a total of 9 uranium mines on them, all of which are on standby—that is, they are not actively extracting uranium. These lease tracts cover about 6,900 acres, but the operations have disturbed only about 260 acres of land. Seven lease tracts have approved exploration plans, but no exploration work is ongoing. DOE has not approved any exploration or extraction plans for 14 lease tracts. The remaining 2 lease tracts have not been leased out. According to DOE officials, no extraction activity has taken place on its lease tracts since 2006 for two reasons. First, DOE officials reported that there has been limited incentive to explore or extract uranium on their lease tracts because there are no uranium processing mills in Colorado near the lease tracts. Second, in October 2011, a federal district court ordered that no additional surface disturbance could take place on any DOE lease tracts until DOE completes an appropriate environmental analysis pursuant to NEPA. A draft environmental impact statement is due to be released for public comment in late 2012.
As of January 2012, BLM, the Forest Service, and DOE reported $249.1 million in financial assurances, and these assurances appear to be generally adequate to cover the estimated reclamation costs for uranium operations on federal land, according to our analysis of agency data. Agency data indicate that nearly all of these assurances ($247.6 million of the $249.1 million) are for operations that are at least partially on BLM- managed land. Although almost all of these financial assurances were adequate to cover the estimated cost of reclamation, we identified some issues in how BLM oversees these assurances. We also found the value of financial assurances for two ISR operations had increased significantly, but that BLM and NRC did not coordinate their efforts to establish and review financial assurances for these operations. The remaining $1.5 million in financial assurances is for authorized operations on land managed by the Forest Service and for DOE lease tracts. According to our analysis of agency data, these financial assurances are adequate to cover the current estimated cost of reclamation for the operations that the two agencies oversee. As of January 2012, BLM had financial assurances of about $245.5 million for 144 authorized uranium operations, according to our review of BLM’s Bond Review Report, and the financial assurances were adequate for all but 2 of the operations. Specifically, we found 1 operation where BLM field staff reported that the assurance in place was likely inadequate to reclaim an acid pit lake that had formed at an older, inactive open pit uranium mine in Wyoming. The operation has in place a financial assurance in the amount of $126,000, but the operator is in the process of developing a new reclamation estimate for BLM to review. In addition, we found 1 operation for which the financial assurance for a plan of operations in Utah was $16,000 less than the estimated reclamation costs. 
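The adequacy comparison we performed, checking each operation's financial assurance against its estimated reclamation cost, can be illustrated with a short sketch. The dollar figures and case identifiers are illustrative assumptions; only the $16,000 gap for the Utah plan of operations comes from the report.

```python
def assurance_shortfalls(operations):
    """Return (case_id, gap) pairs for operations whose financial
    assurance is less than the estimated reclamation cost."""
    return [
        (op["case_id"], op["reclamation_estimate"] - op["assurance"])
        for op in operations
        if op["assurance"] < op["reclamation_estimate"]
    ]

# Illustrative figures; the Utah entry reflects the $16,000 gap noted above.
ops = [
    {"case_id": "UT-plan", "assurance": 84_000, "reclamation_estimate": 100_000},
    {"case_id": "CO-ok", "assurance": 250_000, "reclamation_estimate": 200_000},
]
print(assurance_shortfalls(ops))  # -> [('UT-plan', 16000)]
```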
In general, we found that most of the financial assurances for operations on BLM land are for less than $100,000. During our review of BLM's data, we identified two issues related to BLM's Bond Review Report for overseeing financial assurance of uranium operations. First, we found inaccuracies in the information included in the report. Specifically, the Bond Review Report indicated that reviews of the financial assurances for 5 notice-level operations had not taken place in over 36 months, which is a year past the frequency that BLM guidance requires. According to BLM officials, these 5 operations had been reviewed within the correct time frames, but staff had entered an incorrect action code into LR2000. We also found other instances during the course of our review where BLM staff had entered incorrect action codes into this system. LR2000 accepts hundreds of action codes, yet the agency does not have comprehensive guidance on all the action codes that can be used in LR2000. Second, the Bond Review Report does not include financial assurances that are in place for expired operations. According to our review of agency data, there are 58 expired uranium operations on BLM land. One reason BLM officials offered for why the Bond Review Report does not include information on expired operations is that the financial assurances for these operations are generally smaller. However, the information we reviewed shows that 43 expired uranium operations had about $2 million in financial assurances and that some of these expired operations had assurances that were well above $100,000. In addition, we found the remaining 15 expired operations did not have any financial assurances in place. According to BLM officials, because these 15 operations were established prior to BLM's 2001 regulations that required financial assurances for all mining operations, it is reasonable that these operations do not have financial assurances.
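A review-frequency check of the kind described above could be automated along these lines. This sketch assumes a 24-month required interval (inferred from the report's description of 36 months as a year overdue); the field names and case identifiers are hypothetical.

```python
from datetime import date

# Assumed from the report: 36 months is "a year past" the required frequency.
REQUIRED_INTERVAL_MONTHS = 24

def months_between(earlier, later):
    """Whole calendar months between two dates."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def overdue_reviews(assurances, as_of):
    """Return case IDs whose last bond review exceeds the required interval."""
    return [
        a["case_id"]
        for a in assurances
        if months_between(a["last_review"], as_of) > REQUIRED_INTERVAL_MONTHS
    ]

bonds = [
    {"case_id": "N-101", "last_review": date(2008, 12, 1)},  # 37 months before Jan 2012
    {"case_id": "N-102", "last_review": date(2011, 6, 1)},   # 7 months before Jan 2012
]
print(overdue_reviews(bonds, date(2012, 1, 1)))  # -> ['N-101']
```

Because the check relies on action-code dates recorded in the database, its usefulness depends on those codes being entered correctly, which is the first issue noted above.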
Nonetheless, these 15 operations do need to be reclaimed and, according to BLM staff, they may not be receiving the required oversight: several of these operations were last inspected about a decade ago, which suggests that oversight of expired operations could be improved. We found that two ISR operations—the Smith Ranch and Highland operations in Wyoming—account for $213 million in financial assurances, or 86 percent of the total financial assurances held for uranium operations on land managed by BLM. According to BLM officials, a portion of the financial assurances for these two operations also covers activities on land that is not managed by BLM, such as state or private land. The required financial assurances for the Smith Ranch and Highland ISR operations increased from June 2011 through December 2011—from about $80 million to about $120 million for Smith Ranch, and from about $80 million to about $93 million for Highland, although the size and disturbance of the operations at these two sites have not significantly changed. According to BLM, NRC, and Wyoming state officials, this increase is due to a variety of factors, including new estimates of the additional work necessary to restore the groundwater at these sites. For example, the estimated number of cycles during which this groundwater is extracted and treated before being reinjected—known as a pore volume—has been increased from six to nine. The cost to restore groundwater at these sites has also increased because the operator had previously removed equipment necessary to restore the groundwater so that it could be used in other operating wellfields, and this equipment must now be either returned to these sites or replaced with other groundwater restoration equipment, according to NRC officials.
In March 2008, the state of Wyoming issued a notice of violation to the operator of Smith Ranch and Highland that stated that the operator was not adhering to the schedule for restoring groundwater and that its estimates of the number of pore volumes and resources needed to restore the groundwater were too low. As a result, the state concluded that the total financial assurances in place at the time for the Smith Ranch and Highland operations—$38.4 million—should be increased immediately to $80 million to protect the public and that a more realistic estimate of the cost to reclaim the sites would be close to a total of $150 million. According to Wyoming state officials we spoke with, this notice of violation was part of the process of requiring greater financial assurances for the Smith Ranch and Highland operations that has resulted in these operations now having a combined $212.7 million in financial assurances. In examining the efforts to increase financial assurances for these two sites, we found that BLM and NRC did not coordinate their efforts with each other. According to Wyoming state officials, BLM field office staff generally provide comments and concurrence on the proposed financial assurances that operators submit annually. In contrast, NRC generally conducts its own independent review of the financial assurances it believes should be in place. In 2009, NRC and BLM entered into a memorandum of understanding intended to improve interagency cooperation in environmental assessments; facilitate the sharing of special expertise and information; and coordinate the preparation of studies, reports, and documents. However, this memorandum does not cover interagency coordination of the review of financial assurances.
Even though the financial assurances for the Smith Ranch and the Highland operations have increased significantly, the lack of federal coordination when establishing these financial assurances raises concerns about the adequacy of these financial assurances and the financial assurances associated with any future ISR operations that may be authorized. (For more information on active and pending ISR operations, see app. II.) According to our review, it appears that both BLM and NRC have expertise in different areas of the work needed to reclaim an ISR operation, and better coordination among these agencies would help ensure that all necessary factors have been considered. Specifically, BLM primarily has expertise in estimating the cost of reclaiming surface disturbances at a mining site, and NRC primarily has expertise in estimating the cost of restoring groundwater contaminated by radioactive material. NRC officials reported that some of this expertise was developed through overseeing reclamation activities at uranium processing mills where groundwater must be restored, buildings demolished, and monitoring wells plugged. However, NRC officials acknowledged that the scale of disturbance at an ISR site is much greater than at a mill, because of the thousands of wells that must be plugged and the surrounding surface reclaimed. In addition, restoring the underground water at these mining sites is a complex process because it must be restored to the background concentration, a maximum concentration that incorporates standards set by EPA, or alternate concentration limits as approved by NRC. According to Wyoming state officials we spoke with, enhanced coordination between the federal agencies and also with the state could help to leverage each agency’s particular expertise in reviewing financial assurances for ISR sites. 
These state officials told us that this coordination is even more important because ISR operators have had little experience to date with restoring groundwater at ISR wellfields in Wyoming. Specifically, at the Smith Ranch and Highland ISR sites, the state and NRC have approved groundwater restoration efforts at only 1 of the 19 wellfields, according to Wyoming state and NRC officials. The Forest Service and DOE have financial assurances for uranium operations that are adequate to cover the current estimated cost of reclamation for the sites they oversee, according to our analysis of agency data. Specifically, the Forest Service reported having about $42,000 in financial assurances for the three operations on its land; one of these consists of installing vent holes for a mine on adjacent BLM land, and the other two are for operations currently conducting exploration. The Forest Service handbook requires that all active financial assurances be reviewed annually, and our review found that all had been reviewed within appropriate time frames. DOE reported about $1.5 million in financial assurances for its 29 tracts that have been leased out, with about $1.2 million of this total for a single lease tract with an inactive open pit uranium mine. Our review of DOE data indicates that these assurances were adequate as of the last time they had been reviewed—from 1996 through 2005 for 9 lease tracts and in 2008 or later for the remaining 22 tracts. DOE officials told us they had not reviewed some of these financial assurances more recently because there has been little new activity on the lease tracts in recent years; they generally review financial assurances when a lessee makes a change to an exploration or mining plan on a lease tract.
Federal agencies do not have reliable data on the number and location of abandoned uranium mine sites on federal lands and the potential cleanup costs associated with these sites, according to our review of agencies’ databases and discussions with agency staff. We found that agency databases generally lack complete data and a common definition of an abandoned mine site, and contain information that has not been verified through field inspections. In addition, federal agencies do not have estimates of the potential total cleanup cost for abandoned uranium mine sites on the land they manage. According to agency officials, the cost to clean up these sites varies according to site-specific conditions, including the amount and type of work required at each site, and the total number of sites needing cleanup. There are likely thousands of abandoned uranium mine sites on federal land where either exploration or extraction may have taken place, but the available federal data on these sites are generally unreliable. In particular, we found the following limitations with these data. Agencies’ databases are incomplete. Three agency databases only partially track the commodity extracted, and one of them omitted sites with incorrect geographic coordinates. For example, according to BLM’s database, there are an estimated 1,189 abandoned uranium mine sites on BLM-managed land. However, these data are based primarily on information from three states (Colorado, Utah, and Wyoming) because the BLM state offices in these states require their local field offices to enter the commodity that had been previously extracted from these abandoned mines. Similarly, in the National Park Service’s abandoned mine database, the commodity field is optional for agency staff to enter. 
On the other hand, EPA’s database, which estimates that there are 8,124 abandoned uranium mine sites on federal land, does not include some sites because they do not have specific geographic coordinates, according to agency officials. In addition, some of the databases have not been updated in years and do not track the extent to which extraction took place at each site, which would help indicate the type of cleanup work that might be required. For example, the Forest Service database lists an estimated 1,097 abandoned uranium mine sites; however, the status of many of these sites has not been updated since they were first entered in the database in the 1980s. In addition, the Forest Service and EPA databases do not track which abandoned mine sites have already been cleaned up. As a result, it is not possible to determine from the agency data how many sites remain to be cleaned up. Agencies do not have a consistent definition of an abandoned mine site. We found agencies do not share a consistent definition of an abandoned mine site, and even within an agency the definition may not be consistently applied by various field offices or staff. These inconsistencies pose a problem when trying to combine multiple databases or to compare data across multiple agencies. For example, because of a lack of a consistent site definition, EPA officials told us that the agency faced a challenge in trying to combine data from multiple sources in order to provide more complete information on abandoned uranium mine sites. In addition, even within a single agency, staff may use different definitions of an abandoned mine site when entering data into a database. For example, a BLM official told us that field staff may enter each abandoned mine feature, such as a waste rock pile or a mine opening, as a separate site, instead of grouping these features into one entry. 
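The definitional problem described above, whether each feature is its own record or part of one site, can be illustrated with a simple grouping sketch. The 200-meter threshold and the coordinates are purely illustrative assumptions, not an agency standard.

```python
def group_features(features, threshold_m=200.0):
    """Greedily merge mine features (x, y coordinates in meters) into
    'sites': a feature joins the first group containing any feature
    within threshold_m; otherwise it starts a new site."""
    sites = []
    for f in features:
        for site in sites:
            if any(((f[0] - g[0]) ** 2 + (f[1] - g[1]) ** 2) ** 0.5 <= threshold_m
                   for g in site):
                site.append(f)
                break
        else:
            sites.append([f])
    return sites

# A waste rock pile and a nearby mine opening collapse into one site;
# the distant feature remains a separate site.
features = [(0, 0), (50, 40), (1000, 1000)]
print(len(group_features(features)))  # -> 2
```

Without an agreed rule of this kind, the same three features could be counted as one, two, or three "sites," which is one source of the wide variation in agency estimates.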
According to a 2007 EPA report on its efforts to develop a database on abandoned uranium mine sites, the lack of a consistent definition leads to problems with determining how many sites exist, since even a single agency's database may contain mines meeting a variety of definitions. In March 2008, we highlighted the lack of a consistent definition for abandoned hardrock mine sites and the way in which this inconsistency contributes to a wide variation in estimates of the number of abandoned mines. At that time, we developed a consistent definition of an abandoned hardrock mine site and used it to develop a more robust estimate of abandoned mines by applying it across multiple databases. According to EPA officials we interviewed, federal agencies involved with abandoned mines have used a regular interagency forum, called the Federal Mining Dialogue, to discuss the lack of a common definition of a mine site but have not yet reached agreement on how to address this issue. Agency databases contain sites that have not been verified through field inspections. According to agency officials, field inspection is the best way to determine an abandoned mine's location and features, such as whether it poses physical safety or environmental hazards; to discover new abandoned mine sites; and to determine what cleanup may be required at an abandoned mine site. However, field inspections also require more resources because agency staff must try to cover large areas of land, sometimes in risky or inaccessible terrain, such as mountainous or rocky areas. Currently, the National Park Service and BLM are in the process of verifying the condition of abandoned mine sites on their land. According to National Park Service officials, the agency received $3.3 million over 3 years to verify how many abandoned mine sites, including uranium mines, it has on the land it manages, and to verify cleanup needs at these sites, a process the agency hopes to complete by September 30, 2012.
On the basis of preliminary results from this field inspection, National Park Service officials told us that of the 46 abandoned uranium mine sites on their land, 25 remain to be cleaned up. Since 2009, some inventory efforts of abandoned mines on BLM land have been under way in Arizona, New Mexico, and Wyoming, but not all BLM offices in these states require their staff to track the commodity that was extracted at abandoned mine sites. Table 5 and appendix III provide more specific information on the limitations of each agency's database on abandoned uranium mines. BLM, EPA, and Forest Service officials told us that their agencies do not have an accurate count of abandoned mine sites and their locations because no laws or regulations require the agencies to track abandoned mines and because the agencies do not have sufficient resources to collect this information. Specifically, officials from BLM and EPA explained that any tracking of sites is done voluntarily to help with their mission. In addition, BLM and Forest Service officials told us that they have not had sufficient funds to conduct field inspection verification of all their known abandoned mine sites on the lands they manage and that to do so would be costly, requiring additional financial and staff resources. At current funding levels, according to a May 2011 draft feasibility study, it will take BLM 13 years and $39 million to finish inspecting all known abandoned mine sites on its land, including the ongoing inventory work in Arizona, New Mexico, and Wyoming. In addition to not knowing how many abandoned uranium mines are on federal land, BLM, the Forest Service, EPA, and the National Park Service do not have information on the total cost of cleaning up abandoned uranium mines. Officials noted that cleanup costs are determined not only by the total number of mines that need cleanup, but also by site-specific conditions, including the amount and type of work required at each site.
Agency officials explained that each abandoned mine site has distinctive characteristics and requires a unique cleanup plan based on, among other things, its size, accessibility, the need for heavy equipment, and the level of contamination. Agency officials we spoke with generally agreed that cleanup costs at individual sites could range from several thousand dollars to hundreds of millions of dollars. These officials also agreed that most of the work is likely to fall within one of the following three cleanup categories: addressing safety hazards, conducting surface reclamation, and conducting environmental remediation. However, officials cautioned that sometimes cleanup at a site requires work across two or all of these categories. Figure 3 illustrates some of the activities that can take place in these cleanup categories. The agencies also provided us with examples of costs that have been incurred at 18 abandoned uranium mine sites. Table 6 provides a range of costs associated with cleanup efforts depending on the type of work conducted at each site. It is important to note that these cost ranges are not exhaustive and that some cleanup costs for other abandoned uranium mine sites could fall outside these cost ranges. Some examples of the factors that can contribute to the variability in the costs for cleanup at abandoned uranium mine sites include the following. Number of safety hazards that need to be addressed: BLM and National Park Service officials told us that most of the work they have conducted to date on abandoned uranium mines is designed to mitigate safety hazards. Costs for this type of work have ranged from $1,800 to close 2 mine openings in Arches National Park in Utah to $33,000 to backfill 11 mine openings with waste rock at the Canyonlands National Park in Utah. 
A BLM official cautioned that future costs to address sites with physical safety hazards can be higher because, given limited available funding, BLM has generally addressed the safety hazards that are the least costly to clean up. Extent to which surface reclamation needs to be conducted: The primary purpose of activities under this category is to return the land to as near its previous appearance as possible through recontouring and revegetating disturbed land. According to DOE documents, the costs to reclaim the surface ranged from about $2,500 for closing 2 mine openings, recontouring 70 cubic yards of dirt, and revegetating 1 acre of disturbed land at the Nine Mile Hill Mines on BLM-managed land in Colorado to nearly $98,000 for more extensive reclamation work at the Hawk Mine Complex on lands managed by BLM in Colorado. The work at this site primarily focused on the installation of multiple gates over mine openings, backfilling 500 cubic yards of surface pits with waste materials, recontouring 6,800 cubic yards of waste rock materials from 8 waste rock piles, and revegetating 4 acres of disturbed area. Extent to which environmental remediation must be undertaken: Most of the activities in this category are designed to mitigate significant environmental hazards. Officials from BLM, the Forest Service, the National Park Service, DOE, and EPA told us that few abandoned uranium mine sites have undergone remediation, but they cited two instances in which this work has occurred or is ongoing; in both cases, the work proved costly, and the costs varied significantly. For example, according to our review of agency documents, the Pryor Mountain Mine, located on land managed by the Forest Service in Montana, cost about $200,000 to clean up and involved environmental remediation to remove contaminated soil and waste rock that posed a human health risk.
The site, located close to an Indian reservation and near hiking trails and campsites, initially presented levels of radioactive contamination that were up to 369 times higher than normal background levels. At another site—the 320-acre open pit Midnite Mine site in Washington state—costs are estimated to be as high as $193 million by the time remediation is complete, according to EPA documents. Most of this cost is for treating acid rock drainage in two large open pits that contain millions of gallons of water and then filling these pits with 33 million tons of waste materials. Some mine sites that require environmental remediation also require long-term—defined as longer than 5 years—maintenance and monitoring, especially if contaminated water requires treatment. For example, one of the largest costs (approximately $32 million) associated with environmental remediation at the Midnite Mine site is for monitoring and treating surface and underground water. EPA estimates that this water will need to be treated in perpetuity. Additional information on these and other abandoned uranium mine sites is presented in appendix IV. Having adequate financial assurances to pay for reclamation costs for federal land disturbed by uranium operations is critical to ensuring that the land is returned to its original state if operators fail to complete the reclamation as required. BLM, the Forest Service, DOE, and NRC play key roles in establishing and reviewing these financial assurances for uranium operations on federal land. We found that nearly all of the uranium operations on federal land had adequate financial assurances, according to our analysis of agency data. However, we found some limitations in agencies' oversight of uranium operations' financial assurances, which raise some concerns about their adequacy.
In particular, ISR operations account for a large proportion of financial assurances in place for uranium operations on federal land and have recently been increasing for some operations, yet there is little coordination between BLM and NRC when establishing and reviewing these assurances. This lack of coordination raises concerns about the adequacy of the financial assurances in place for existing ISR operations and for those ISR operations that are awaiting approval. Both BLM and NRC have specific expertise in assessing certain aspects of the reclamation activities that are required at ISR sites, but have no process in place to share this information and leverage their expertise. Without such coordination, the agencies cannot be confident that the assurances they establish for ISR operations will be adequate to cover the costs of reclamation. BLM relies on its LR2000 database and Bond Review Report to provide information that supports its oversight of financial assurances. However, data entered into LR2000 are sometimes inaccurate and not always updated in a timely manner in keeping with BLM’s requirements. Moreover, the Bond Review Report does not examine expired operations, yet we found that some of these operations have large financial assurances in place or have not been inspected in 10 years. Without complete, timely, and accurate information in LR2000 and the Bond Review Report, the usefulness of these management tools to BLM may be diminished and may limit effective oversight of uranium operations. Finally, identifying the number, location, and cost of cleanup of abandoned mines is a challenging task for federal agencies. However, this process has been made more difficult because the agencies have not been able to reach agreement on a consistent definition for what constitutes an abandoned mine site. 
Without a consistent definition, data collection efforts are hampered and agency databases cannot be combined to provide a more complete picture of abandoned mines on federal land. To help better ensure that financial assurances are adequate for uranium mining operations on federal land, we are recommending the following three actions.

• The Secretary of the Interior and the Chairman of the Nuclear Regulatory Commission should enhance their coordination on financial assurances for ISR operations through the development of a memorandum of understanding that defines roles and promotes information sharing.

• The Secretary of the Interior should direct the Director of the Bureau of Land Management to take the following actions to improve oversight of financial assurances:
  • include information on expired mine operations in the annual Bond Review Report process, and
  • develop guidance to ensure accurate and prompt data entry in LR2000.

To enhance data collection efforts on abandoned mines, we recommend that the Secretaries of the Interior and of Agriculture and the Administrator of the Environmental Protection Agency work to develop a consistent definition of abandoned mine sites for use in data-gathering efforts. We provided a draft of this report to the Department of Agriculture, the Department of Energy, the Department of the Interior, the Environmental Protection Agency, and the Nuclear Regulatory Commission for review and comment. All of these agencies concurred with our recommendations. In particular, NRC recognized that development of a memorandum of understanding on financial assurance reviews could be beneficial to NRC and BLM, and plans to pursue such an agreement with BLM. NRC noted that development of a memorandum of understanding that adequately addresses both agencies' regulatory oversight may be challenging and stated that the agency may pursue other, less formal methods of coordination with BLM if a memorandum of understanding cannot be developed.
In addition, DOE stated that a national database for uranium mining activities would be useful, and the agency agreed there is a need for federal agencies with uranium mines on their land to have common definitions and to use these definitions when gathering information that could be used to determine reclamation needs. Similarly, EPA agreed that a consistent definition of abandoned mine sites would be useful, and will work with other relevant agencies to develop a definition, if possible. Furthermore, EPA commented that our report lacked specificity with regard to our use of the terms “reclamation” and “remediation.” We have modified our report to include more specific definitions of each of these terms and clarified what each of these terms means in the context of the report. EPA and the Department of the Interior also provided us with technical comments, which we have incorporated as appropriate. See appendixes V, VI, VII, VIII, and IX for agency comment letters from the Department of Agriculture, DOE, the Department of the Interior, EPA, and NRC, respectively. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Energy, the Secretary of the Interior, the Environmental Protection Agency Administrator, the Chairman of the Nuclear Regulatory Commission, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact us at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X. 
Our objectives were to (1) compare Bureau of Land Management (BLM), Forest Service, and Department of Energy (DOE) oversight of uranium exploration and extraction operations on federal land; (2) determine the number and status of uranium operations on federal land; (3) examine the coverage and amounts of financial assurances in place for reclaiming current uranium operations on federal land; and (4) examine what is known about the number and location of abandoned uranium mines on federal land and their potential cleanup costs. To compare BLM, Forest Service, and DOE oversight of uranium exploration and extraction on federal land, we reviewed federal laws, regulations, and guidance, as well as prior GAO reports and other studies on hardrock mining operations. We also spoke with BLM, Forest Service, and DOE officials in headquarters and field offices, and with BLM state offices in Arizona, Colorado, New Mexico, Utah, and Wyoming—five states with large uranium deposits. We also reviewed DOE lease contracts. To understand the interagency relationship among BLM, the Forest Service, and DOE, as well as these agencies’ relationship with the states, we reviewed memorandums of understanding among these parties. We also spoke with state representatives of mining and environmental agencies in Arizona, Colorado, New Mexico, Texas, Utah, and Wyoming to discuss how they coordinate with federal agencies while reviewing uranium operations and their financial assurances. We discussed relevant issues for hardrock operations and financial assurances with representatives from the mining industry, state geological services, and an environmental group. We also examined relevant regulations from EPA and NRC and spoke with officials from these agencies. To determine the number and status of uranium operations on federal land, we gathered information from BLM, the Forest Service, and DOE.
To identify uranium operations on BLM land, we requested that BLM provide an extract from its LR2000 database for operations—both notices and plans of operations—that were in an authorized, expired, or pending status and listed “uranium” or “uranium and other minerals” as the commodity that was being targeted. To determine the reliability of these data, we spoke with a BLM information technology official responsible for administering the system; BLM state and field office staff who enter information into the system; and BLM managers at the agency’s Washington, D.C., headquarters office who use information from the system. We also reviewed database documentation, and we determined the LR2000 data were sufficiently reliable for our purposes. We used these data to administer a web-based survey to BLM field staff responsible for overseeing uranium operations in 25 field offices across eight states—Arizona, Colorado, Nevada, New Mexico, Oregon, South Dakota, Utah, and Wyoming. We asked these staff to provide the status of these operations based on the most recent information available using the following eight status levels and definitions, which we developed in consultation with BLM staff:

- exploration permitting (e.g., operator is in the process of obtaining permits to conduct exploration at the site),
- exploration (e.g., operator is preparing the site for exploration or conducting exploration work at the site; concurrent reclamation may also be taking place),
- extraction permitting (e.g., operator is in the process of obtaining permits to extract uranium at the site),
- extraction (e.g., operator is preparing the site for extraction or actively extracting uranium at the site; concurrent reclamation may also be taking place),
- standby (e.g., operator is authorized to explore or extract, but is not doing so),
- reclamation (e.g., reclamation is taking place at the site following the end of exploration or extraction activities),
- closed (e.g., reclamation is complete and financial assurance has been released), and
- other.

As part of this survey, we asked BLM staff to provide copies of the documentation they consulted when determining the status of the operation, such as inspection reports or correspondence with operators, and we used these documents to verify the reported status. For field offices overseeing a large number of operations, we requested that they provide documents for 10 operations they oversaw, which we selected randomly. We also asked BLM staff if there had been any uranium extracted at the operation in the last 5 years. Prior to sending out this survey, we pretested it with officials from three BLM field offices and revised some of the survey questions based on their input. We received responses to our survey from all 25 field offices, and we sent follow-up questions based on their survey responses to clarify certain responses or to ask for additional information. Because the Forest Service and DOE oversee fewer uranium operations than BLM, we did not use our survey to collect information on the status of these operations; instead, we gathered this information through interviews with agency officials and agency documents. The Forest Service compiled information on its uranium operations by contacting Forest Service officials who were located in National Forests where uranium operations are located. The Forest Service also provided documentation on these operations that we used to verify the information it provided. DOE provided information on its lease tracts that it maintains as part of its program. We used DOE’s annual status report on its lease tracts, along with conversations with DOE officials, to help verify the reported status levels.
For both the Forest Service and DOE, we used interviews with officials along with relevant documentation to determine the reliability of these data, and we determined they were sufficiently reliable for our purposes. To examine the financial assurances in place for uranium mining on BLM land, we reviewed information in BLM’s Bond Review Report, which aggregates data on financial assurances from BLM’s LR2000 database, including the required amount of the financial assurance for an operation, the amount of the financial assurance in place, and when it was last reviewed. As part of this analysis, we examined whether the financial assurances in place were adequate to cover the estimated costs of reclamation; we did not determine whether the estimated costs for reclamation were sound because that was outside the scope of our review. Since the Bond Review Report relies on LR2000 data, we used our data reliability assessment of LR2000 detailed above to help determine whether the data in the report were reliable. In addition, we obtained a copy of the specifications that were used to create the Bond Review Report and examined the report to identify outliers in the data or incomplete fields and used BLM documents or discussions with BLM staff to clarify any issues we identified. We determined that BLM’s financial assurance data in its Bond Review Report were sufficiently reliable for the purposes of our review. Because BLM’s Bond Review Report contains only information on authorized operations, we gathered information on financial assurances from LR2000 for the expired operations. To examine the financial assurances in place for uranium operations on Forest Service land and DOE’s lease tracts, we examined data provided by these agencies. Specifically, we compared the financial assurance amounts that were required with the amounts that were in place. 
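The adequacy comparison described here reduces, for each operation, to checking the assurance amount in place against the required amount and flagging any shortfall. A minimal sketch of that check, using invented operation IDs and dollar amounts rather than actual LR2000 or agency data:

```python
# Hypothetical records: (operation ID, required assurance $, assurance in place $).
# Values are illustrative only, not actual Bond Review Report or LR2000 data.
operations = [
    ("AZ-001", 250_000, 250_000),
    ("WY-014", 1_200_000, 900_000),  # assurance does not cover required amount
    ("CO-007", 75_000, 80_000),
]

def find_shortfalls(records):
    """Return (ID, shortfall amount) for operations whose financial
    assurance in place does not cover the required amount."""
    return [(op_id, required - in_place)
            for op_id, required, in_place in records
            if in_place < required]

print(find_shortfalls(operations))  # [('WY-014', 300000)]
```

This checks only whether the assurance covers the required amount; as the text notes, whether the underlying reclamation cost estimate is itself sound is a separate question outside that comparison.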
As we did for our analysis of BLM’s data, we examined whether the financial assurances in place were adequate to cover the estimated costs of reclamation; we did not determine whether the estimated costs for reclamation were sound because that was outside the scope of our review. To determine the reliability of the data from the Forest Service and DOE, we interviewed agency staff who gathered these data, and we used supporting documentation to corroborate the information that was reported. We determined that these data were sufficiently reliable for our purposes. To learn about the number and location of abandoned uranium mines on federal land, we reviewed data from BLM, the Forest Service, EPA, the National Park Service, and DOE, which are all involved in efforts to track and clean up abandoned uranium mines. We received and analyzed data from databases these agencies maintain on abandoned uranium mines. We also reviewed pertinent documents that accompanied some of these databases and other agency documentation, such as studies or reports that describe the status of abandoned uranium mines on lands managed or leased by these agencies. We conducted two sets of semistructured interviews with officials in charge of abandoned mine programs at all of these agencies—before and after we reviewed the data and documentation—to gather more information about these databases, including identifying limitations and determining the reliability of the data in the databases. We also conducted interviews with officials from the U.S. Geological Survey, which maintains the data used by the Forest Service. We also interviewed staff from BLM field offices and state agencies in the states where most uranium deposits are located to get more information on the number and location of abandoned uranium mines and to hear their perspectives on the federal databases. 
As a result of our efforts, we determined that these data were not sufficiently reliable to establish a definite number of abandoned uranium mines. However, because these were the only federal data available, we have used them in the report only to discuss in general terms the number of potential abandoned uranium mine sites that may exist on federal lands, and we have described the limitations associated with these data. To describe the potential cleanup costs posed by abandoned uranium mines, we reviewed relevant literature and conducted semistructured interviews with officials from the federal agencies in charge of abandoned mines. On the basis of this information, we identified three distinct cleanup categories that we and agency officials believe are most representative of the types of actions that take place at abandoned uranium mine sites. In developing these categories, we consulted with officials from all five agencies in charge of cleaning up abandoned uranium mine sites, and they agreed with our approach and our categories. These categories are not mutually exclusive, and cleanup work at a site could fall within multiple categories, especially at larger or more contaminated sites. The three cleanup categories are actions taken to

- address safety hazards, meaning that most cleanup activities at the site are intended to mitigate safety hazards;
- conduct surface reclamation, meaning that most cleanup activities at the site are intended to return the land to its appearance before mining activities took place; and
- conduct environmental remediation, meaning that most cleanup activities at the site are intended to remove land and water contamination that poses a threat to the environment and human health.

These activities can also include long-term—defined as longer than 5 years—maintenance and monitoring.
We also asked officials from these five agencies to provide us with examples that are illustrative of the range of costs associated with performing such cleanup work. We asked for examples of sites that have already been cleaned up and have definitive costs, or information on sites that have detailed cost estimates. We received 18 examples from the agencies, which are divided equally across the three cleanup categories. Fourteen examples are for past work and contain actual cleanup costs; four examples, all in the environmental remediation category, are for work that is still to be completed and are based on estimated costs. To allow for comparison, we reported all costs in 2011 dollars. For each example, we asked for and received documentation that describes in detail the work performed at each site. For the sites that have not been cleaned up yet, we received pertinent documentation, such as records of decision or consent decrees. To get a better understanding of uranium mining in general, we conducted site visits to Colorado and Wyoming to examine uranium operations. We visited these states because they have a variety of uranium operations involving several federal agencies. In Colorado, we spoke with BLM, DOE, and state officials involved in overseeing uranium operations. We also spoke with representatives of a uranium company and toured several uranium operations, including underground mines on standby on BLM-managed land, as well as a few abandoned mine sites. In addition, we toured two DOE lease tracts and examined reclamation work that had been performed on these tracts. In Wyoming, we met with BLM and state officials involved in overseeing uranium operations and spoke with representatives of some uranium companies. In addition, we toured an in situ recovery operation and examined the various components of this operation.
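Reporting costs in 2011 dollars means scaling each nominal cost by the ratio of a price index in 2011 to the index in the year the cost was incurred. A minimal sketch of that adjustment follows; the index values and the example cost are placeholders, since the report does not specify which deflator series GAO used:

```python
# Hypothetical price index values (2011 = 100.0); these are illustrative
# placeholders, not the actual deflator series used in the report.
PRICE_INDEX = {2005: 88.7, 2008: 96.1, 2011: 100.0}

def to_2011_dollars(cost, year):
    """Scale a nominal cost from `year` into constant 2011 dollars."""
    return cost * PRICE_INDEX[2011] / PRICE_INDEX[year]

# A $500,000 cleanup performed in 2005, expressed in 2011 dollars:
print(round(to_2011_dollars(500_000, 2005)))  # 563698
```

The same scaling applies whether the input is an actual historical cost or an estimate, which is why all 18 examples can be compared on a common basis.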
We conducted this performance audit from June 2011 through May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This appendix provides information on in situ recovery (ISR) operations on land managed by BLM. Some of these operations are not entirely on federal land, but rather include state and private land. Forest Service and Department of Energy officials reported that they do not have any ISR operations on land they manage.

This appendix provides information on federal databases that contain information on abandoned uranium mines and the limitations that we identified for each database.

This appendix provides information on cleanup activities at 18 abandoned uranium mine sites. Fourteen sites have been cleaned up and have actual cleanup costs, while four examples provided by agencies are based on estimates and not on actual cleanup costs.

In addition to the individual named above, Andrea Brown and Elizabeth Erdmann (Assistant Directors), Antoinette Capaccio, Julia Coulter, Maria Gaona, Scott Heacock, Cristian Ion, Rebecca Shea, Carol Herrnstadt Shulman, and Jena Sinkfield made key contributions to this report.
From 2005 through 2007, uranium prices increased from about $20 a pound to over $140 a pound, leading to renewed interest in uranium mining on federal land. This interest has raised concerns about the potential impacts that more uranium operations could have on the environment. GAO was asked to (1) compare key agencies’ oversight of uranium exploration and extraction operations on federal land, (2) determine the number and status of uranium operations on federal land, (3) identify the coverage and amounts of financial assurances for reclaiming current uranium operations on federal land, and (4) examine what is known about the number and location of abandoned uranium mine sites on federal land and their potential cleanup costs. GAO reviewed agency reports and regulations, surveyed relevant agency field staff on the status of these operations, and examined federal data on uranium operations, financial assurances, and abandoned uranium mine sites. The Bureau of Land Management (BLM), the Forest Service, and the Department of Energy (DOE) are the key agencies that oversee uranium exploration and extraction on federal land, but GAO identified three areas where their oversight processes differ. First, these agencies have different processes for notification of uranium exploration or extraction activities on federal land. Second, the agencies require operators to have in place financial assurances to cover the full estimated cost of reclaiming a uranium operation, but they differ in who estimates the value of the financial assurance and the frequency of their reviews of the assurances. Third, under existing authorities, DOE can collect royalties or rents for uranium extraction, but BLM and the Forest Service cannot. DOE has collected about $64 million in rents and royalties from its leasing program since the 1940s. 
As of January 2012, a total of 221 uranium operations were on federally managed land, but only 7 were actively extracting uranium and all of these were on BLM land. An additional 29 uranium operations were awaiting federal approval. Of the 202 operations on BLM land, the majority were engaged in either reclamation or exploration activities, according to BLM field officials. In addition, 3 uranium operations were on Forest Service land, and 16 operations were on lease tracts that DOE manages, none of which were actively extracting uranium. As of January 2012, BLM, the Forest Service, and DOE reported having $249.1 million in financial assurances, and these assurances were generally adequate to cover the estimated reclamation costs for uranium operations on federal land. Nearly all of these assurances ($247.6 million) were for authorized uranium operations on BLM-managed land, with the remaining $1.5 million for authorized operations on Forest Service land and for DOE’s lease tracts. BLM and the Nuclear Regulatory Commission (NRC), which is responsible for overseeing some aspects of uranium operations on federal land, do not coordinate efforts to establish and review financial assurances for in situ recovery operations, which use a series of wells to extract uranium. Such operations account for a large percentage of the total financial assurances held by the agencies. Federal agencies do not have reliable data on the number and location of abandoned uranium mine sites on federal land or a definitive cost for their cleanup. There are likely thousands of abandoned uranium mine sites on federal land, but GAO identified significant limitations in agencies’ data that make their databases generally unreliable. For example, these databases do not have complete data and do not use a consistent definition of an abandoned mine site. Agencies do not know how many sites will need cleanup, and they do not have information on the total cost to clean up these sites. 
Based on agencies’ experiences with cleanup at some sites, cleanup costs could vary significantly from thousands to hundreds of millions of dollars, depending on site-specific conditions and the amount and type of work required at each site. GAO recommends, among other things, that federal agencies better coordinate their efforts when establishing financial assurances and develop a consistent definition for abandoned mine sites. The Departments of the Interior, Agriculture, and Energy, along with NRC and the Environmental Protection Agency (EPA), concurred with these recommendations. In addition, Interior and EPA provided technical comments, which GAO incorporated as appropriate.
AOC and the CVC team have continued to refine the project’s schedule since the November hearing and have made substantive progress in addressing the issues that we and the Subcommittee have raised, particularly concerning the base project’s schedule. For example, the CVC team reviewed the sequence and duration of the activities scheduled for interior stonework, finish work, and work associated with the base project’s fire protection system, including the acceptance testing to be done by AOC’s Fire Marshal Division. To reflect the results of its review, the team revised the project’s December 2005 and January 2006 schedules, and in collaboration with the team that is planning for CVC operations, enhanced the manner in which the operations activities are incorporated into the project’s master schedule. AOC and its contractors’ staff who are involved in planning for CVC operations agree that the January 2006 schedule identifies the related construction and operations activities. The CVC team has not yet fully reassessed the schedule for the expansion spaces and has not yet reached agreement with the Chief Fire Marshal on the requirements for acceptance testing of those spaces. Finally, the CVC team has continued to meet weekly to identify risks facing the project and to discuss mitigation strategies and actions. As of February 1, 2006, the team had identified 62 risks and developed mitigation strategies for all but 1, which had just been identified. The plans vary in their level of detail and stage of implementation. According to AOC’s December 2005 and January 2006 schedules, the CVC base project will be ready to open to the public with a temporary certificate of occupancy on February 13, 2007, and the House and Senate expansion spaces will be ready on April 24, 2007. To allow for possible delays and start-up time for operations, AOC has proposed an April 2007 opening date for the base project and a May 2007 occupancy date for the expansion spaces. 
By the April opening date for the base project, AOC believes, all construction work in the CVC and East Front will be completed, but the CVC’s occupancy at any one time will be temporarily limited to 3,500, compared with about 4,200, the normal anticipated occupancy level. This temporary limit will be necessary because the “horizontal exits,” or passages, through the expansion spaces, which the life safety code requires for exiting the base CVC project, will not be available until later. These horizontal exits cannot be used until the fire alarm system in the expansion spaces has been fully tested and accepted—work that is not slated to be completed until after the base CVC is scheduled to open. Some additional work will likely be required to provide temporary emergency exit routes from the CVC, but the CVC team does not believe that this work or its costs should be substantial. Mr. Chairman, a brief explanation of AOC’s rationale for proposing a CVC opening with a temporary cap on visitor occupancy may be helpful at this point. The current project schedule calls for completing the construction of both the CVC and the expansion spaces before December 31, 2006, but would delay the start of acceptance testing the portions of the fire alarm system in the expansion spaces until such testing for the base CVC project is completed in February 2007. AOC is planning this approach because it believes that starting the acceptance testing for the expansion spaces earlier would prolong the completion of the acceptance testing in the base project and thereby delay the base project’s opening to the public. More specifically, the fire protection devices for the atriums, which are a part of the horizontal exits ultimately required by code for full occupancy of the base project, would undergo acceptance testing with the expansion spaces, rather than with the base CVC project. 
To accommodate this change, AOC shifted the finish work in the atriums from the base CVC schedule to the expansion space schedule, and is planning to conduct the acceptance testing for the atriums and the expansion spaces at the same time, after the acceptance testing for the base CVC project is done. Until the acceptance testing for the expansion spaces has been completed, AOC’s Chief Fire Marshal has said that the expansion spaces, including the exits through the atriums, cannot be used as emergency exit routes, and therefore AOC must take measures to provide temporary emergency exit routes from the base CVC project and reduce the number of occupants who can be in the base project until the exit routes are available. Our work to date in monitoring the CVC project and the results of our recently completed risk assessment of the project’s schedule point to later opening dates than the schedule indicates. Although the schedule for the base project goes a long way toward responding to our concerns about the amount of time previously provided for a number of activities and extends their duration, CVC team managers and members we interviewed believe that certain work will take longer to complete than the revised schedule allows. For example, they believe that interior stonework and finish work for the base project and the East Front are likely to take longer. According to our risk analysis, which reflects the CVC team’s input and assumes that AOC will successfully address the challenges it faces, the CVC is more likely to be ready for opening with a temporary certificate of occupancy between late April and mid-May 2007 than in February, as indicated in AOC’s current schedule. AOC is now proposing an April 2007 opening date to provide time for possible construction slippages and operations preparation. 
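The statement does not detail the mechanics of the risk assessment, but schedule risk analyses of this kind typically sample uncertain activity durations many times and read likely completion windows off the resulting distribution. The sketch below illustrates that idea with a simple Monte Carlo simulation; the activities, date, and duration ranges are invented placeholders, not the actual CVC schedule or GAO's model:

```python
import random
from datetime import date, timedelta

random.seed(1)

# Invented chain of remaining activities with (low, likely, high) durations
# in days; placeholders only, not the actual CVC schedule.
activities = [(30, 45, 70),   # e.g., remaining stonework
              (20, 30, 55),   # e.g., finish work
              (15, 25, 40)]   # e.g., acceptance testing
start = date(2006, 11, 1)

def simulate_finish(trials=10_000):
    """Sample a triangular duration for each activity in the chain and
    return the sorted list of total-duration outcomes."""
    return sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities)
        for _ in range(trials)
    )

totals = simulate_finish()
median_days = totals[len(totals) // 2]
print("median completion:", start + timedelta(days=round(median_days)))
```

Percentiles of `totals` (say, the 50th through 80th) give a likely-date window of the "between late April and mid-May" form, rather than a single point estimate.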
The additional time AOC says is necessary for operations preparation after construction completion would mean that the CVC would be ready for opening with a temporary cap on visitor occupancy by about the end of May 2007, according to our analysis. Similarly, our analysis suggests that the House and Senate expansion spaces are more likely to be ready in mid-August or early September 2007 than in April or May 2007. We believe the later time frames are more likely because (1) AOC has scheduled the acceptance testing of the expansion spaces after the acceptance testing of the base project and, according to our work, the base project testing will take longer than scheduled and (2) AOC’s Chief Fire Marshal believes that the acceptance testing of the expansion spaces will take longer than scheduled. We have discussed the results of our analysis with AOC, and it continues to believe that it will be able to meet its April and May 2007 time frames for the CVC and the expansion spaces, respectively. Furthermore, AOC said that it and the CVC team will continuously review the schedule to identify opportunities for improvement. For example, AOC pointed out that it may be able to have the acceptance testing of the expansion spaces done in segments so that Members and staff will not have to wait for the entire facility to be tested before they can occupy their space. AOC also believes it may be able to revise the scheduling of some East Front mechanical work to save time. We agree that AOC should continuously look for ways to improve the schedule and that improvements may be possible. However, we also believe that AOC will be challenged to meet even the later opening dates we have identified given the problems, challenges, risks, and uncertainties the project faces. A discussion of these follows:

Delivery of stone and pace of stone installation remain critical.
Although the CVC team has made progress in installing interior wall and floor stone, work on the wall stone has fallen behind schedule in several areas, and the project still faces significant challenges, risks, and uncertainties in this area. These include whether sufficient quantities of the appropriate wall stone will be received in time and whether the pace of installation will be sufficient to complete this work as scheduled. According to information provided by the sequence 2 contractor on February 10, the wall stone supplier still had a 20-truckload backlog and was not shipping wall stone at the scheduled rate, resulting in a delivery shortfall of about 6,000 cubic feet. According to AOC’s construction management contractor, stone supply is not affecting interior wall stone installation because a large quantity of stone is currently on site; however, the contractor is concerned about the stone supplier’s ability to meet current and future requirements, which include stone for the East Front, adequate stone to maintain productivity, and working off the 20-truckload backlog. The pace of installation is also an issue. The sequence 2 contractor has recently increased the number of stone masons working on the project and has begun meeting the installation targets in its work plan. However, if the wall stone installation targets are not achieved, whether because the masons are less productive than planned or work spaces are not ready for stonework to begin, completion delays are likely. The sequence 2 contractor has already encountered work spaces in the service level, the orientation lobby, and the East Front that were not available for stonework because concrete was out of tolerance or masonry walls were not ready for wall stone to be hung.
Finally, the sequence 2 contractor still needs to install about 120,000 square feet of floor stone in the CVC and could have problems meeting the scheduled completion dates if not enough masons are available, the amount of floor space available is insufficient because other finish work is not done, or other trades are working in the areas where floor stone is to be laid. As of February 10, AOC had not received a floor stone installation plan requested from the sequence 2 contractor, but the sequence 2 contractor said that it intends to finish the plan soon.

Stacking of trades could delay completion.

Continued delays, particularly in wall stone installation, could adversely affect the sequence 2 contractor’s ability to accomplish all of the required finish work on schedule. The sequence 2 contractor has been making progress relative to its current plan for installing wall stone in the auditorium and the orientation lobby, but according to the current project schedule, wall stone installation is delayed in other areas, such as the East Front, the great hall, and the orientation theaters’ exterior walls. Furthermore, as of February 10, although the contractor had completed 10 of the 13 milestones relating to wall stone that are being tracked for the Subcommittee, none of the 10 was completed by the date set in the September 2005 baseline schedule, and only 4 were completed by the date set in the November 2005 schedule. (See app. I.) If delays continue, a stacking of trades such as we described at the Subcommittee’s November hearing could hold up finish work, such as drywall or ceiling installation, electrical and plumbing work, plastering, or floor stone installation. Such a situation could also increase the risk of accidents and injuries. The CVC team has also identified “trade stacking” as a high risk.
The sequence 2 contractor acknowledges the risk, but said that it has structured its schedule to avoid the risk and plans to monitor progress closely to avoid problems. We acknowledge that these steps can be helpful; however, the more the wall stone schedule slips, the greater is the likelihood of “trade stacking,” since more and more work will have to be done in less time to meet the schedule. AOC’s construction management contractor agrees that this is a serious potential problem.

Complex building systems remain a significant risk.

The CVC will contain complex building systems, including systems for heating, air conditioning, and ventilation; fire protection; and security. These systems not only have to perform well individually, but their operation has to be integrated. If the CVC team encounters any significant problems with their functioning, either individually or together, during commissioning or testing, the project could be seriously delayed. AOC and the CVC team are aware of these risks and have been taking steps to mitigate them as part of their risk management process. Yet despite these steps, a significant problem could arise during commissioning or testing, and it is important that the team be prepared for such an event.

Building design continues to evolve.

The CVC has undergone a number of design changes, and design changes are continuing for a number of building components, such as the exhibit gallery and the fire protection and security systems. Some of these changes have resulted in delays, such as in the exhibit gallery and in the East Front. In addition, designs or shop drawings for some elements of the project, such as aspects of the facility’s fire protection systems, have not yet been fully approved and are subject to change. At this stage of the project’s construction, one might expect the number of design changes to dwindle. However, this is not the case. For example, more than 20 design changes or clarifications were issued last month.
Additional design changes are being considered, and the potential exists for such changes to further adversely affect the schedule.

Multiple critical activity paths complicate schedule management. In its report on the project’s January 2006 schedule, AOC’s construction management contractor identified 18 critical activity paths—4 more than in the contractor’s report on the project’s October 2005 schedule—that are crucial to meeting the scheduled completion date. In addition, the construction management contractor said that several noncritical activities have fallen behind schedule since November 2005, and a number of these have moved closer to becoming critical to the project’s completion. As we have previously said, having a large number of critical and near-critical activities complicates project management and increases the risk of missing completion dates. We believe that the CVC team will be particularly challenged to manage all of these areas concurrently and to deal effectively with problems that could arise within these areas, especially if multiple problems arise at the same time.

We currently estimate that the total cost to complete the entire CVC project is about $555 million without an allowance for risks and uncertainties and could be as much as about $584 million with such an allowance. As table 1 indicates, our current estimate without an allowance for risks and uncertainties is about $12 million higher than the estimate without such an allowance that we presented at the Subcommittee’s November 16, 2005, hearing. This $12 million increase is largely attributable to additional delay costs estimated by AOC’s construction management contractor and actual and anticipated changes in the design and scope of the project. In particular, changes in the project’s fire protection system, which we discussed at the Subcommittee’s October 18, 2005, CVC hearing, are now expected to cost more than previously estimated. 
Specifically, the system’s acceptance testing is expected to be more extensive and to take place later than originally anticipated, and additional temporary construction may be required to ensure fire safety if the CVC is opened to the public before the Senate and House expansion spaces are completed. This additional construction would involve designing and installing—and then removing— temporary walls and perhaps taking other fire protection measures to create emergency exits from the CVC. As discussed in more detail earlier in this statement, the need for temporary construction may be reduced or eliminated if the fire safety acceptance testing of the expansion spaces and of the CVC can be performed concurrently, rather than over two separate periods, as would be likely if the CVC is opened to the public before the expansion spaces are completed. We discussed this issue during the Subcommittee’s July 14, 2005, CVC hearing and recommended then that AOC estimate the cost of these temporary measures so that Congress could weigh the costs and benefits of opening the CVC before the expansion spaces are completed. AOC has agreed to provide this estimate to Congress when it has more information on the status of construction progress on the CVC and expansion spaces and the specific steps that will be necessary to provide adequate temporary exit routes. We now estimate that the total cost to complete the entire project with an allowance for risks and uncertainties could be as much as $584 million, or about $25 million more than we estimated in November 2005. 
This increase reflects the potential for the project to incur additional costs if difficulties arise in commissioning and testing its complex and sophisticated fire protection, ventilation, and security systems; significant problems with the building’s design are identified and need to be corrected during construction; delays cost more than anticipated; and significant discretionary changes in the project’s design and scope are requested. To date, about $528 million has been provided for CVC construction. This amount does not include about $7.7 million that was made available for either CVC construction or operations. According to AOC, it expects to use about $2 million of this amount for construction. To obtain the additional funding that it expected to need to complete the project’s construction, AOC, in December 2005, requested $20.6 million as part of its budget request for fiscal year 2007. This request was based, in part, on discussions with us and took into account our November 16, 2005, estimate of the cost to complete the project’s construction without an allowance for risks and uncertainties and funding from existing appropriations. The request also reflected updates to our November estimate through mid-December 2005. At that time, the $20.6 million request for additional appropriations, coupled with the additional funds that AOC planned to use from existing appropriations, would have been sufficient to cover the estimated cost to complete construction without an allowance for risks and uncertainties. Our work since mid-December 2005 indicates that AOC will need about $5 million more, or about $25.6 million in additional funds, to complete construction without an allowance for risks and uncertainties. 
This increase reflects the number and magnitude of potential change orders that CVC team members and we believe are likely and additional costs associated with extending the project’s expected completion date beyond March 31, 2007, the date contemplated in our last cost estimate. AOC generally agrees with our estimate, particularly with respect to having sufficient contingency funds available for necessary design or scope changes or for additional delay-related costs. Public Law 108-83 limits to $10 million the amount of federal funds that can be obligated or expended for the construction of the tunnel connecting the CVC with the Library of Congress. As of February 14, 2006, AOC estimated that the tunnel’s construction would cost about $9.8 million, and AOC’s total obligations for the Library of Congress tunnel construction work totaled about $8.7 million. AOC’s remaining estimated costs are for potential changes. On February 13, 2006, AOC awarded a contract for the work to connect the tunnel to the Jefferson Building. This work is costing more than AOC had estimated—a possibility we raised in our November 16 testimony before the Subcommittee. Because this work involves creating an opening in the building’s foundation and changing the existing structure, we believe that AOC is likely to encounter unforeseen conditions that could further increase its costs. Therefore, we included additional contingency funds for this work in our $555 million estimate of the cost to complete the CVC project’s construction. Both AOC and we plan to monitor the remaining tunnel and Jefferson Building construction work closely to ensure that the statutory spending limit is not exceeded. Mr. Chairman, in conclusion, AOC has responded to many of the schedule- related concerns we have identified, but its planned opening date for the CVC is still somewhat optimistic. 
For AOC to meet even our estimated opening time frame, we believe that it is critically important for the CVC team to do the following:

Aggressively take all necessary and appropriate actions to install interior wall and floor stone as expeditiously as possible, including seeing that sufficient quantities of masons, stone, and work space are available when needed to meet the wall stonework plan and the forthcoming floor stone installation plan.

Closely monitor construction to identify potential “trade stacking” and promptly take steps to prevent it or effectively address it should it occur.

Reassess its risk mitigation plans to ensure that the team takes the steps necessary to prevent a major building system problem during commissioning or testing and has measures in place to deal quickly with problems should they arise.

Carefully consider the necessity of proposed scope and design changes and attempt to minimize the impact of necessary changes on the project’s schedule and cost.

Reassess the capacity of the CVC team (AOC and its contractors) to effectively manage and coordinate the schedule and work from this point forward, particularly with respect to the large number of activities that are currently critical, or close to being critical, to the project’s timely completion.

Identify and consider the pros and cons (including the estimated costs) of opening the CVC and expansion spaces at about the same time and provide this information to Congress.

We have discussed these actions with AOC, and it generally agrees with them. It pointed out that it would be in a better position to assess the pros and cons of opening the CVC and the expansion spaces concurrently when construction is further along and it becomes clearer when the work will actually be done. This appears reasonable to us. We would be pleased to answer any questions that you or Members of the Subcommittee may have. 
For further information about this testimony, please contact Bernard Ungar at (202) 512-4232 or Terrell Dorn at (202) 512-6923. Other key contributors to this testimony include Shirley Abel, John Craig, Maria Edelstein, Elizabeth Eisenstadt, Brett Fallavollita, Jeanette Franzel, Jackie Hamilton, Bradley James, and Scott Riback.

[App. I milestone chart items: Wall Stone Area 2 base; Wall Stone Area 3 base; Wall Stone Area 1 pedestals; Wall Stone Area 2 pedestals; Install Walls Sta. 1+00-2+00; Install Roof Sta. 1+00-2+00; Install Roof Sta. 0+00-1+00; 12/07/05.]

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO testified before the Senate Subcommittee on the Legislative Branch, Committee on Appropriations to provide the results of a risk-based analysis of schedule and cost for the Capitol Visitor Center (CVC). Our remarks focused on (1) our assessment of the risks associated with the Architect of the Capitol's (AOC) December 2005 schedule, and our estimate of a time frame for opening the project to the public; and (2) the project's costs and funding, including the potential impact of scheduling issues that have arisen since the Subcommittee's November 16, 2005, hearing on the CVC project's schedule and cost. Since the Subcommittee's November 16 CVC hearing, AOC and the CVC team have moved the project's construction forward and significantly revised the schedule, particularly for the base project. For example, they have reached agreement with AOC's Chief Fire Marshal on the schedule for testing the base project's life safety systems and have enhanced the manner in which the project's operations schedule is incorporated into the project's master schedule. In addition, they have reviewed and revised the schedule, postponing the opening dates for the CVC and the House and Senate expansion spaces by about 2 months each. Under AOC's revised schedule, the CVC would be open to the public in February 2007 with a temporary cap on visitor occupancy, and the expansion spaces would be open in April 2007. However, to allow for possible delays and start-up time for operations, AOC is proposing to open the CVC in April 2007 and the expansion spaces in May 2007, at which time the temporary cap on CVC occupancy would be lifted. We concur with AOC on the need to postpone the opening dates, but do not believe that AOC has scheduled enough time to complete several of the project's critical tasks and to resolve the problems, challenges, risks, and uncertainties that AOC and the CVC team are attempting to address. 
If they are successful in addressing these issues, we believe that the CVC can be opened to the public with the temporary cap on visitor occupancy in May 2007 and that the expansion spaces can be opened beginning in mid-August to early September 2007. Congress may be able to begin occupying the expansion spaces earlier if AOC implements a phased opening plan it is considering. However, if AOC experiences major problems completing construction, such as with installing interior stone or testing major building systems, the work could be finished even later than we have estimated. According to our current estimate, the total estimated cost to complete the entire CVC project is about $555 million without an allowance for risks and uncertainties. This estimate exceeds our November 16, 2005, estimate by about $12 million because we and AOC's construction management contractor are now projecting further delay-related costs. Changes in the project's design and scope have also been occurring, and more are likely. For example, the project's fire protection system has been evolving, and the system is now expected to cost more than previously estimated. To date, about $528 million has been provided for CVC construction. Thus, we now estimate that another $25.6 million will be needed to complete construction without an allowance for risks and uncertainties and taking into account funding from existing appropriations that AOC is planning to use. With an allowance for risks and uncertainties, we now estimate that the project could cost as much as about $584 million at completion, or about $25 million more than we estimated in November 2005. Estimated costs for the tunnel connecting the CVC with the Library of Congress are still within, but are now approaching, the $10 million statutorily mandated limit.
Over the last several years, the Congress has expressed concern over the number and costs of disaster declarations. GAO has also identified the cost of disaster assistance as one of FEMA’s major management challenges. Figure 1 depicts the number of major disasters declared since fiscal year 1991 and FEMA’s share of the Public Assistance program costs for projects associated with them. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (the Stafford Act), as amended, establishes the process for states to request a presidential disaster declaration. The Stafford Act requires that “requests for a declaration by the President that a major disaster or emergency exists shall be made by the Governor of the affected state.” As part of the request to the President, a governor must affirm that the state’s emergency plan has been implemented and the situation is of “such severity and magnitude that effective response is beyond the capabilities of the State and the affected local governments and that Federal assistance is necessary.” Before a governor asks for disaster assistance, federal, state, and local officials normally conduct a joint preliminary damage assessment. FEMA is responsible for recommending to the President whether to declare a disaster and trigger the availability of funds as provided for in the Stafford Act. FEMA uses the damage assessment data in preparing its recommendation to the President. When an obviously severe or catastrophic event occurs, a disaster may be declared before the preliminary damage assessment is completed. In response to a governor’s request, the President may declare that a major disaster or emergency exists. This declaration activates numerous assistance programs from FEMA and may also trigger programs operated by other federal agencies, such as the Departments of Agriculture and Labor, the Federal Highway Administration, the Small Business Administration, and the U.S. 
Army Corps of Engineers, to assist a state in its response and recovery efforts. The federal disaster assistance provided under a major disaster declaration has no dollar limit. FEMA provides assistance through one or more of the following grant programs:

Public Assistance provides aid to state government agencies; local governments; Indian tribes, authorized tribal organizations, and Alaskan Native villages; and private nonprofit organizations or institutions that provide certain services otherwise performed by a government agency. Assistance is provided for projects such as debris removal, emergency protective measures to preserve life and property, and repair and replacement of damaged structures, such as buildings, utilities, roads and bridges, recreational facilities, and water-control facilities (e.g., dikes and levees).

Individual Assistance provides for the necessary expenses and serious needs of disaster victims that cannot be met through insurance or low-interest Small Business Administration loans. FEMA provides temporary housing assistance to individuals whose homes are unlivable because of a disaster. Other available services include crisis counseling to help relieve any grieving, stress, or mental health problems caused or aggravated by the disaster or its aftermath. FEMA provides unemployment compensation and can cover a percentage of the medical, dental, and funeral expenses that are incurred as a result of a disaster.

The Hazard Mitigation Program provides additional funding (currently up to 15 percent of total federal aid for recovery from the disaster) to states to assist communities in implementing long-term measures to help reduce the potential risk of future damages to facilities.

Figure 2 shows the obligations for each of these three general programs for fiscal years 1991 through 2000. As this figure indicates, the Public Assistance program is the largest of the three grant categories, in terms of dollars expended. 
Not all programs are activated for every disaster. The determination to activate a program is based on the needs identified during the joint preliminary damage assessment. For instance, some declarations may provide only Individual Assistance grants and others only Public Assistance grants. Hazard Mitigation grants, on the other hand, are available for most declarations. In addition to its central role in recommending to the President whether to declare a disaster, FEMA has primary responsibility for coordinating the federal response when a disaster is declared. Typically, this response consists of providing grants to assist state and local governments and certain private nonprofit organizations to alleviate the damage resulting from such disasters. Once a federal disaster is declared, FEMA usually establishes a field office at or near the disaster site. This office is generally staffed with a crew of permanent, full-time FEMA employees; a cadre of temporary reserve staff, also referred to as disaster assistance employees; and the state’s emergency management personnel. Damage estimates for each project, known as project worksheets, can be prepared either by FEMA staff or by personnel from applicants, such as state agencies, communities, and certain nonprofit organizations. Full-time FEMA staff then review these project worksheets for final approval. To facilitate their review, approval, and funding, projects are divided into two groups. Projects are considered small if their estimated cost does not exceed $50,600. If a FEMA employee or state representative prepares a worksheet for a small project and it passes all appropriate reviews, it is funded according to its estimated costs. However, if an applicant prepares a project worksheet, FEMA or state officials may verify the accuracy of the claims by validating the project’s cost and eligibility. Typically, officials validate a sample of an applicant’s small projects before approving the funding for them. 
Large disaster projects, whose estimated costs exceed $50,600, are funded incrementally as work on each phase is completed. In all cases, the states, as the grantees, are responsible for disbursing FEMA funds to the applicants and for certifying that all costs were appropriate and that work on the project was completed in accordance with the approved project estimates. The Stafford Act sets the federal share for the Public Assistance program at no less than 75 percent of eligible costs of a disaster. The President can increase the federal share for the Public Assistance program if it is determined that the disaster costs greatly exceed a state’s financial capabilities. The federal share can sometimes reach 100 percent for emergency work, for limited periods, if it is deemed necessary to prevent further damage, protect human lives, or both. FEMA officials indicated they are reluctant to recommend a 100-percent federal share for projects because this percentage provides no incentives for the states to control costs. To better use disaster resources and devolve major management responsibility for the Public Assistance program to the states, the Director of FEMA implemented a pilot project in 2000 to allow those states that have the capability to do so to manage the Public Assistance segment of their own small disasters. Under this Public Assistance pilot project, the states and affected communities make all project eligibility determinations and ensure that all disaster projects comply with current codes and standards, as well as with federal laws, regulations, and FEMA policies. To participate in this pilot project, a state must, in FEMA’s view, be capable of managing its own disaster recovery program, have a sound financial accounting system to track disaster projects, and enter into an operational agreement with FEMA that defines its roles and responsibilities. In 1999, FEMA published its criteria for evaluating a governor’s request for a disaster declaration. 
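The funding rules described above (the $50,600 small-project cutoff and the federal cost share of at least 75 percent, rising to 100 percent in limited cases) can be sketched as follows. This is an illustrative sketch only; the function and constant names are our own, not FEMA's.

```python
# Sketch of the Public Assistance funding rules described above.
# Names and structure are illustrative assumptions, not FEMA's own.

SMALL_PROJECT_CAP = 50_600   # projects at or below this estimate are "small"
MIN_FEDERAL_SHARE = 0.75     # Stafford Act floor; may rise to 1.0 in limited cases

def classify_project(estimated_cost):
    """Return 'small' or 'large' under the $50,600 threshold."""
    return "small" if estimated_cost <= SMALL_PROJECT_CAP else "large"

def federal_share(eligible_cost, share=MIN_FEDERAL_SHARE):
    """Federal portion of eligible costs; share must be 75-100 percent."""
    if not MIN_FEDERAL_SHARE <= share <= 1.0:
        raise ValueError("federal share must be between 75 and 100 percent")
    return eligible_cost * share

print(classify_project(48_000))    # small: funded on the approved estimate
print(classify_project(250_000))   # large: funded incrementally by phase
print(federal_share(100_000))      # 75000.0 at the statutory minimum share
```

The classification drives how funds flow: small projects are paid on the estimate once reviewed, while large projects are reimbursed as each phase of work is completed.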
For the Public Assistance program, FEMA identified two specific financial thresholds, as well as several other less specific criteria, such as severe local impact, previous actions taken that helped mitigate the disaster damages, and the overall impact of multiple recent disasters in the state. Any or all of these, as well as “other relevant information,” can be used to determine whether a disaster declaration should be recommended under the Public Assistance program. FEMA’s explicit financial thresholds are less accurate measures of a state’s true ability to respond effectively to a disaster than other available financial measures. In responding to congressional interest in more objective disaster declaration criteria, FEMA set two financial thresholds to be considered in evaluating when a disaster has exceeded a state’s capacity to respond without federal assistance. First, it estimates the per-capita impact of the disaster damages in the state through the preliminary damage assessment process. In 1999, FEMA set a figure of $1.00 per capita as a critical threshold that might warrant federal assistance. This figure, which is currently $1.04, is adjusted annually for inflation. Second, it set a $1 million threshold for statewide Public Assistance damages. This threshold, however, is not adjusted for inflation. In 1986, FEMA proposed the $1.00-per-capita threshold as a means of gauging state fiscal capacity. The measure was based on the 1983 per-capita personal income nationwide, then estimated at $11,667. FEMA thought it reasonable “that a State would be capable of providing $1.00 for each resident of that State to cover the costs of State efforts to alleviate the damage which results from a disaster situation,” inasmuch as this amount was roughly equivalent to 0.1 percent of estimated General Fund expenditures by states. 
FEMA proposed to adjust this figure each year by the ratio of each state’s personal income to the nationwide average, thus making it more sensitive to interstate differences over time. This proposal met with opposition from state and local officials and resulted in a provision in the Stafford Act that prohibited denying federal disaster assistance “solely by virtue of an arithmetic formula or sliding scale based on income or population.” Nevertheless, the unadjusted $1.00-per-capita threshold continued to be used informally as part of FEMA’s preliminary damage assessment efforts. In 1998, FEMA submitted a concept paper for consideration by state emergency managers. In the paper, the agency recommended that the per-capita threshold be set at $1.51. This figure accounted for inflation since 1986, but was no longer linked to average state tax expenditures as the 1986 threshold had been. In response to comments from state emergency management officials, FEMA used the $1.00-per-capita threshold when it published its formal criteria in 1999. It further provided that adjustments based on annual inflation be made and applied uniformly to states. A state’s capacity to respond to a disaster using state resources depends on several factors, the most important of which is perhaps the underlying strength of the state’s tax base and whether that base is expanding or in decline. A state’s tax base represents the resource base against which it can draw to fund its public services needs, including the necessary repairs that arise in the wake of a disaster. An expanding economy also provides more potential revenues than one that is flat or in decline. A readily available indicator of states’ funding capacities and one commonly used in many formula grant programs is state per-capita personal income. Per-capita income provides a quantitative measure of income received by state residents. 
As such it provides a reasonable starting point for gauging a state’s capacity to bear the burden of making the necessary repairs in the aftermath of a major disaster. Per-capita personal income is commonly used in federal grant programs as a basis for sharing program costs between states and the federal government. Better measures of a state’s fiscal capacity, however, exist. Per-capita personal income, while providing a reasonable indication of state funding capacity, has a number of defects as well. In the past, we have found per-capita income to be a relatively poor indicator of a state’s fiscal capacity because it does not comprehensively measure income potentially subject to state taxation. For example, it does not include income produced in a state unless it is received as income by a state resident. Thus, profits retained by corporations for business investment, though potentially subject to state taxation, are not included in a state per-capita income measure because they do not represent income received by state residents. We have previously reported that Total Taxable Resources (TTR), a measure developed by the U.S. Department of the Treasury, is a better measure of state funding capacity in that it provides a more comprehensive measure of the resources that are potentially subject to state taxation. For example, TTR includes much of the business income that does not become part of the income flow to state residents, undistributed corporate profits, and rents and interest payments made by businesses to out-of-state stock owners. This more comprehensive indicator of state funding capacity is currently used to target federal aid to low-capacity states under the Substance Abuse and Mental Health Service Administration’s block grant programs. In the case of FEMA’s Public Assistance program, adjustments for TTR in setting the threshold for a disaster declaration would result in a more realistic estimate of a state’s ability to respond to a disaster. 
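One way to picture a TTR adjustment is a sketch that scales the $1.00 base threshold by a state's TTR per capita relative to the national average. The scaling rule and all dollar figures below are our own assumptions for illustration; they are not FEMA's or Treasury's published numbers.

```python
# Illustrative sketch of a TTR-scaled per-capita damage threshold.
# The scaling rule and the TTR figures are invented for illustration only.

BASE_THRESHOLD = 1.00  # dollars of estimated damage per resident

def ttr_adjusted_threshold(state_ttr_per_capita, national_ttr_per_capita):
    """Scale the base threshold by a state's relative taxable resources."""
    return BASE_THRESHOLD * (state_ttr_per_capita / national_ttr_per_capita)

national = 36_000  # hypothetical national TTR per capita, in dollars
# A state with below-average taxable resources gets a lower threshold
# (federal aid triggers sooner); an above-average state gets a higher one.
print(round(ttr_adjusted_threshold(31_000, national), 2))  # about 0.86
print(round(ttr_adjusted_threshold(39_500, national), 2))  # about 1.10
```

Under such a rule, lower-capacity states would qualify for federal assistance at lower per-capita damage levels, which is the effect the real TTR-based figures in the following discussion illustrate.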
If, instead of setting a uniform $1.00-per-capita threshold, FEMA had set the average threshold at $1.00 but allowed it to vary according to state TTR estimates, the Tennessee threshold, for example, would have been $0.88 in 1998, while Washington State’s would have been $1.05 per capita. TTR also has the advantage of providing a more sensitive adjustment for growth over time in a state’s fiscal capacity than does adjustment for inflation based on personal income. For example, TTR in the United States is estimated to have grown from $4.4 trillion in 1986 to $9.9 trillion in 1998. If the $1.00-per-capita threshold proposed in 1986 had been adjusted at this rate in 1998, the financial threshold would have been $2.24 per capita (rather than the $1.51 inflation-adjusted figure proposed by FEMA and the $1.04 threshold currently in effect). Furthermore, since TTR provides estimates of each state’s fiscal capacity, adjustments for TTR growth would vary by state. For example, if adjusted for the TTR growth rates from 1986 to 1998, the financial threshold for disaster declarations in Tennessee would be $2.35 per capita, and in Washington State would be $2.65 per capita. We believe that implementing TTR would not be a violation of the statutory prohibition against basing aid solely on an arithmetic formula or sliding scale based on income or population. Rather, TTR provides a more refined measure of a state’s capacity to respond to a disaster than FEMA’s existing $1.04-per-capita measure. It is our expectation that FEMA would continue to take into account several criteria in deciding, for any given incident, whether to recommend a disaster declaration to the President. When FEMA published its declaration criteria in 1999, it maintained that the $1.00-per-capita measure was not a violation of the statutory prohibition because the agency examines all the other listed criteria when it decides whether to recommend a disaster declaration. 
We agree and believe that the use of TTR or some other more sensitive measure in place of the per-capita measure, together with other criteria such as those identified in the regulations, would be consistent with the statute. In its 1999 regulations, FEMA established a second quantitative measure of a state’s ability to respond to disasters. The agency set a $1 million statewide damage criterion as an indicator that disaster damages might require federal assistance. As a rationale for setting this threshold, FEMA cited its “belief that we can reasonably expect even the lowest population states to cover this level of public assistance damage.” In effect, this criterion set the per-capita threshold at greater than $1.00 for those seven states with populations under 1 million. FEMA also made no provisions to adjust this threshold for future inflation. In its 1999 regulations, FEMA identified five other factors that it considers in evaluating a governor’s request for a Public Assistance disaster declaration:

Localized impacts. FEMA considers the extent to which damages are concentrated heavily at the county or local government level even if the statewide per-capita criterion is not met. According to FEMA, this consideration is particularly relevant where critical facilities are involved or where localized per-capita damages might be “extremely high.” FEMA offers no specific threshold for localized per-capita impact but remarks that the agency has seen damages “in the tens or even hundreds of dollars per capita” in situations where the statewide per-capita threshold was not met.

Insurance coverage in force. FEMA reduces the grant by the amount of insurance coverage that “is in force or should have been in force as required by law or regulation” when the disaster occurred. As discussed in appendix I, insurance coverage for Public Assistance grants is currently only a postdisaster condition of receiving a grant; that is, if grant recipients do not currently have insurance, they must agree to procure it as a condition of receiving federal assistance. FEMA is now attempting to define a minimum level of insurance coverage that would be reasonable to require public entities to maintain in order to be eligible for public assistance. The issue is still under review.

Hazard mitigation. FEMA attempts to encourage mitigation efforts to avert or reduce damages from future disasters by explicitly considering previous mitigation efforts that may have reduced the damages from the current disaster. FEMA suggests that a state that has made such efforts in the past is more likely to receive disaster assistance when the estimated Public Assistance damages fall below the per-capita criterion.

Recent multiple disasters. In evaluating a governor’s request for assistance, FEMA also considers the cumulative impact that disasters in the previous 12 months may have had on the state’s or locality’s ability to respond effectively. FEMA includes both Stafford Act and state-declared disasters and the extent to which the state has spent its own funds.

Other federal assistance programs. FEMA also considers other federal sources of disaster relief that might more appropriately meet the needs created by the disaster. Disaster relief in various forms is also available under the programs of a number of federal agencies, including the Federal Highway Administration, the Department of Agriculture, and the Small Business Administration.

A disaster declaration can be recommended on the basis of any of these criteria, as well as of “other relevant,” but unspecified, information. We analyzed the 79 major presidential disaster declarations that were issued in the 2 years since FEMA published its revised regulations and in which Public Assistance grants were made available. 
Sixty of those disaster declarations—about 76 percent—met the statewide per-capita dollar threshold for a presidential disaster declaration; in all but four cases the damages were estimated to be greater than $1 million. However, some disasters were declared when the per-capita damage was substantially less than $1. In one case, FEMA cited significant localized impact as the reason for recommending a disaster declaration when the preliminary estimates indicated a statewide per-capita cost of $0.17. In another case, the state had incurred damages amounting to $0.12 per capita, but the declaration was based on the occurrence of several earlier disasters. These declarations may well have been justified in the circumstances peculiar to these disasters and consistent with existing regulations. However, they illustrate the latitude afforded FEMA by its subjective and nonspecific criteria for determining whether an effective response to a disaster is beyond the capabilities of the state and the affected local governments.

Our finding that 76 percent of disasters declared in 1999 and 2000 met the $1.00 per-capita criterion may reflect some improvement over the earlier findings of FEMA’s Inspector General. (See app. I.) In 1999 the Inspector General reported that only 60 percent of disasters from 1988 through 1998 met this criterion. This change may be attributable to FEMA’s 1999 publication of formal declaration criteria. However, because disasters are extremely variable in their occurrence and severity and our available sample was restricted to a 2-year period, such attribution may be premature.

Our May 1996 report on FEMA’s Public Assistance program identified many weaknesses and made several recommendations to strengthen the program, especially its processes for determining project eligibility. In addition, FEMA’s Office of Inspector General noted weaknesses in FEMA’s ability to establish project eligibility.
Those reports also noted that, in addition to having accurate, useful, and readily available policies and procedures, FEMA employees—especially temporary employees—should receive training in the appropriate application of the latest policies and information systems.

To address these problems, FEMA redesigned the Public Assistance program and implemented the changes in October 1998. As part of this redesign, FEMA developed and disseminated numerous regulations, policies, procedures, user manuals, and guides. FEMA also developed a new training curriculum for its permanent and temporary staff. While there is evidence that staff regularly use the new policies and procedures, some eligibility problems persist. These may in part be due to the lack of a formal credentialing mechanism to ensure that staff authorized to review and approve Public Assistance projects or obligate federal funds have received adequate training. The recent emphasis on devolving the management of small disasters from FEMA to the states increases the importance of FEMA’s processes and controls over disaster projects to help ensure that they meet eligibility criteria and that federal funds are spent efficiently and effectively.

Our 1996 report noted the need for clearer eligibility criteria to improve the accuracy and consistency of eligibility determinations for individual projects once a disaster has been declared. It stated that “FEMA officials may have to make subjective judgments because the criteria lack specificity and/or concrete examples.” For example, officials at FEMA’s regional offices noted problems in “determining the standards (building codes) that are applicable to repair/restoration work,” a process that affects decisions on whether a facility should be repaired or replaced. Our report also stated that the criteria had not been systematically updated and disseminated and that some decisions were unofficial and unwritten.
We concluded that clearer criteria were essential because FEMA relies on temporary personnel with limited training to prepare its project worksheets. Furthermore, as the magnitude of disaster damage increased, it was more likely that FEMA would have to call on additional, possibly less thoroughly trained, temporary employees, who were likely to be less familiar with the eligibility rules and regulations.

When FEMA redesigned its Public Assistance program, it addressed the identified shortcomings by revising or developing its program guidance, which included policies, standard operating procedures, handbooks, guides, digests, and fact sheets. FEMA has developed or revised Public Assistance policies in 35 areas or topics since the program’s redesign in 1998. The new and revised publications were distributed to FEMA’s regional offices to make them available to the personnel staffing disaster field offices. In addition, FEMA placed the documents on its Web site for easy access. These publications include (1) an easy-to-read summary of program policies; (2) a guide describing the provision and application of procedures for program grants and an index of relevant portions of pertinent regulations and legislation; (3) an applicant handbook containing questions and answers on how to apply for a program grant; and (4) a guide for planning, mobilizing, and controlling large-scale debris clearance and disposal operations.

To document its business processes and ensure that all personnel are familiar with its current doctrine, FEMA has continually reviewed and, as necessary, revised its standard operating procedures. FEMA’s Web site lists these procedures and, in some cases, provides details on them.
For example, at the time of our review, the Web site contained procedures on (1) the roles and responsibilities of a Public Assistance Coordinator, (2) how to conduct a kickoff meeting, (3) the process for project formulation, (4) the procedures used to validate small projects, (5) immediate needs funding, and (6) how to use the cost-estimating format for large projects. According to the FEMA staff we contacted, those tools are used and viewed as useful in every field office. Several regional managers said they had noticed an increased reliance on this guidance and a corresponding decrease in the tendency of employees to “shoot from the hip” when deciding on a project’s eligibility under the Public Assistance program.

While FEMA has taken actions to address the issues identified in our 1996 report, FEMA officials believe that congressional direction would be needed for the agency to change several policies our 1996 report questioned. These include eliminating eligibility for (1) revenue-generating nonprofit organizations, (2) facilities not actively used to deliver government services, and (3) postdisaster beach renourishment, as well as increasing the damage threshold for replacing a facility.

Despite the efforts that FEMA has made to improve its criteria, eligibility problems persist. FEMA’s Office of Inspector General audits a sample of disaster assistance recipients each year. We reviewed the 281 audits conducted during fiscal years 1998 through 2000 that involved Public Assistance grants. These audits found 226 cases of ineligible or questionable claims. In nearly half of these cases, the Inspector General found that FEMA had paid duplicate claims for reimbursement for disaster projects or claims for reimbursement for projects that should have been funded by another agency.
For example, some of the costs for disaster projects were found to be already covered by a private or government insurance policy, or the costs were covered under programs managed by other federal agencies, such as the Federal Highway Administration. The persistence of these problems may in part be due to uneven staff training, the use of an informal process to review proposed projects, the inadequate or untimely review of completed projects, and the use of a management information system that makes reviews of programwide effectiveness difficult.

FEMA recognizes the need to ensure that its employees, particularly the temporary reserve staff in its disaster field offices, receive training in the appropriate application of the latest policies and information systems. To meet this need, the agency designed a credentialing program with minimum standards for the disaster personnel who make program and cost eligibility decisions that obligate federal funds for disaster projects. However, it has not implemented the program. In addition, according to FEMA officials, the agency does not have a single system that maintains up-to-date information on the training and work experiences of its disaster staff.

In fiscal year 1999, FEMA developed a comprehensive credentialing plan that provided a framework for evaluating the knowledge, skills, and abilities of its staff—including its permanent full-time employees as well as its temporary Disaster Assistance Employees—who are deployed during a disaster. FEMA expected that this plan would ensure that its employees would have the basic qualifications to perform their jobs and would make Public Assistance managers, applicants, and the public more confident about their performance in the field.
According to FEMA officials, although the credentialing program was formulated, it has not been implemented because of budget constraints and programmatic issues that need to be resolved, such as the number of job proficiency levels within job titles.

FEMA offers training for its Public Assistance staff at its Emergency Management Institute in Emmitsburg, Maryland. FEMA also offers to conduct training at a field office at the start of its disaster response effort. The Public Assistance budget for training has decreased from about $1.9 million for fiscal year 1999 to $725,000 for fiscal year 2001. In our review of several FEMA internal studies of the operations of individual disaster field offices during 1999 and 2000, we noted that field office training either was not timely or was not offered at all. Because the majority of disaster personnel are temporary reserve staff, providing training at a field office is the only viable means to train them.

According to FEMA officials, the agency currently does not have a single system that maintains up-to-date information on the training and work experiences of its disaster staff. For example, according to available data on formal training, only 20 percent of the staff have received training on NEMIS—the management information system staff are expected to use to document disaster projects—and only one region had over half of its staff trained to use the system. Agency officials told us that this measure does not capture the informal training that disaster staff receive in briefings, refresher courses, and condensed courses while at the disaster field office. Nevertheless, without implementing a comprehensive credentialing plan that tracks the training and experience of its employees, FEMA cannot ensure that all of its disaster personnel are appropriately prepared to make project eligibility determinations.
FEMA has not established a formal process for reviewing project worksheets to ensure that special considerations—such as environmental or historic issues, insurance coverage, or flood control—are addressed before the worksheets are approved. Although the agency has established procedures for applicants or FEMA staff to prepare the worksheets, it has left the review process up to the judgment of the FEMA staff in charge. According to a FEMA official, the agency has not formalized the review process because it wants to avoid a time-consuming sequence of reviews and fund projects as quickly as possible. We agree that eligible projects should be funded quickly, but some controls are necessary to ensure that proposed projects meet FEMA’s eligibility criteria and that their associated costs are reasonable.

During our review of project worksheets for a disaster in Nevada, we found that most of those for flood control projects had not been reviewed by a specialist on contract from the Army Corps of Engineers for that purpose. As a result, FEMA had no assurance that the proposed projects should be funded by FEMA instead of being referred to the U.S. Army Corps of Engineers. FEMA’s efforts to encourage more applicants to prepare their own project worksheets increase the importance of a systematic review process to ensure that proposed projects meet the agency’s criteria for eligibility and cost reasonableness before federal funds are obligated.

We found that small disaster projects do not always receive appropriate and timely validation of their estimated costs and that large projects are frequently not certified upon completion. Over 83 percent of Public Assistance projects are considered to be small projects and have been funded solely on the basis of their initial cost estimates. That funding is fixed, regardless of the final cost the applicant actually incurs.
FEMA reserves the right to validate 20 percent of an applicant’s small projects to ensure that all costs are eligible and reasonable. In our file reviews, however, we found little evidence of small project validations by FEMA staff.

FEMA relies on the states to review completed large projects (those exceeding $50,600) to certify that the applicant has completed the proposed work. FEMA reviews the project after its completion and may adjust the dollar amount of the grant to reflect the actual cost of the eligible work. We found, however, that states, because of their limited staff, often have large backlogs of projects awaiting final review. According to staff at 9 of FEMA’s 10 regional offices, about 50 percent of the state emergency management offices do not regularly submit their required quarterly reports on the certification of large projects. Most of them report that their heavy workload and/or lack of resources preclude them from certifying the completion of large projects promptly. As a result, FEMA’s review is delayed and the agency cannot ensure that the funds already expended on uncertified projects were reasonable and in compliance with applicable regulations and policies.

While FEMA’s primary information system—NEMIS—helps the agency manage projects during a disaster, opportunities exist to further develop the system as a management tool for the Public Assistance program. To help FEMA management achieve its program performance goals, NEMIS should have sound internal management controls. However, we found instances in which the system’s activity, application, and quality controls are limited. Although NEMIS collects and can provide information project by project, it provides only limited data for effective programwide analyses. In addition, the system does not automatically verify certain information that has been entered, and it can be unreliable, time-consuming, and difficult to use in a remote disaster environment, according to FEMA officials.
Project data may be lost or not entered as a result. Finally, FEMA’s reliance on temporary staff who may lack experience with the system or training in its use threatens the quality of the information it contains.

NEMIS is an agencywide system of hardware, software, telecommunications, and applications. It is designed to provide a new technology base to FEMA and its partners to carry out emergency management efforts. Its purpose is to support disaster staff in the field and to maximize the distribution of project worksheet information to Public Assistance grant applicants and regional office staff. According to FEMA officials, NEMIS allows concurrent and remote reviews of project worksheets that are developed in the field, thus improving their timeliness and quality. It also provides a single source of all project information that is useful for any necessary subsequent review.

However, while NEMIS can provide information on a project-by-project basis, it is severely limited in its ability to provide higher-level information that will help FEMA management to review the agency’s performance against measures and indicators for the Public Assistance program. FEMA officials informed us that field staff must record all modifications to project worksheets, including changes to project cost estimates, in a narrative field. As a result, it would be difficult for FEMA management to perform automated analyses of summary information in order to track the programwide costs of project modifications or assess the impact of revised Public Assistance policies. In addition, while the Public Assistance function of NEMIS has been upgraded since the system was implemented in August 1998, system problems still cause delays and inaccuracies in entering project information.
Our review of FEMA’s internal evaluations of disaster field office operations found many complaints from federal and state disaster personnel that the system is difficult to use or often is not working at all. As a result, data are not entered promptly or may not be entered at all. The Public Assistance portion of NEMIS also lacks common verification processes. For example, dates of key activities and reviews can be entered incorrectly because the system lacks automated error-checking processes to validate entries. Finally, because many field office staff are not trained to use NEMIS, information for the same disaster could be inconsistently entered from site to site and person to person. The potential for inconsistency impedes FEMA’s ability to have an accurate overview of its Public Assistance processes, performance, and field staff’s efforts. Insufficient staff training could also lead staff to spend more time using the system than would otherwise be the case and thus decrease their productivity.

The criteria FEMA uses for determining whether to recommend a presidential disaster declaration give the agency great flexibility to respond promptly to a wide variety of natural disasters. However, they are not necessarily indicative of state or local capability to respond effectively to a disaster without federal assistance. For this reason, we recommend that the Director of FEMA develop more objective and specific criteria to assess the capabilities of state and local governments to respond to a disaster. Specifically, the Director should consider replacing the per-capita measure of state capability with a more sensitive measure, such as a state’s total taxable resources. The Director should further consider whether a more sensitive measure would eliminate the need for a statewide $1 million threshold. At a minimum, the Director should consider adjusting the threshold for inflation and providing a more detailed rationale for whatever threshold is chosen.
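The recommended inflation adjustment is straightforward arithmetic: scale the fixed dollar threshold by the ratio of a price index in the current year to the index in the year the threshold was set. The sketch below uses hypothetical index values for illustration; neither the figures nor the function come from FEMA’s regulations:

```python
def inflation_adjusted_threshold(base_threshold, base_year_index, current_index):
    """Scale a fixed dollar threshold by a price index ratio.

    A $1.00-per-capita criterion set in one year loses stringency each
    year it is not adjusted; scaling by a price index preserves its
    real value.
    """
    return base_threshold * (current_index / base_year_index)

# Hypothetical index values: if prices have risen 25 percent since the
# threshold was set, the $1.00 criterion becomes $1.25.
adjusted = inflation_adjusted_threshold(1.00, 160.0, 200.0)  # 1.25
```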
While FEMA has clarified its criteria for individual project eligibility, it still experiences problems with the application of the criteria. Given the magnitude of the funds involved, we recommend that the Director of FEMA do the following:

Develop internal control processes for ensuring appropriate reviews of disaster project worksheets—especially when specialists’ reviews are required—to ensure that proposed projects meet eligibility requirements before receiving final approval and funding.

Reconsider budgetary priorities to determine if a higher priority should be assigned to implementing a credentialing and training program for federal disaster staff that focuses on the knowledge, skills, and abilities needed for each of the various roles involved in disaster management.

Establish a plan to identify recurring problems identified by internal and external audits and take appropriate actions to minimize their recurrence.

We provided FEMA with a draft of this report for its review and comment. FEMA found our observations about the disaster declaration process timely and valuable for its review of disaster declaration criteria. FEMA also commented that its current procedures were designed to ensure that the eligibility of proposed projects is appropriately reviewed and validated. While we acknowledge that this is the intent of these procedures, our review found areas where the procedures did not always accomplish their intent.

In response to our concerns that all disaster staff may not receive appropriate preparation for making project eligibility determinations, FEMA stated that all disaster staff have attended its basic training class, which provides such instruction. We believe, however, that FEMA should consider giving higher priority to implementing a credentialing program such as the one the agency has designed. The program would establish both training and experience requirements appropriate for federal disaster staff in all job positions.
Finally, FEMA responded to our assessment of the availability of FEMA eligibility review processes and procedures, stating that formal approval procedures exist for projects with special considerations. Our analysis found, however, that the requirements for appropriate reviews by specialists are not always followed. In the flood control cases we referred to, we found no evidence of review by a flood control specialist, although the review queue called for such a review and the Public Assistance Coordinator approved the project. We have modified our original language to recommend that FEMA develop internal controls to ensure consistent compliance with its eligibility review processes. FEMA suggested additional technical clarifications that we incorporated into the report, as appropriate. The full text of FEMA’s comments can be found in appendix II.

To review the adequacy of the criteria FEMA uses to formulate a recommendation to the President on whether a presidential disaster declaration is warranted and is consistent with regulatory requirements, we reviewed (1) the applicable laws, regulations, and FEMA policies on conducting preliminary damage assessments; (2) FEMA’s efforts to develop criteria for reviewing requests for presidential disaster declarations; and (3) relevant GAO and FEMA Office of Inspector General reports. In addition, we analyzed available data on damage estimates to identify any minimum criteria that might have been used to recommend a disaster declaration. We also analyzed available data on damage estimates to determine whether disasters met statewide financial criteria for recommending a disaster declaration. We did not, however, perform independent assessments of the degree to which individual disasters met other qualitative criteria, such as significant localized impact or a recent history of multiple disasters.
To determine whether FEMA ensures that proposed Public Assistance projects meet eligibility criteria, we (1) reviewed FEMA’s Public Assistance program policies, procedures, and guidance; (2) assessed the extent to which the program’s policies and procedures were disseminated and made available to staff that make eligibility determinations; (3) analyzed the availability of the training provided to staff; and (4) reviewed files on selected projects and interviewed managers and staff to assess how effectively the program’s policies and procedures were used to determine eligibility.

We also reviewed FEMA’s internal controls and oversight processes to determine whether they provided adequate assurance that disaster funds were consistently used in an effective and efficient manner. For instance, we (1) looked for any oversight or reviews to verify that project worksheets prepared for proposed projects complied with policy; (2) reviewed the timeliness and adequacy of efforts to validate and certify completed projects; and (3) assessed FEMA’s efforts to identify recurring systemic problems and take corrective actions to minimize them in future disasters. In addition, we obtained access to the program’s case management file to review the available documentation supporting project eligibility determinations.

In conducting our review, we interviewed officials in FEMA’s Response and Recovery Directorate, Infrastructure Support Division in Washington, D.C., and the equivalent FEMA personnel in three regional offices: Atlanta, Georgia; Denton, Texas; and San Francisco, California. In addition, we conducted a structured telephone interview with the Infrastructure Branch chiefs in 9 of FEMA’s 10 regional offices and its Caribbean office. We also interviewed officials from FEMA’s Information Technology Services Directorate and auditors from its Office of Inspector General.
We performed our work from August 2000 through August 2001 in accordance with generally accepted government auditing standards.

We are sending copies of this report to the appropriate congressional committees; the Director of the Federal Emergency Management Agency; and the Director of the Office of Management and Budget. We will also make copies available to others upon request. If you have any questions about this report, please contact Robert E. White or me at (202) 512-2834.

Over the last several years, the Congress has tried several times to have FEMA establish clear criteria for evaluating a governor’s request for a disaster declaration by better defining the state’s capability to respond to a disaster. In 1993, the House Appropriations Committee noted “the tendency on the part of the Federal government to declare more and more disasters to be eligible for disaster assistance funds” and directed FEMA to provide a detailed cost/benefit analysis to the Committee by October 1, 1993. FEMA responded, in September 1993, that a reliable cost/benefit analysis of the disaster declaration process would not be possible. In addition, the Vice President’s National Performance Review noted that FEMA needed to develop objective indicators of what constituted a major disaster. The National Performance Review further noted that those indicators should account for both the costs of the disaster and a state’s ability to meet those costs.
The Senate Bipartisan Task Force on Funding Disaster Relief, established by Public Law 103-211, noted in its report that one approach to modifying federal disaster assistance and possibly reducing federal disaster assistance costs would be to establish “more explicit and/or stringent criteria for providing Federal disaster assistance.” That report also cited a 1994 FEMA Office of Inspector General report stating that (1) neither a governor’s findings nor FEMA’s analysis of capability were supported by standard factual data or related to published criteria and (2) FEMA’s process did not ensure equity in disaster decisions because the agency did not always review requests for declarations in the context of previous declarations.

In September 1995, we reported that although the Stafford Act did not specify criteria for evaluating a governor’s request for a declaration, FEMA used an informal process that generally considered various factors in making a recommendation to the President. Some of the factors FEMA considered were the number of homes that were destroyed or sustained major damage, the extent to which the damage was concentrated or dispersed, the total estimated cost to repair the damage, the extent to which the damage was covered by insurance, the level of assistance available from other federal agencies, the state and local governments’ abilities to deal with disasters, the level of assistance available from voluntary organizations, the extent of health and safety problems, and the extent of damage to facilities providing essential services (e.g., medical and police services and utilities).

The Senate Appropriations Committee remarked on the lack of specific disaster declaration criteria in its report on FEMA’s appropriations for fiscal year 1999. In that report, the Committee directed FEMA to make several administrative changes to reduce disaster relief costs, including the development of specific disaster declaration criteria.
To develop specific declaration criteria, FEMA formed a working group with the National Emergency Management Association. However, FEMA faced a legislative restriction precluding any geographic area from receiving assistance “solely by virtue of an arithmetic formula or sliding scale based on income or population.” This working group developed several indicators for evaluating governors’ requests for disaster declarations and issued these indicators as a concept paper in September 1998. On January 26, 1999, FEMA published the proposed declaration criteria in the Federal Register. Those proposed rules were similar to the indicators FEMA had used informally.

At the request of the Senate Appropriations Subcommittee, FEMA’s Office of Inspector General reviewed the proposed regulations and issued a report in March 1999. The report questioned FEMA’s use of a fixed per-capita figure as a means to determine a state’s capability and noted that, without the means to measure this capability, FEMA’s ability to determine whether disaster assistance was warranted was “hampered if not negated altogether.” The report recommended that FEMA use total taxable resources in place of its per-capita cost measure to better reflect the state’s economic health and ability to raise public revenues to cover the costs of a disaster.
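The substitution the Inspector General recommended can be illustrated with hypothetical figures: instead of dividing damages by population alone, damages are also compared against a state’s total taxable resources (TTR), so two states with identical per-capita damages but different fiscal capacity are distinguished. The function and the dollar amounts below are illustrative assumptions, not data from the report:

```python
def damage_burden(estimated_damages, population, total_taxable_resources):
    """Return two burden measures for the same disaster:
    per-capita damages and damages as a share of total taxable
    resources (TTR)."""
    return (estimated_damages / population,
            estimated_damages / total_taxable_resources)

# Two hypothetical states with identical per-capita damages ($2.00)
# but very different fiscal capacity:
lower_capacity = damage_burden(2_000_000, 1_000_000, 20_000_000_000)
higher_capacity = damage_burden(2_000_000, 1_000_000, 60_000_000_000)
# The per-capita measure treats the states alike; the TTR measure
# shows the lower-capacity state bearing three times the burden.
```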
Additionally, the report identified numerous ways to improve the proposed declaration regulations, including publishing a county-level per-capita measure that could be used as an indicator to better establish the disaster’s localized impact; specifying how FEMA intended to determine the amount of insurance coverage and the source(s) of its information, as well as clarifying how insurance deductibles would be measured; citing the criteria that would be used when considering state and local mitigation measures to evaluate the need for assistance, including record-keeping requirements to support states’ claims; and prescribing a limit on the impact of multiple past disasters, including further defining what other events and emergencies could be included or excluded from FEMA’s declaration evaluation, as well as establishing state record-keeping requirements.

The Inspector General’s report further noted that a significant number of disasters were declared even though the estimated costs of disaster damage fell below the statewide financial thresholds historically used by FEMA. Specifically, the report stated it examined 192 declarations for the 10-year period from October 1988 through September 1998 and found that 40 percent were declared even though the state per-capita damage figure had not met FEMA’s statewide financial threshold of $1 per capita of damages. The report also identified the most common factors for recommending a declaration when the disaster cost estimates were below the minimum financial criteria:

Special populations (e.g., poor, elderly) lived in affected areas.

Preliminary damage assessments were ongoing, and the cost estimates were not yet complete.

The disaster had a heavy localized impact.

The state had no assistance program.

Despite the Inspector General’s report and other criticisms, FEMA published its final rule on September 1, 1999.
The criteria for disaster declarations remained substantially unchanged from those the agency proposed in January.

FEMA has also begun to address the issue of insurance requirements for public buildings. In July 1999, FEMA submitted draft regulations to the Office of Management and Budget proposing that, under the Public Assistance program, grant funding for buildings damaged in a disaster be made available only to state and local agencies and other public entities that maintain specified minimum levels of insurance coverage. Currently, the Public Assistance program requires insurance coverage only as a postdisaster condition. If a public facility is not insured when a disaster strikes, the responsible agency must agree to procure insurance against future disasters as a condition for receiving FEMA assistance. After receiving comments from us and others, FEMA decided to wait for the completion of a comprehensive study of what insurance requirements are reasonable before proceeding further.

In its report on FEMA’s fiscal year 2000 appropriations, the Senate Appropriations Committee also expressed concern that the indicators the agency proposed to guide declaration recommendations were “no more stringent than those used in the past.” The Committee further noted that it expected FEMA to apply the criteria it had published in a consistent manner and to strengthen the criteria over time, while recognizing the need to maintain some flexibility for unique circumstances.

The following are GAO’s comments on the Federal Emergency Management Agency’s letter dated August 17, 2001.

1. While figure 1 indicates that the number and estimated cost of disasters involving Public Assistance funding have both declined recently, we note that, because disasters are extremely variable in their frequency and severity, it is premature to suggest that the recent decline in their number and cost constitutes a downward trend that could be expected to continue.
Furthermore, our intent is to ensure that the criteria for both disaster declarations and eligibility determinations are appropriate to each case, independent of any trends in the aggregate cost of disasters.
2. The draft has been modified to clarify the relationship between a disaster declaration and other federal assistance programs.
3. Suggested deletion accepted.
4. Sentence has been clarified.
5. Word inserted.
6. Suggested examples included.
7. See response to comment 8.
8. As our report recommended, we believe that FEMA should consider giving higher priority to implementing a credentialing program such as the one the agency has designed. The program would establish both training and experience requirements appropriate to each level of federal disaster staff. Such a program would also include a comprehensive recordkeeping system that would ensure that all staff meet these requirements.
9. The requirement that all projects with special considerations receive appropriate reviews by specialists is apparently not always followed. For example, in a major disaster we refer to, we found no evidence of review by a flood control specialist, although the review queue called for such a review and the Public Assistance Coordinator approved the projects.
10. See response to comment 9. The final sentence has been modified to clarify the respective roles of FEMA and the U.S. Army Corps of Engineers.
11. Under FEMA procedures, the reviewer is required to complete a validation worksheet identifying the projects reviewed and any associated eligibility and cost variances. Our case review of both paper files and NEMIS records found very few validation worksheets. FEMA field personnel also acknowledged that the validation process is not always conducted as required.
12. Word deleted.
13. See response to comment 9. We have modified the recommendation to focus on compliance with, rather than the development of, a policy that ensures appropriate specialist reviews.
In addition to those named above, Patricia Moore, Richard B. Smith, Thomas Barger, Jr., Curtis L. Groves, and John Vocino made key contributions to this report.
Since 1990, the Federal Emergency Management Agency (FEMA) has provided more than $27 billion in disaster assistance, more than half of which was spent for public assistance projects, such as repairs of damaged roads, government buildings, utilities, and hospitals. FEMA uses established criteria to determine whether to (1) recommend that the President declare a disaster and (2) once a disaster has been declared, approve and fund Public Assistance projects. In 1999, FEMA published formal criteria for recommending the presidential approval of disaster declarations. These criteria include both minimum financial thresholds and other qualitative measures that FEMA applies in deciding whether to recommend presidential approval. These criteria do not necessarily indicate a state's ability to pay for the damage because they do not consider the substantial differences in states' financial capacities to respond when disasters occur. As a result, federal funds may be provided for some disasters when they are not needed. Problems with applying FEMA's criteria remain. In part, these problems may persist because many of the staff assigned to disaster field offices who make eligibility decisions are temporary and may not have the skills and training needed to make appropriate decisions. FEMA has developed a credentialing program to establish qualifications and training requirements for these staff but has not implemented this program. FEMA officials said that budgetary and programmatic factors have delayed implementation. In addition, FEMA's review process does not ensure that all projects are reviewed by the most knowledgeable officials. FEMA also lacks centralized, quantified information that would be helpful for managing the Public Assistance program. Its information system--essentially an electronic filing cabinet--stores information project by project and does not provide effectively for programwide analysis. 
Furthermore, the system is unreliable and difficult to use, according to FEMA officials. As a result, data are lost or never entered.
Despite several revisions to schedule milestones since the program’s inception, the Chem-Demil Program is still unable to meet these milestones because of unanticipated delays. Most incineration sites have missed important milestones established in 2001. Delays at Anniston, Umatilla, and Pine Bluff have already resulted in their missing the 2001 schedule milestones to begin chemical agent destruction operations (operations phase). Johnston Atoll has missed its schedule milestone for shutting down the facility (closure phase). Although Tooele has not missed any milestones since the 2001 schedule was issued, the site has undergone substantial delays in destroying its stockpile, primarily because of a safety-related incident in July 2002. If additional delays occur at the Tooele site, it could exceed its next milestone as well. Table 1 shows the status of the incineration sites that will miss 2001 schedule milestones. Many of the recent delays at the incineration sites have resulted from incidents during operations, environmental permitting problems, community protection concerns, and funding issues—a trend that we identified in previous reports on the program. Among the events that have caused delays at incineration sites since 2001 are the following: Incidents during operations. At Tooele, a chemical incident involving a plant worker who came into contact with a nerve agent while performing routine maintenance led to the suspension of agent destruction operations from July 2002 to March 2003. An investigation attributed the incident to inadequate or poorly followed worker safety procedures, and a corrective action plan, including an improved safety plan, was instituted before operations resumed. Since operations restarted in March 2003, Tooele has experienced several temporary shutdowns. Environmental permitting. Several environmental permitting issues have delayed the start of agent destruction operations at Umatilla and Anniston.
At Umatilla, the delays stemmed from several unanticipated engineering changes, related to software reprogramming and design changes, that required permit modifications, and from a shutdown by state regulators after furnaces produced unexpectedly high amounts of heavy metals during surrogate agent testing. At Anniston, delays occurred because state environmental regulators did not accept test results for one of the furnaces because the subcontractor did not follow state permit-specified protocols. Community protection. Concerns about emergency preparedness for local communities have led to additional delays at Anniston. These concerns included the inadequacy of protection plans for area schools and for special needs residents (e.g., elderly and disabled individuals) who would have difficulty in an evacuation. Although we reported on this issue in July 1996 and again in August 2001, and a senior DOD official identified it as a key concern in September 2001, the Army had difficulty satisfactorily resolving the issue with key state stakeholders. As a result, operations did not begin until August 2003. Funding. Delays at Pine Bluff and Johnston Atoll occurred because DOD redirected fiscal year 2002 destruction program funds to acquire $40.5 million worth of additional emergency protection equipment for Anniston. To cover this unfunded budget expense, the Army reduced Pine Bluff’s budget by $14.9 million and Johnston Atoll’s budget by $25.1 million, leading to systemization and closure milestone slippages, respectively, at these sites. Program officials told us that the total cost of this schedule slip would ultimately be $116 million due to the extended period before closure. The program is likely to face unfunded requirements as programwide funding requests continue to exceed budgeted amounts.
As of October 2003, according to preliminary estimates from FEMA, unfunded CSEPP requirements for all sites are expected to amount to $39.4 million and $49.0 million for fiscal years 2004 and 2005, respectively. Unlike the incineration sites, the two bulk-agent-only sites, Aberdeen and Newport, have experienced delays but have not breached their schedule milestones. In 2002, DOD approved using an alternative technology (neutralization), instead of incineration, at these two sites. This technology is expected to accelerate the rate of destruction. The Army estimated that this process would reduce the scheduled end of operations at both sites by 5 years, from 2008 to 2003 at Aberdeen and from 2009 to 2004 at Newport. However, Aberdeen has encountered unanticipated problems with the removal of residual agent from bulk containers and has extended its planned completion date by 6 months, from October 2003 to March 2004. In addition, Newport has faced construction delays and community resistance to offsite treatment of waste byproducts. As a result of these delays, Newport has extended its planned start date for agent operations by 5 months, from October 2003 to February 2004. At two sites, Pueblo, Colorado, and Blue Grass, Kentucky, no milestones were set in the 2001 schedule because DOD had not yet selected a destruction technology. DOD has now selected a destruction technology for these sites, but it made these decisions several months later than estimated. More importantly, DOD has set initial schedule milestones for these two sites that go beyond the extended April 2012 CWC deadline. According to DOD officials, these milestones are preliminary and will be reevaluated once contractors finish initial facility designs.
The Chem-Demil Program has faced continued delays largely because DOD and the Army have not yet developed a risk management approach to proactively anticipate and address potential problems that could adversely affect program schedules, costs, and safety. Such an approach could also leverage knowledge of potential problems gained at other sites. Instead, according to a DOD official, the program has used a crisis management approach, which has forced it to react to, rather than control, issues. The program had drafted a plan in June 2000 that was intended to address these issues. However, according to a program official, this plan was never approved or implemented because of a change in management in 2001. The delays and schedule extensions have contributed directly to program cost growth, according to program officials. As a result, DOD’s total program cost estimate grew from $15 billion to $24 billion between 1998 and 2001. (See fig. 1.) Because of delays encountered since the 2001 revisions, the Army is now in the process of developing new milestones that will extend beyond those adopted in 2001. According to an Army official, the program will use events that have occurred since 2001 in presenting new cost estimates to DOD for preparation of the fiscal year 2005 budget submission. Program officials told us that, as of October 2003, they estimated costs had increased by an additional $1.4 billion, and this estimate is likely to rise further as additional factors are considered. Although the United States met the first two chemical weapons treaty deadlines, the continuing delays jeopardize its ability to meet the final two deadlines. (See table 2.) Since reaching the 2002 deadline to destroy 20 percent of the stockpile in July 2001, the Chem-Demil Program has been able to destroy only an additional 3 percent of the stockpile.
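The arithmetic behind the treaty-deadline pressure can be sketched from the figures in this testimony. This is a back-of-the-envelope illustration (the variable names are ours), not an official program projection:

```python
# Percentages of the total chemical agent stockpile, from this testimony.
destroyed_by_july_2001 = 20   # destroyed when the 2002 CWC deadline was met
destroyed_since_then = 3      # additional percent destroyed through October 2003
april_2004_target = 45        # percent the CWC requires destroyed by April 2004

destroyed_to_date = destroyed_by_july_2001 + destroyed_since_then
shortfall = april_2004_target - destroyed_to_date
print(f"Destroyed to date: {destroyed_to_date}%; "
      f"still required by April 2004: {shortfall}%")
```

Eliminating a further 22 percent of the stockpile in roughly 6 months would require a destruction rate far above what the program has achieved since mid-2001, which is why an extension of the 2004 deadline was sought.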
In order to meet the April 2004 CWC deadline to destroy 45 percent of the stockpile, the program would have to eliminate an additional 22 percent of the stockpile within the next 6 months. Because the program will likely not be able to achieve this rate of destruction, the United States has asked for an extension of the 2004 deadline. According to current destruction schedules, the United States will not meet the 2007 deadline to eliminate 100 percent of the stockpile. As a result, the United States will likely have to ask for an extension of the 2007 deadline to complete the destruction of the entire stockpile. The CWC allows extensions of up to 5 years beyond the 2007 deadline. Unless the program fixes the problems that are causing schedule delays, the United States also risks not meeting this deadline, even if it is extended to 2012. Despite recent efforts to improve the management and streamline the organization of the Chem-Demil Program, the program continues to falter because several long-standing leadership, organizational, and strategic planning weaknesses remain unresolved. The lack of sustained leadership has undercut decision-making authority and obscured accountability. The program’s complex structure, with many lines of authority, has left roles and responsibilities unclear. Finally, the program lacks an overarching, comprehensive strategy to guide and integrate its activities and monitor performance. The Chem-Demil Program’s lack of sustained leadership above the program level is underscored by the shifts in oversight responsibility, which have occurred three times between DOD and the Army during the past two decades. The most recent change took place in 2001, when oversight responsibility for the program shifted back to DOD’s Office of the Secretary of Defense. Table 3 summarizes the changes. These shifts in oversight responsibilities affected the continuity of program decision making and obscured accountability.
Each time a different office assumed major decision authority, the program’s emphasis shifted, and initiatives that had been started were often not completed. For example, when the Army had oversight responsibility for the program, it established a memorandum of understanding with FEMA to clarify each of their roles and responsibilities related to CSEPP. However, after DOD assumed the program’s oversight responsibilities in 2001, DOD did not follow the protocols for coordination that had been established in the memorandum, according to FEMA and DOD officials. As a result, DOD provided funds for emergency preparedness items without having adequate plans for distribution, which delayed the process. This shift in oversight responsibilities from the Army to DOD also left state and local community officials and other stakeholders uncertain as to the credibility of federal officials. According to FEMA and Army officials, coordination between the two agencies has improved in the last few months, and efforts are being made to repair relationships with community and state stakeholders. Similar problems have also occurred within the Army as program leadership has changed. Three different officials at the Assistant Secretary level have held senior leadership positions since December 2001. In addition, five officials have served as the Deputy Assistant Secretary of the Army (Chem-Demil) during that time. The program manager’s position remained vacant for nearly a year, from April 2002 to February 2003. However, only 4 months after the position was filled, the program manager resigned and the Army named a replacement. Frequent shifts in key leadership positions have led to several instances where the lack of continuity affected decision making and obscured accountability. For example, in June 2002, a program official promised to support future funding requests for emergency preparedness equipment from one community, but his successor did not fulfill this promise.
Other communities viewed the agreement with one community as an opportunity to substantially expand their own funding requests. The lack of sustained leadership makes it unclear who is accountable when program commitments are made and not fulfilled. Moreover, when key leaders do not remain in their positions to develop the needed long-term perspective on program issues and effectively implement program initiatives, it is difficult to maintain program progress and ensure accountability for leadership actions. As our 2003 report documents, the Army recently reorganized the program. But this change in management structure has neither streamlined the program’s complex organization nor clarified roles and responsibilities. The establishment of the Chemical Materials Agency (CMA) in January 2003 has left the Director reporting to two different senior Army organizations, which is one more than under the previous structure. This divided reporting approach is still not fully developed, but it has the potential to adversely affect program coordination and accountability. The reorganization has also divided the responsibility for various program phases between two offices within CMA. One organization, the Program Manager for the Elimination of Chemical Weapons, will manage the first three phases (design, construction, and systemization) for each site, and a newly created organization, the Director of Operations, will manage the final two phases (operations and closure). This reorganization changes the cradle-to-grave management approach that was used to manage sites in the past and has blurred responsibilities for officials who previously provided support in areas such as quality assurance and safety. Moreover, the reorganization did not address two program components—the Assembled Chemical Weapons Alternatives (ACWA) program and community-related CSEPP. DOD will continue to manage ACWA separately from the Army, as congressionally directed.
In addition, the Army will continue to manage CSEPP jointly with FEMA. While DOD and the Army have issued numerous policies and guidance documents for the Chem-Demil Program, they have not developed an overarching, comprehensive strategy or an implementation plan to guide the program and monitor its progress. This is contrary to the principles that leading organizations embrace to effectively implement and manage programs. Some key aspects of an approach typically used to effectively manage programs include promulgating a comprehensive strategy that includes a clearly stated mission, long-term goals, and methods to accomplish these goals. An implementation plan that includes annual performance goals, measurable performance indicators, and evaluation and corrective action plans is also important. According to DOD and Army officials, the Chem-Demil Program has relied primarily on guidance and planning documents related to the acquisition process. However, in response to our recent recommendation that they prepare such a strategy and plan, DOD stated that it is in the initial stages of doing so and estimates completion in fiscal year 2004. Since our 2001 report, the Army and FEMA have helped state and local communities become better prepared to respond to chemical emergencies. Based on the states’ self-assessments and FEMA’s reviews, all 10 states with chemical storage sites located within them or nearby are now considered close to being fully prepared to respond to a chemical emergency. This is a marked improvement from the status we reported in 2001, when 3 states reported that they were far from being prepared. Now, 6 of the 10 states report that their status is fully prepared, and the remaining 4 are close to being fully prepared. However, these statuses are subject to change because the states and communities themselves can revise or expand their agreed-upon emergency preparedness needs.
They can make these changes because the “maximum protection” concept that governs CSEPP is open to interpretation. As a result, they can appear to be less prepared than before. For example, Oregon certified that it was fully prepared, but now has requested additional emergency equipment. This request has changed Oregon’s self-reported preparedness status from fully prepared to incomplete. Despite these accomplishments, CSEPP costs continue to rise because, according to Army and FEMA officials, state and local communities may add to their emergency requirements beyond approved requests. Army and FEMA officials explain that the states often identify and expand their requirements, especially as destruction facilities move closer to the start of the operations phase. For example, the states of Colorado, Alabama, and Oregon have all requested funds for infrastructure, including roads and bridges. In June 2002, Oregon certified that its community readiness was adequate and recommended permit approval to allow test burns at Umatilla. Since that time, Oregon has asked for additional emergency preparedness support that exceeds its CSEPP budget. This request follows a pattern of substantially increasing funding requests at the start of the operations phase, as occurred at Anniston in 2001 when it received $40.5 million for additional CSEPP items. Programwide, new requirements continue to exceed approved CSEPP funding levels. FEMA has little control over the additional funding requests made by the states. As of October 2003, FEMA had identified $39.4 and $49.0 million in unfunded requirements for fiscal years 2004 and 2005, respectively. (See table 4.) 
In our August 2001 report, we recommended that the Army and FEMA (1) provide technical assistance, guidance, and leadership to the three states (Alabama, Indiana, and Kentucky) with long-standing emergency preparedness issues to resolve their concerns; (2) provide all states and their communities with training and assistance in preparing budget and life-cycle cost estimates and provide guidance and plans on reentry; and (3) establish specific measures of compliance with the benchmarks to more evenly assess performance and to correctly identify requirements. The Army is continuing to provide assistance to CSEPP states and communities as requested by FEMA. FEMA now participates more often in local community CSEPP activities and sponsors an annual CSEPP conference in an effort to improve its working relationships. FEMA has also provided software to simplify development of CSEPP financial reporting documents and has published a Reentry and Recovery Workbook. The workbook fills a void in state and local guidance for emergency responders to follow in the event of a chemical emergency. Lastly, FEMA expanded its capability assessment readiness tool to assist local communities in quantifying benchmark scores. We recommended in our September 2003 report that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics, in conjunction with the Secretary of the Army, to (1) develop an overall strategy and implementation plan for the chemical demilitarization program and (2) implement a risk management approach that anticipates and influences internal and external factors that could adversely impact program performance. DOD concurred with our recommendations. It said that it was in the initial stages of developing an overall strategy and implementation plan and estimated that it would be completed in fiscal year 2004. 
It also said that CMA will review the progress of an evaluation of several components of its risk management approach within 120 days and then that DOD would evaluate the results and determine any appropriate action. In our 2001 report, we recommended that the Army and FEMA make improvements to the program, and they have implemented those recommendations. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or members of the Subcommittee may have. For future questions regarding this testimony, please contact me at (202) 512-4300. Individuals making key contributions to this testimony include Donald Snyder, Rodell Anderson, Bonita Oden, John Buehler, Nancy Benco, and Mike Zola. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since its inception in 1985, the Chemical Demilitarization (Chem-Demil) Program has been charged with destroying the nation's large chemical weapons stockpile. After years of planning and building new facilities, the program started destroying the stockpile in 1990. As of October 2003, the program had destroyed 26 percent of the 31,500-ton agent stockpile, and its total estimated cost to destroy the entire stockpile is more than $25 billion. This testimony summarizes GAO's September 2003 report and addresses the following issues: (1) the status of schedule milestones and cost estimates, (2) the impact of the current schedule on the Chemical Weapons Convention (CWC) deadlines, (3) the challenges associated with managing the program, and (4) the status of the Chemical Stockpile Emergency Preparedness Program (CSEPP). The Chem-Demil Program faces schedule delays and higher costs, but it has improved emergency preparedness in communities near the sites. In 2001, the Chem-Demil Program extended its schedule milestones and increased its cost estimates from $15 billion to about $24 billion. Since then, nearly all sites have experienced delays stemming from problems such as plant safety issues, environmental requirements, delays in approving emergency preparedness plans, and funding shortfalls. The program needs a risk management plan to mitigate problems affecting program schedules, costs, and safety. Program officials say the delays have raised the cost estimates by an additional $1.4 billion, to more than $25 billion as of September 2003. Based on current schedule slippages, GAO believes that costs will grow higher and further delays will occur. Because of schedule delays, the United States will not meet CWC's April 2004 deadline to destroy 45 percent of the stockpile, and it risks not meeting the original 2007 deadline to complete destruction of the entire stockpile.
Unless the program fixes the problems causing delays, the United States also risks not meeting CWC's deadline of 2012, if extended. The program has suffered from several long-standing management and organizational issues. The lack of sustained leadership has undercut decision-making authority and obscured accountability. The program's complex structure, with multiple lines of authority, has left roles and responsibilities unclear. It does not have an overarching, comprehensive strategy to guide and integrate its activities and monitor its performance. The Army and the Federal Emergency Management Agency have helped state and local communities become better prepared to respond to chemical emergencies. Despite these gains, CSEPP costs are rising because some states have expanded their preparedness requests beyond the approved budgets. These requests amount to $88 million for fiscal years 2004 and 2005.
Both DB and DC plans operate in a voluntary system with tax incentives for employers to offer a plan and for employees to participate. In the past, DC plans, such as 401(k) plans, were supplemental to DB plans. However, over the past several decades, there has been a shift in pension plan coverage; the number of DC plans has increased while the number of DB plans has declined. Today, DC plans are the dominant type of private-sector employee pension. Compared to DB plans, DC plans offer workers more control over their retirement asset management and greater portability of their retirement savings, but they also shift much of the responsibility and certain risks onto workers. Workers generally must elect to participate in a plan and accumulate savings in their individual accounts by making regular contributions over their careers. Participants typically choose how to invest plan assets from a range of options provided under their plan and accordingly face investment risk. There are several different categories of DC plans, but most are types of cash or deferred arrangements in which employees can direct pre-tax dollars, along with any employer contributions, into an account, with any asset growth tax-deferred until withdrawal. One option available under some 401(k) plans is automatic enrollment, under which workers are enrolled in a 401(k) plan automatically unless they explicitly choose to opt out. However, automatic enrollment has not been a traditional feature of 401(k) plans and, prior to 1998, plan sponsors feared that adopting automatic enrollment could lead to plan disqualification. In 1998, the Internal Revenue Service (IRS) addressed this issue by stating that a plan sponsor could automatically enroll newly hired employees and, in 2000, clarified that automatic enrollment is permissible for current employees who have not enrolled.
Nonetheless, a number of considerations inhibited widespread adoption of automatic enrollment, including concerns about liability in the event that an employee’s investments under the plan did not perform satisfactorily and concerns about state laws that prohibit withholding employee pay without written employee consent. More recently, provisions of the Pension Protection Act of 2006 (PPA) and subsequent regulations further facilitated the adoption of automatic enrollment by providing incentives for doing so and by protecting plans from fiduciary and legal liability if certain conditions are met. In September 2009, the Department of the Treasury announced IRS actions designed to further promote automatic enrollment and the use of automatic escalation policies. The Employee Retirement Income Security Act of 1974 (ERISA), as amended, defines and sets certain standards for employee benefit plans, including 401(k) plans, sponsored by private-sector employers. ERISA establishes the responsibilities of employee benefit plan decision makers and the requirements for disclosing information about plans. ERISA requires that plan fiduciaries, which generally include the plan sponsor, carry out their responsibilities prudently and do so solely in the interest of the plan’s participants and beneficiaries. The Department of Labor’s (Labor) Employee Benefits Security Administration (EBSA) is the primary agency responsible for enforcing Title I of ERISA and thereby protecting private-sector pension plan participants and beneficiaries from the misuse or theft of pension assets. EBSA conducts civil and criminal investigations of plan fiduciaries and service providers to determine whether the provisions of ERISA or other relevant federal laws have been violated. In addition to Labor’s oversight, the Securities and Exchange Commission (SEC) provides oversight for 401(k) investments.
For example, the SEC, among other responsibilities, regulates registered securities, including company stock and mutual funds, under securities law. One issue of concern with DC plans is that participation and saving rates have been low. In 2007, we reported that the majority of U.S. workers, in all age groups, did not participate in DC plans with their current employers. In fact, only about half of all workers participate in any type of employer-sponsored retirement plan at any given time. According to data from the Current Population Survey, about 48 percent of the total U.S. workforce was not covered by an employer-sponsored plan in 2007. About 40 percent worked for an employer that did not sponsor a plan, and about 8 percent did not participate in the plan that their employer sponsored. Certain segments of the working population, such as lower-income workers, younger workers, workers employed by smaller companies, and part-time workers (who typically lack coverage compared with full-time workers), have consistently had much lower rates of employment with plan-sponsoring employers, and lower participation rates, than the working population overall. According to our analysis of the 2004 Survey of Consumer Finances, only 62 percent of workers were offered a retirement plan by their employer, and 84 percent of those offered a retirement plan participated. Participation rates were even lower for DC plans: only 36 percent of working individuals participated in a DC plan with their current employers at the time of our report. Although our analysis focused on DC plans as a group, 401(k) plans make up the vast majority of DC plans. At the household level, participation rates were also low; only 42 percent of households had at least one member actively participating in a DC plan. Further, only 8 percent of workers in the lowest income quartile participated in DC plans offered by their current employer.
Participation rates are low partly because not all employers offer a retirement plan, and even when employers offer such plans, workers may not participate. Some small employers are hesitant to sponsor retirement plans because of concerns about cost. In addition, DC participation rates for the U.S. workforce may be low because some employers sponsor a DB plan rather than a DC plan. When companies do sponsor employer plans, some workers may not be eligible to participate in their employers' plan because they have not met the plan's minimum participation requirements. In addition, workers may choose not to enroll, or delay enrolling, in a retirement plan for a number of reasons. For example, they may think—in some cases, incorrectly—they are not eligible. They may also believe they cannot afford to contribute to the plan and, for low-income workers, it may be difficult for them to contribute. Also, some may be focused on more immediate savings objectives, such as saving for a house. Many nonparticipants may not have made a specific decision, but rather fail to participate because of a tendency to procrastinate and follow the path that does not require an active decision. We also found that, for workers who participated in DC plans, plan savings were low. The median total DC account balance was $22,800 for individual workers with a current or former DC plan and $27,940 for households with a current or former DC plan. We reported that the account balances of lower-income and older workers were of particular concern. For example, workers in the lowest income quartile had a median total account balance of only $6,400. Older workers, particularly those who were less wealthy, also had limited retirement savings. For example, those aged 50 through 59 and at or below the median level of wealth had median total savings of only $13,800. The median total savings for all workers aged 50 through 59 was $43,200. 
We noted that the low level of retirement savings could be occurring for a couple of reasons. Workers who participated in a plan had modest overall balances in DC plans, suggesting a potentially small contribution toward retirement security for most plan participants and their households. For individuals nearing retirement age, total DC plan balances were also low, because DC plans were less common before the 1980s and older workers likely would not have had access to these plans their whole careers. Given trends in coverage since the 1980s, older workers close to retirement age were more likely than younger ones to have accrued retirement benefits in a DB plan. In addition, older workers who rely on DC plans for retirement income may also not have time to substantially increase their total savings without extending their working careers, perhaps for several years. Further, the value of the income tax deferral on contributions is smaller for lower-income workers than for similarly situated higher-income workers, making participation less appealing for lower-income workers. In addition to somewhat small savings contributions, 401(k) participants can take actions, such as taking loans, withdrawals, or lump-sum cashouts, that reduce the savings they have accumulated. This “leakage” continues to affect the retirement security of some participants. While participants may find features that allow access to 401(k) savings prior to retirement desirable, leakage can result in significant losses of retirement savings from the loss of compound interest as well as the financial penalties associated with early withdrawals. Current law limits participant access to 401(k) savings in order to preserve the favorable tax treatment for retirement savings and ensure that the savings are, in fact, being used to provide retirement income. The incidence and amount of the principal forms of leakage from 401(k) plans have remained relatively steady through the end of 2008. 
For example, we found that approximately 15 percent of 401(k) participants between the ages of 15 and 60 initiated at least one form of leakage in 1998, 2003, and 2006, with loans being the most popular type of leakage in all 3 years. We also found that cashouts made when a worker changed jobs, at any age, resulted in the largest amounts of leakage and the greatest proportional loss in retirement savings. Further, we reported that while most firms informed participants about the short-term costs of leakage, few informed them about the long-term costs. As we reported in August of 2009, experts identified three legal requirements that had likely reduced the overall incidence and amounts of leakage, and another provision that may have exacerbated the long-term effects of leakage. Specifically, experts noted that the requirements imposing a 10 percent tax penalty on most withdrawals taken before age 59½, requiring participants to exhaust their plan's loan provisions before taking a hardship withdrawal, and requiring plan sponsors to preserve the tax-deferred status of accounts with balances of more than $1,000 at job separation all helped reduce 401(k) leakage. However, experts also noted that the requirement for a 6-month suspension of all contributions to an account following a hardship withdrawal exacerbated the effects of leakage. Treasury officials told us that this provision is intended to serve as a test to ensure that the hardship is real and that the participants have no other assets available to address the hardship. However, a few outside experts believed that this provision deters hardship withdrawals and noted that it seems to contradict the goal of creating retirement income. One expert noted that the provision unnecessarily prevented participants who were able to continue making contributions from doing so. 
For example, an employed participant taking a withdrawal for a discrete, one-time purpose, such as paying for medical expenses, may otherwise be able to continue making contributions. In our August 2009 report, we recommended that Congress consider changing the requirement for the 6-month contribution suspension following a hardship withdrawal. We also called for measures to provide participants with more information on the disadvantages of hardship withdrawals. Although participants may choose to take money out of their 401(k) plans, fees and other factors outside of participants' control can also diminish their ability to build their retirement savings. Participants often pay fees, such as investment fees and record-keeping fees, and these fees may significantly reduce retirement savings, even with steady contributions and without leakage. Investment fees, which are charged by companies managing mutual funds and other investment products for all services related to operating the fund, comprise the majority of fees in 401(k) plans and are typically borne by participants. Plan record-keeping fees generally account for the next largest portion of plan fees. These fees cover the cost of various administrative activities carried out to maintain participant accounts. Although plan sponsors often pay for record-keeping fees, participants bear them in a growing number of plans. We previously reported that participants can be unaware that they pay any fees at all for their 401(k) investments. For example, investment and record-keeping fees are often charged indirectly by taking them out of investment returns prior to reporting those returns to participants. Consequently, more than 80 percent of 401(k) participants reported in a nationwide survey not knowing how much they pay in fees. 
The reduction to retirement savings resulting from fees is very sensitive to the size of the fees paid; even a seemingly small fee can have a large negative effect on savings in the long run. As shown in figure 1, an additional 1 percent annual charge for fees would significantly reduce an account balance at retirement. Although all 401(k) plans are required to provide disclosures on plan operations, participant accounts, and the plan's financial status, they are often not required to disclose the fees borne by individual participants. These disclosures are provided in a piecemeal fashion and do not provide a simple way for participants to compare plan investment options and their fees. Some documents that contain fee information are provided to participants automatically, whereas others, such as prospectuses or fund profiles, may require that participants seek them out. According to industry professionals, participants may not know to seek such documents. Most industry professionals agree that information about investment fees—such as the expense ratio, a fund's operating fees as a percentage of its assets—is fundamental for plan participants to compare their options. Participants also need to be aware of other types of fees—such as record-keeping fees and redemption fees or surrender charges imposed for changing and selling investments—to gain a more complete understanding of all the fees that can affect their account balances. Whether participants receive only basic expense ratio information or more detailed information on various fees, presenting the information in a clear, easily comparable format can help participants understand the content of disclosures. In our previous reports, we recommended that Congress consider requiring plan sponsors to disclose fee information on 401(k) investment options to participants, such as the expense ratios, and Congress has introduced several bills to address fee disclosures. 
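The compounding effect described above can be approximated with a simple calculation. The sketch below is illustrative only and is not GAO's model: the $20,000 starting balance, 7 percent gross annual return, fee levels, and 20-year horizon are hypothetical assumptions.

```python
def projected_balance(start: float, gross_return: float,
                      annual_fee: float, years: int) -> float:
    """Compound a starting balance, netting fees out of the return each year.
    All inputs are hypothetical, for illustration only."""
    return start * (1 + gross_return - annual_fee) ** years

# Hypothetical case: $20,000 held for 20 years at a 7 percent gross return.
low_fee = projected_balance(20_000, 0.07, 0.005, 20)   # 0.5 percent annual fee
high_fee = projected_balance(20_000, 0.07, 0.015, 20)  # 1.5 percent annual fee
reduction = (low_fee - high_fee) / low_fee             # share of savings lost to the extra fee
```

Under these assumptions, the additional percentage point of annual fees consumes roughly 17 percent of the ending balance, even though the participant contributes and invests identically in both cases.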
In May 2005, SEC released a report identifying certain undisclosed arrangements in the business practices of pension consultants, which the agency referred to as conflicts of interest, and raising questions about whether some pension consultants are fully disclosing potential conflicts of interest that may affect the objectivity of their advice. The report highlighted concerns that compensation arrangements with brokers who sell mutual funds may provide incentives for pension consultants to recommend certain mutual funds to a 401(k) plan sponsor and create conflicts of interest that are not adequately disclosed to plan sponsors. Plan sponsors may not be aware of these arrangements and thus could select mutual funds recommended by the pension consultant over lower-cost alternatives. As a result, participants may have more limited investment options and may pay higher fees for these options than they otherwise would. Conflicts of interest among plan sponsors and plan service providers can also affect participants' retirement savings. In our prior work on conflicts of interest in DB plans, we found a statistical association between inadequate disclosure of potential conflicts of interest and lower investment returns for ongoing plans, suggesting the possible adverse financial effect of such nondisclosure. Specifically, we detected lower annual rates of return for those ongoing plans associated with consultants that had failed to disclose significant conflicts of interest. These lower rates generally ranged from a statistically significant 1.2 to 1.3 percentage points over the 2000 to 2004 period. Although this work was done for DB plans, some of the same conflicts apply to DC plans as well. Problems may occur when companies providing services to a plan also receive compensation from other service providers. 
Without disclosing these arrangements, service providers may be steering plan sponsors toward investment products or services that may not be in the best interest of participants. Conflicts of interest may be especially hidden when there is a business arrangement between one 401(k) plan service provider and a third-party provider for services that they do not disclose to the plan sponsor. The problem with these business arrangements is that the plan sponsor will not know who is receiving the compensation and whether the compensation fairly represents the value of the service being rendered. Without that information, plan sponsors may not be able to identify potential conflicts of interest and fulfill their fiduciary duty. If the plan sponsors do not know that a third party is receiving these fees, they cannot monitor them, evaluate the worthiness of the compensation in view of services rendered, and take action as needed. Because the risk of 401(k) investments is largely borne by the individual participant, such hidden conflicts can affect participants directly by lowering investment returns. We previously recommended that Congress consider amending the law to explicitly require that 401(k) service providers disclose to plan sponsors the compensation that providers receive from other service providers. Although Congress has not changed the law, Labor has proposed regulations to expand fee and compensation disclosures to help address conflicts of interest. A recent change in law to facilitate automatic enrollment shows promise for increasing participation rates and savings. Under automatic enrollment, a worker is enrolled into the plan automatically, or by default, unless the worker explicitly chooses to opt out. 
In addition, for participants who do not make their own choices, plan sponsors also establish default contribution rates—the portion of an employee's salary that will be deposited in the plan—and a default investment fund—the fund or other vehicle into which deferred savings will be invested. The Pension Protection Act of 2006 and recent regulatory changes have facilitated plan sponsors' adoption of automatic enrollment. In fact, plan sponsors have increasingly been adopting automatic enrollment policies in recent years. According to Fidelity Investments, the percentage of plans with automatic enrollment increased from 1 percent in December 2004 to about 16 percent in March 2009, with higher rates of adoption among larger plan sponsors. Fidelity Investments estimates that 47 percent of all 401(k) participants are in plans with automatic enrollment. Employers may also adopt an automatic escalation policy, another policy intended to increase retirement savings. Under automatic escalation, in the absence of an employee indicating otherwise, an employee's contribution rates would be automatically increased at periodic intervals, such as annually. For example, if the default contribution rate is 3 percent of pay, a plan sponsor may choose to increase an employee's rate of saving by 1 percent per year, up to some maximum, such as 6 percent. One of our recent reports found that automatic enrollment policies can result in considerably increased participation rates for plans adopting them, with some plans' participation rates increasing to as high as 95 percent, and that these high participation rates appeared to persist over time. Moreover, automatic enrollment had a significant effect on subgroups of workers with relatively low participation rates, such as lower-income and younger workers. For example, according to a 2007 Fidelity Investments study, only 30 percent of workers aged 20 to 29 were participating in plans without automatic enrollment. 
In plans with automatic enrollment, the participation rate for workers in that age range was 77 percent, a difference of 47 percentage points. Automatic enrollment, through its default contribution rates and default investment vehicles, offers an easy way to start saving because participants do not need to decide how much to contribute and how to invest these contributions unless they are interested in doing so. However, current evidence is mixed with regard to the extent to which plan sponsors with automatic enrollment have also adopted automatic escalation policies. In addition, many plan sponsors have adopted relatively low default contribution rates. While the adoption rate for automatic enrollment shows promise, a lag in adoption of automatic escalation policies, in combination with low default contribution rates, could result in low saving rates for participants who do not increase contribution rates over time. Another recent GAO report offers additional evidence about the positive impact automatic enrollment could have on workers' savings levels at retirement. Specifically, we projected DC pension benefits for a stylized scenario where all employers that did not offer a pension plan were required to sponsor a DC plan with no employer contribution; that is, workers had universal access to a DC plan. When we coupled universal access with automatic enrollment, we found that approximately 91 percent of workers would have DC savings at retirement. Further, we found that about 84 percent of workers in the lowest income quartile would have accumulated DC savings. In our work on automatic enrollment, we found that plan sponsors have overwhelmingly adopted target-date funds (TDFs) as the default investment. 
TDFs allocate their investments among various asset classes and shift that allocation from equity investments to fixed-income and money market investments as a “target” retirement date approaches; this shift in asset allocation is commonly referred to as the fund's “glide path.” Recent evidence suggests that participants who are automatically enrolled in plans with TDF defaults tend to have a high concentration of their savings in these funds. However, pension industry experts have raised questions about the risks of TDFs. For example, some TDFs designed for those expecting to retire in or around 2010 lost 25 percent or more in value following the 2008 stock market decline, leading some to question how plan sponsors evaluate, monitor, and use TDFs. GAO will be addressing a request from this committee to examine some of these concerns. DC plans, particularly 401(k) plans, have clearly overtaken DB plans as the principal retirement plan for U.S. workers and are likely to become the sole retirement savings plan for most current and future workers. Yet, 401(k) plans face major challenges, not least of which is the fact that many employers do not offer 401(k) plans or any other type of plan to their workers. This lack of coverage, coupled with the fact that participants in 401(k) plans sometimes spend their savings prior to retirement or have their retirement savings eroded by fees, makes it evident that, without some changes, a large number of people will retire with little or no retirement savings. Employers, workers, and the government all have to work together to ensure that 401(k) plans provide a meaningful contribution to retirement security. Employers have a role in first sponsoring 401(k) plans and then looking at ways to encourage participation, such as utilizing automatic enrollment and automatic escalation. Workers have a role in participating and saving in 401(k) plans when they are given the opportunity to do so. 
In addition, both employers and workers have a role in preserving retirement savings. Government policy makers have an important role in setting the conditions and appropriate incentives that both encourage desired savings behavior and protect participants. Recent government action that has helped enhance participation in 401(k) plans is a good first step. But action is still needed to improve disclosure on fees, especially those that are hidden, and measures need to be taken to discourage leakage. As this Committee and others move forward to address these issues, improvements may be made to 401(k) plans that can help assure that savings in such plans are an important part of individuals' secure retirement. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further questions about this statement, please contact Barbara D. Bovbjerg at (202) 512-7215 or bovbjergb@gao.gov. Individuals making key contributions to this statement included Tamara Cross, David Lehrer, Joseph Applebaum, James Bennett, Jennifer Gregory, Angela Jacobs, Jessica Orr, and Craig Winslow. Retirement Savings: Automatic Enrollment Shows Promise for Some Workers, but Proposals to Broaden Retirement Savings for Other Workers Could Face Challenges. GAO-10-31. Washington, D.C.: October 23, 2009. Retirement Savings: Better Information and Sponsor Guidance Could Improve Oversight and Reduce Fees for Participants. GAO-09-641. Washington, D.C.: September 4, 2009. 401(k) Plans: Policy Changes Could Reduce the Long-term Effects of Leakage on Workers' Retirement Savings. GAO-09-715. Washington, D.C.: August 28, 2009. Private Pensions: Alternative Approaches Could Address Retirement Risks Faced by Workers but Pose Trade-offs. GAO-09-642. Washington, D.C.: July 24, 2009. Private Pensions: Conflicts of Interest Can Affect Defined Benefit and Defined Contribution Plans. GAO-09-503T. 
Washington, D.C.: March 24, 2009. Private Pensions: Fulfilling Fiduciary Obligations Can Present Challenges for 401(k) Plan Sponsors. GAO-08-774. Washington, D.C.: July 16, 2008. Private Pensions: GAO Survey of 401(k) Plan Sponsor Practices (GAO-08-870SP, July 2008), an E-supplement to GAO-08-774. GAO-08-870SP. Washington, D.C.: July 16, 2008. Private Pensions: Low Defined Contribution Plan Savings May Pose Challenges to Retirement Security, Especially for Many Low-Income Workers. GAO-08-8. Washington, D.C.: November 29, 2007. Private Pensions: Information That Sponsors and Participants Need to Understand 401(k) Plan Fees. GAO-08-222T. Washington, D.C.: October 30, 2007. Private Pensions: 401(k) Plan Participants and Sponsors Need Better Information on Fees. GAO-08-95T. Washington, D.C.: October 24, 2007. Employer-Sponsored Health and Retirement Benefits: Efforts to Control Employer Costs and the Implications for Workers. GAO-07-355. Washington, D.C.: March 30, 2007. Private Pensions: Increased Reliance on 401(k) Plans Calls for Better Information on Fees. GAO-07-530T. Washington, D.C.: March 6, 2007. Employee Benefits Security Administration: Enforcement Improvements Made but Additional Actions Could Further Enhance Pension Plan Oversight. GAO-07-22. Washington, D.C.: January 18, 2007. Private Pensions: Changes Needed to Provide 401(k) Plan Participants and the Department of Labor Better Information on Fees. GAO-07-21. Washington, D.C.: November 16, 2006. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the past 25 years, the number of defined benefit (DB) plans has declined while the number of defined contribution (DC) plans has increased. Today, DC plans are the dominant type of employer-sponsored retirement plans, with more than 49 million U.S. workers participating in them. 401(k) plans currently cover over 85 percent of active DC plan participants and are the fastest growing type of employer-sponsored pension plan. Given these shifts in pension coverage, workers are increasingly relying on 401(k) plans for their pension income. Recently, policy makers have focused attention on the ability of 401(k) plans to provide participants with adequate retirement income and the challenges that arise as 401(k) plans become the predominant retirement savings plan for employees. As a result, GAO was asked to report on (1) challenges to building and maintaining savings in 401(k) plans, and (2) recent measures to improve 401(k) participation and savings levels. There are challenges to building and saving through 401(k) plans. While low participation rates may be due, in part, to the fact that some workers participate in DB plans, there is also a large portion of workers who do not have access to an employer-sponsored retirement plan, as well as some who do not enroll in such a plan when an employer offers it. We found that for those who did participate, their overall balances were low, particularly for low-income and older workers who either did not have the means to save or have not had the opportunity to save in 401(k)s for much of their working lifetimes. There are also challenges workers face in maintaining savings in 401(k) plans. 
For example, 401(k) leakage--actions participants take that reduce the savings they have accumulated, such as borrowing from the account, taking hardship withdrawals, or cashing out the account when they change jobs--continues to affect retirement savings and increases the risk that 401(k) plans may yield insufficient retirement income for individual participants. Further, various fees, such as investment and other hidden fees, can erode retirement savings and individuals may not be aware of their impact. Automatic enrollment of employees in 401(k) plans is one measure to increase participation rates and saving. Under automatic enrollment, which was encouraged by the Pension Protection Act of 2006 and recent regulatory changes, employers enroll workers into plans automatically unless they explicitly choose to opt out. Plan sponsors are increasingly adopting automatic enrollment policies, which can considerably increase participation rates, with some plans' rates reaching as high as 95 percent. Employers can also set default contribution rates and investment funds. Though target-date funds are a common type of default investment fund, there are concerns about their risks, particularly for participants nearing retirement.
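The target-date fund "glide path" discussed above, a gradual shift out of equities and into fixed-income investments as the target retirement date approaches, can be sketched as a simple function. This is a minimal illustration only: the linear schedule, the starting and ending equity shares, and the 40-year horizon are hypothetical assumptions, not features of any actual fund.

```python
def equity_share(age: int, retirement_age: int = 65,
                 start_share: float = 0.90, end_share: float = 0.30,
                 horizon: int = 40) -> float:
    """Equity allocation under a hypothetical linear glide path: high equity
    exposure early in a career, declining toward the target date."""
    years_left = max(retirement_age - age, 0)
    fraction_remaining = min(years_left / horizon, 1.0)
    return end_share + (start_share - end_share) * fraction_remaining

# A young worker holds mostly equities; a worker at the target date holds far less.
early, late = equity_share(25), equity_share(65)
```

Actual TDFs differ widely in the shape and endpoint of their glide paths, which is one reason some 2010-dated funds still held enough equity to lose 25 percent or more of their value in the 2008 market decline.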
Various laws and directives guide DHS's role in critical infrastructure protection, including the Homeland Security Act of 2002, as amended, the Homeland Security Presidential Directive/HSPD-7, and most recently, Presidential Policy Directive/PPD-21, which was issued on February 12, 2013. Consistent with HSPD-7, which directed DHS to establish uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities within and across CI sectors, 18 CI sectors were established. PPD-21, among other things, is intended to refine and clarify critical infrastructure-related functions, roles, and responsibilities across the federal government, and enhance overall coordination and collaboration. Pursuant to PPD-21, which expressly revoked HSPD-7, 2 of the 18 sectors were incorporated into existing sectors, thereby reducing the number of CI sectors from 18 to 16 (app. I lists the CI sectors and their SSAs). PPD-21 directs DHS to, among other things, coordinate the overall federal effort to promote the security and resilience of the nation's critical infrastructure. PPD-21 also recognizes that DHS, in carrying out its responsibilities under the Homeland Security Act, evaluates national capabilities, opportunities, and challenges in protecting critical infrastructure; analyzes threats to, vulnerabilities of, and potential consequences from all hazards on critical infrastructure; identifies security and resilience functions that are necessary for effective public-private engagement with all critical infrastructure sectors; and integrates and coordinates federal cross-sector security and resilience activities and identifies and analyzes key interdependencies among critical infrastructure sectors. Within DHS, NPPD's IP is responsible for various activities intended to enhance CI protection and resilience across a number of sectors. 
While other entities may possess and exercise regulatory authority over CI to address security, such as for the chemical, transportation, and nuclear sectors, IP generally relies on voluntary efforts to secure CI because, in general, DHS has limited authority to directly regulate CI. In carrying out its responsibilities, IP leads and coordinates national programs and policies on critical infrastructure issues and, among other things, conducts and facilitates security surveys and vulnerability assessments to help CI owners and operators and state, local, tribal, and territorial partners understand and address risks. In so doing, IP is responsible for working with public and private sector CI partners in the 16 sectors and leads the coordinated national effort to mitigate risk to the nation’s CI through the development and implementation of CI protection and resilience programs. IP’s Protective Security Coordination Division (PSCD) provides programs and initiatives to enhance CI protection and resilience and reduce risk associated with all-hazards incidents. In so doing, PSCD works with CI owners and operators and state and local responders to (1) assess vulnerabilities, interdependencies, capabilities, and incident consequences; (2) develop, implement, and provide national coordination for protective programs; and (3) facilitate CI response to and recovery from incidents. Related to these efforts, PSCD has deployed 91 PSAs in 50 states and Puerto Rico, with deployment locations based on population density and major concentrations of CI. In these locations, PSAs are to act as the links between state, local, tribal, and territorial organizations and DHS infrastructure mission partners in the private sector and are to assist with ongoing state and local CI security efforts. 
PSAs are also to support the development of the national risk picture by conducting vulnerability and security assessments to identify security gaps and potential vulnerabilities in the nation’s most critical infrastructures. In addition, PSAs are to share vulnerability information and protective measure suggestions with local partners and asset owners and operators. As discussed earlier, DHS developed the RRAP to assess vulnerability and risk associated with dependent and interdependent infrastructure clusters and systems in specific geographic areas. RRAP projects are intended to evaluate CI on a regional level to identify facilities and sectors that are dependent on one another, or interdependent. RRAP projects also identify situations where failures at facilities or sectors would lead to failures at other facilities or sectors, characteristics that make facilities and regions within the study resilient to disruptions, and resilience vulnerabilities that could promote or foster disruptions. According to DHS officials, the sectors selected to be studied as part of a RRAP project may vary based on priorities of IP and the state(s) where the RRAP occurs, that is, the “sector” focus can be narrow or broad, depending on the concerns of the state. For example, a transportation sector RRAP project in one state focused only on bridges, while another RRAP project in another state examined lifeline sectors. The region or area covered by the RRAP project can also vary substantially. For example, the size of the “region” under study in a RRAP project in Colorado covered a few square miles within a city. Conversely, another RRAP covered an entire industry spread across a large state and yet another RRAP is looking at infrastructure that crosses 12 states. Accordingly, RRAP projects have been conducted in various locations throughout the country covering a wide variety of CI sectors and regions. 
These RRAP projects include one covering the financial district in Chicago; three covering commercial facilities in cities like Minneapolis, Atlanta, and Las Vegas; and one covering energy production facilities managed by the Tennessee Valley Authority. Figure 1 provides a map showing the states where RRAP projects have been completed or are planned. According to DHS officials, the current process for conducting a RRAP project can take from 18 to 24 months from start to finish. The process includes selecting and scoping RRAP projects from proposals; assembling and preparing a RRAP team of federal, state, and local partners; training the states via webinar (i.e., stakeholder awareness training); conducting an introductory kickoff (i.e., outreach) meeting; gathering preliminary data and selecting sites to be included in the project; scheduling meetings with asset owners or operators of the sites; conducting ongoing analyses using data derived from performing the aforementioned vulnerability and security assessments at facilities; conducting stakeholders' meetings for training purposes and to discuss regional resilience issues; preparing a draft report for state review; incorporating the state's feedback into a final report; and establishing a process to follow up with stakeholders to, among other things, periodically update their progress in making RRAP-related enhancements. The final RRAP report typically includes a description of the key findings of the vulnerabilities in the sector(s) and region under study, including vulnerabilities for individual facilities, a hazard and risk analysis for the region and sector under review, and an analysis of dependencies and interdependencies. 
Also included in the RRAP report are resilience enhancement options that provide the report recipient suggestions to address key findings and mitigate the identified vulnerability or weakness, and a list of organizations or funding sources that could provide the state and other stakeholders with support if they choose to implement an identified resilience enhancement option. RRAP reports can provide insights into the resilience of a region and the sector(s) under review and the gaps that could prompt regional disruptions. Another aspect of the program centers on DHS’s efforts to use RRAP projects to build stakeholder relationships and enhance information sharing and coordination among stakeholders in a particular region. For example, one RRAP report stated that fostering relationships between key facilities and supporting infrastructure providers was necessary to improve response to a hazard or incident. Another RRAP project sought to coordinate a partnership of key players and stakeholders (including both public and private sector stakeholders in the sector of focus and local law enforcement) to improve information sharing necessary for responding to a contamination in the food supply system. According to DHS officials, the creation and continuation of these stakeholder relationships is a major benefit of RRAP projects and the RRAP process. DHS officials said it is often the case that regional CI stakeholders were not acquainted and did not understand how their own operations were related to those of other stakeholders until the RRAP was conducted. For fiscal year 2013, as in past fiscal years, the RRAP does not have a budget line item; rather, the costs for the program are funded with resources budgeted for DHS’s vulnerability assessment program and for PSAs. DHS officials estimated that the cost to PSCD for the average RRAP project is currently less than $1 million, including IP assessments, contractor support, and travel and administrative costs.
The estimate does not include costs incurred for services rendered by other IP branches that participate in RRAP projects, like IP’s National Infrastructure Simulation and Analysis Center (NISAC), which, among other things, develops computerized simulations of the effect of an all-hazards event on particular geographic areas. The estimate also does not include costs incurred by other SSAs, or the states and localities participating in a RRAP project. PSCD has developed criteria that consider various factors when selecting possible locations and sectors for RRAP projects. PSCD uses the criteria to develop lists of RRAP project candidates, and officials use these lists to make final project selections. However, PSCD officials do not fully document why certain project candidates are or are not recommended for selection by the IP Assistant Secretary. IP’s approach for identifying and selecting RRAP projects has evolved since the program’s inception in 2009. For fiscal years 2009 and 2010, IP headquarters officials stated that they identified and selected RRAP project locations and sectors based on IP interests and preferences while considering input from primary stakeholders. IP officials told us that they relied heavily on IP’s interests and preferences because they considered RRAP projects conducted during this time frame as pilot projects. For fiscal years 2011 and 2012, IP officials stated that they refined their process for identifying and selecting RRAP projects to incorporate more input from primary stakeholders. For example, IP officials developed a RRAP project template for PSAs and states to use when jointly developing RRAP project proposals. The template included information on regional characteristics and risk, the willingness of state and facility stakeholders to participate, potential outcomes of the RRAP analysis, and planning and logistical considerations.
While considering project proposals that states and PSAs jointly developed using the template, IP headquarters officials also developed their own RRAP project proposals (using open source documents for major metropolitan areas) to ensure IP leadership could consider a range of projects across a variety of sectors and locations. IP officials stated that when selecting projects during fiscal years 2011 and 2012, they considered, among other factors, information obtained from the template and, if applicable, risk-based factors such as the concentration of critical infrastructure, and IP management judgment as to the feasibility of conducting the project. More recently, for projects planned to begin in fiscal year 2013, IP took two actions to further revise its RRAP project identification and selection process. First, IP revised its process from that used in previous years by considering only RRAP project proposals submitted jointly by PSAs and states. According to IP officials, they made this change to help ensure that RRAP locations and sectors reflected state priorities, particularly in light of lessons learned from past RRAP projects and feedback from SLTTGCC. In a 2011 report on state and local government CI resilience activities, SLTTGCC expressed, among other things, concern about the scope of RRAP projects—particularly when states did not request the RRAP project—and the cost and resources required to be involved in a RRAP project. Second, IP officials developed nine-point selection criteria to identify lists of potential RRAP project candidates. IP officials stated that they developed the criteria to help evaluate proposals and to develop lists of potential candidate projects given the volume of proposals generated by states and PSAs and the DHS resources available to conduct RRAP projects.
IP officials told us that they asked PSAs and PSA regional directors who had previously conducted RRAP projects to review the criteria before the criteria were finalized to provide assurance that the criteria reflected lessons learned. Our review shows that the criteria focus on nine questions in four broad categories: whether the proposed project (1) is feasible, (2) promotes partnering with important stakeholders, (3) will produce results with broad applicability to other locations, and (4) accounts for risk-based factors. These criteria were used to evaluate the RRAP project proposals used to make the fiscal year 2013 and 2014 RRAP project recommendations. Table 1 lists the criteria IP uses to develop a list of feasible RRAP project candidates. A more detailed explanation of these criteria can be found in app. II, table 3. DHS analysts may conduct supplemental research or contact PSAs or state officials to gather additional information. For example, to determine whether the proposed project is likely to produce original key findings and resiliency enhancement options, the analyst may reach out to the PSA and other critical infrastructure stakeholders to see if the state or other organization has initiated similar work to avoid duplicative activities. Candidate projects identified using the criteria above are then referred to PSCD officials for further consideration, and PSCD officials select among those candidates to develop a list of recommended projects for approval by the IP Assistant Secretary. Figure 2 depicts IP’s current RRAP proposal and selection process, as of May 2013. According to PSCD officials, the Assistant Secretary for IP selects projects from among those candidates PSCD officials recommend, but PSCD officials did not fully document why specific project candidates were or were not recommended for selection. For fiscal years 2013 and 2014, IP analysts identified 22 project candidates that scored a seven or greater.
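The screening step described above can be illustrated with a short sketch: each proposal is scored against nine yes/no criteria spanning the four categories, and candidates scoring seven or greater are referred to PSCD for further consideration. This is an illustrative sketch only; the criterion names, data layout, and scoring logic are assumptions based on the description in this report, not DHS’s actual criteria wording or tooling.

```python
# Illustrative sketch of the RRAP proposal screening described above.
# Criterion names below are hypothetical stand-ins for IP's nine
# questions in four categories (feasibility, partnering, broad
# applicability, and risk-based factors).

THRESHOLD = 7  # proposals scoring 7 or more of the 9 criteria advance

CRITERIA = [
    "feasible_scope", "feasible_resources", "feasible_schedule",  # feasibility
    "partners_state", "partners_owners_operators",                # partnering
    "findings_broadly_applicable", "original_findings",           # applicability
    "risk_concentration", "risk_consequence",                     # risk-based
]

def score_proposal(proposal):
    """Count how many of the nine criteria a proposal satisfies."""
    return sum(1 for c in CRITERIA if proposal.get(c, False))

def screen(proposals):
    """Return (score, proposal) pairs at or above the threshold,
    highest score first, for PSCD's further consideration."""
    scored = [(score_proposal(p), p) for p in proposals]
    return sorted(
        ((s, p) for s, p in scored if s >= THRESHOLD),
        key=lambda sp: sp[0],
        reverse=True,
    )
```

Note that, as the report describes, the score alone does not decide selection: PSCD officials apply further judgment (geographic and sector diversity, strategic priorities, and available resources) to the screened list.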
PSCD officials stated that after further review, they recommended that the Assistant Secretary select 16 of the 22 projects—10 to be conducted in fiscal year 2013 and 6 to be conducted in fiscal year 2014. For fiscal year 2013, the IP Assistant Secretary selected all 10 of PSCD’s recommended project candidates. According to PSCD officials, the Assistant Secretary plans to make final fiscal year 2014 project selections in October 2013. For the 16 projects, IP officials told us they provided the Assistant Secretary information about each of the recommended project candidates. However, PSCD officials did not document why individual projects were recommended over others, including candidate projects that received the same score—they stated that they believe providing such information on the projects that are recommended is sufficient. For example, 1 of the fiscal year 2014 candidate projects recommended to the Assistant Secretary—a health care sector project in New Jersey—had a score of seven. By contrast, 3 other potential candidates—1 food and agriculture sector project in Pennsylvania, a transportation sector project in South Carolina, and a lifeline sector project in the U.S. Virgin Islands—each scored an eight, and none were recommended to the Assistant Secretary for selection. Although PSCD officials did not provide documentation, PSCD officials explained that there can be a variety of reasons why they recommend that the Assistant Secretary select 1 RRAP project over another—including geographic and sector diversity, IP’s strategic priorities, and the availability of PSCD resources. Additionally, PSCD officials provided examples of why some projects were recommended over others. For example, PSCD officials told us that one PSA had submitted three separate proposals, all of which received scores of seven or above, but PSCD recommended only one of the three for selection by the Assistant Secretary because a PSA can participate in only one RRAP at a time.
In another case, PSCD officials told us that an international partner for a cross-border transportation project could not participate because of resource constraints. However, without documentation, we were unable to determine why PSCD recommended 1 project candidate that scored a seven over the 3 other potential candidates that scored an eight. Standards for Internal Control in the Federal Government states that all transactions and significant events should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. The standards further call for all transactions and significant events to be clearly documented and readily available for examination to inform decision making. Recording and documenting key decisions are among the suite of control activities that are an essential part of an agency’s planning, implementing, and reviewing, and they are essential for proper stewardship and accountability for government resources and achieving efficient and effective program results. Documenting the rationale for making project selections would provide DHS managers and others responsible for overseeing the program valuable insights into why 1 RRAP project was selected over another, particularly among proposals with the same score that appear equally feasible and worthy. DHS officials agreed that maintaining this documentation could help support the recommendations and answer any potential questions about final project selections. Maintaining documentation about reasons why projects were or were not selected would also provide DHS a basis for defending its selections or responding to queries about them, particularly given the desirability of the program among the states and budgetary constraints facing states and other potential RRAP stakeholders.
Regarding the budgetary constraints, states or other stakeholders, such as local, tribal, or territorial government entities, might be interested in knowing why a RRAP project proposal was not selected so that they could make decisions about (1) whether to dedicate additional resources to refining a RRAP proposal for future years, or (2) whether to adjust the scope of their involvement in a future RRAP based on anticipated budgetary resource increases or constraints. With documentation on why projects were or were not recommended and selected, DHS would be better positioned to respond to queries about project selections from potential RRAP stakeholders, particularly if senior managers or staff currently involved in the program move to other positions and new managers or staff do not have records about key decisions. Since 2011, IP has worked with states to improve the RRAP process, and IP officials said these efforts are viewed favorably by primary stakeholders. IP shares the project results of each RRAP with the primary stakeholder, and each report is generally available to IP staff, including PSAs and SSAs within IP, but IP does not share individual reports with others, including other primary stakeholders and SSAs outside of DHS. According to IP officials, IP has begun to conceptualize how it can develop a resilience product or products using multiple sources—including RRAP reports—to distribute to CI partners, and is using various forums to solicit input from CI partners to gauge their resilience information needs. In May 2011, SLTTGCC expressed concerns about states being selected to conduct a RRAP project before first being provided information on the time, cost, and scope of conducting a RRAP project. SLTTGCC established its RRAP Working Group in September 2011 in response to states’ experiences participating in the RRAP in fiscal years 2009 and 2010, with the goal to help ensure that other states had better experiences with DHS in future RRAP projects.
In addition, some RRAP project participants we interviewed told us that maintaining the RRAP project schedule had been a challenge. Specifically, officials representing 5 of the 10 primary stakeholders we contacted in locations where RRAP projects had taken place from fiscal years 2009 through 2011 told us that they had encountered challenges completing RRAP projects within a specific time frame. Moreover, 12 of the 20 PSAs we contacted agreed that it was challenging to schedule meetings, such as kickoff meetings that required all key stakeholders to be in the same room during the meetings. Six of these PSAs also said it was challenging to get all required surveys and assessments completed in the short data-gathering period (usually 2 months). IP officials told us that they took actions to address these challenges by improving communication with participants about the scope of RRAP projects before they were selected and while projects were ongoing. These officials stated that this included setting expectations early on to inform stakeholders when particular RRAP events are scheduled to occur, including scheduling vulnerability assessments and group discussions among the various stakeholders participating in the RRAP. Officials representing two of the four primary stakeholders that participated in the fiscal year 2009 or 2010 RRAP projects and were active in SLTTGCC stated that they believed IP has improved the conduct of later projects. One of these state officials said including states in the proposal development process and helping states to understand the time, costs, and benefit of the RRAP project prior to initiating the project made the execution of RRAP projects go more smoothly. IP officials told us that they have since received positive feedback from the states regarding these changes, and our discussions with a representative of SLTTGCC confirmed that they believe that DHS’s revised proposal development process had been beneficial to them.
IP shares individual RRAP reports with the primary stakeholders—officials representing the state where the RRAP was conducted—but has generally limited the distribution of the reports to those officials. According to IP, individual RRAP project reports are provided directly to primary stakeholders. PSAs and others that have access to the IP Gateway may also view RRAP reports. When the RRAP report contains Protected Critical Infrastructure Information (PCII), distribution and access to those reports is limited to individuals that are authorized to receive such information. Upon the request of a primary stakeholder, IP will also prepare For Official Use Only (FOUO) versions of RRAP reports—which, although sensitive, may be shared with a broader audience than PCII versions—to share with primary stakeholders. When this occurs, IP develops FOUO and PCII versions of RRAP reports—and primary stakeholders can share FOUO results with whomever they deem appropriate or necessary, including other RRAP participants. Otherwise, to share information within PCII reports, states would need to identify the FOUO information within the PCII report or request that IP clear the recipient for access to PCII information. During our review, 13 of 17 RRAP projects had both PCII and FOUO versions of RRAP reports; the other 4 projects had PCII versions only. IP officials told us that state officials can share FOUO versions of RRAP reports more readily than PCII versions of the reports. Furthermore, PSAs told us they share RRAP-derived information with CI partners—both those who participated in the RRAP and those who did not—during the course of their PSA duties as appropriate. IP officials told us that they do not distribute non-PCII versions of RRAP reports more broadly because the individual state is the primary stakeholder for a particular RRAP report.
They said that they consider the state to be the owner of the information and believe that any party who wants the information should go to the state. Officials said they provide point-of-contact information for the primary stakeholder of a particular RRAP project to those who want to request a RRAP report from that primary stakeholder. IP does not proactively distribute RRAP reports to SSAs whose sectors are the focus of the RRAP project. Officials representing eight of the nine SSAs we contacted told us they do not generally receive RRAP reports and may be unaware the reports exist. Representatives of two SSAs stated that they did not know about the existence of certain RRAP reports for their sector, and officials representing two others told us they made multiple requests before receiving RRAP reports from DHS. IP officials stated that SSAs should be able to receive a copy of any RRAP report in which they participated and stated that it was possible that we did not speak to the appropriate SSA representatives—those that participated in the RRAP projects. IP officials also stated that RRAP reports are on the IP Gateway and IP SSAs—chemical, commercial facilities, critical manufacturing, dams, emergency services, and nuclear sectors—have access to these reports, but other SSAs may have to make specific requests to IP or the primary stakeholder in order to receive the RRAP reports because not all of these SSAs have access to the IP Gateway and PCII information. IP officials told us that they intend to share a FOUO copy of a RRAP report on regional energy pipelines with the non-IP SSAs who participated. IP officials stated that the regional energy pipelines RRAP project is not expected to be completed until the latter part of 2013.
IP is in the early stages of developing an approach—either a product or a series of products—to share resilience-related lessons learned, but plans are in the early concept stage and few specifics are available regarding the contents of these products. According to IP officials, the planned product or products are not to be limited to RRAP project data or findings. Rather, they will leverage RRAP data and common observations or findings; data from security surveys and vulnerability assessments done at individual assets or facilities; and open source information to communicate collective results, lessons learned, and best practices that can contribute to ongoing local, state, regional, and national efforts to strengthen the resilience of critical infrastructure systems. IP officials anticipate that the first product, or products, will be available for distribution before the end of fiscal year 2013. With regard to the planned resilience product(s), IP officials cautioned that (1) this effort is in the conceptual stage, (2) DHS has not approved funding for the product(s), and (3) the product or products are not expected to be ready for distribution until later this year at the earliest. IP officials further stated that it is too early to determine whether this approach will be an effective means to share resilience information across the spectrum of CI partners, to include states and SSAs. Nonetheless, IP officials told us that they engage CI partners, such as SLTTGCC’s working groups on RRAP and information sharing, and during their participation in sector agency meetings and private sector coordination council meetings where, according to officials, the views of SSAs and CI owners and operators are discussed. For example, IP officials said they have had specific discussions with CI partners concerning state resilience information needs, and they are considering this input as they begin to develop a resilience product or products.
They said that they also are considering feedback on information needs that they receive at regional conferences attended by various CI partners, and during daily PSA contacts in the field, primarily with CI owners and operators. IP’s efforts to solicit feedback from CI partners during development of any resilience information-sharing product or products are consistent with the NIPP, which states that when the government is provided with an understanding of information needs, it can adjust its information collection, analysis, synthesis, and sharing accordingly. Through outreach and engagement with CI partners, DHS should be better positioned to understand their needs for information about resilience practices. This outreach also helps DHS clarify the scope of work needed to develop a meaningful resilience information-sharing product or products that are useful across sectors and assets, and ascertain how the information can best be disseminated to the various CI partners—issues that could be critical given current budgetary constraints and uncertainty over the availability of resources. PSCD uses follow-up surveys at facilities that have undergone vulnerability assessments and security surveys, including those that participate in RRAP projects, and has initiated a broad data-gathering effort with its RRAP CI stakeholders to explore changes in diverse topics such as partnering and state actions based on RRAP participation. These are important steps to provide insight about RRAP projects, but PSCD faces challenges developing performance measures and is not positioned to gauge the RRAP’s impact on regional resilience. According to the NIPP, the use of performance measures is a critical step in the risk management process to enable DHS to objectively and quantitatively assess improvement in CI protection and resilience at the sector and national levels.
The NIPP states that the use of performance metrics provides a basis for DHS to establish accountability, document actual performance, promote effective management, and provide a feedback mechanism to decision makers. IP gathers data from individual facilities, including those that participated in RRAP projects, with the intent of measuring the efforts of those facilities to make enhancements arising out of security surveys and vulnerability assessments performed during RRAP projects. As discussed earlier, PSAs support the development of the national risk picture by conducting vulnerability assessments and security surveys to identify security gaps and potential vulnerabilities in the nation’s most critical infrastructure. PSAs perform these surveys and assessments at individual assets and facilities, including those that participate in RRAP projects, across the 16 sectors. In January 2011, IP directed PSAs to follow up with security survey and vulnerability assessment participants to gather feedback on security and resilience enhancements at their facilities using standardized data collection tools. These follow-up tools were to be used by PSAs to ask asset representatives about enhancements in six general categories—information sharing, security management, security force, protective measures, physical security, and dependencies—and focused on changes made directly as a result of IP security surveys and vulnerability assessments. According to IP officials, PSCD revised its security survey and vulnerability assessment in January 2013 to include additional resilience-related questions intended to focus on facility preparedness, mitigation measures, response capabilities, and recovery mechanisms among facilities that participated in a security survey or vulnerability assessment.
In addition, officials said beginning after July 2013, facilities that received a survey or assessment using the revised resilience questions are also to receive a PSA follow-up visit that reflects those same updated questions. IP officials said that revisions to the follow-up tools will also reflect changes associated with security and resilience enhancements at the facility, distinguishing them as either security or resilience changes. Officials said security surveys and vulnerability assessments that were conducted on facilities in support of a RRAP project are noted as such in the IP Gateway, but there is no other additional or separate tracking for the purposes of performance metrics. Furthermore, officials said they continue to gather data on changes initiated at facilities that participated in the RRAP, but they believe it may not be possible to link any changes made at facilities to participation in the RRAP. They added that resilience improvements made at individual facilities do not necessarily address regional vulnerabilities identified in RRAP reports. IP has considered how it intends to measure results associated with RRAP projects—not just facilities within projects—but faces challenges doing so. In January 2012, IP developed a project management plan (PMP) intended to clarify planned performance metrics for IP’s vulnerability assessment programs, including the voluntary security surveys and vulnerability assessments performed during RRAP projects. The PMP stated that DHS planned to measure the impact of RRAP projects by conducting follow-up checks at RRAP facilities to see if these facilities or systems implemented changes that increased the resilience of the facility. The PMP set a goal of 20 percent of facilities making resilience improvements following a security survey or vulnerability assessment performed for RRAP projects for fiscal year 2013, rising to 50 percent of facilities by fiscal year 2017.
The PMP stated that this facility information is to be used to compile resilience information for the region, but it did not explain how this information would be combined to measure regional resilience. In April 2013, IP officials told us that they no longer intended to use the performance targets contained in the PMP. IP officials explained that they believe that individual facility assessment follow-ups are not an effective means to measure the impact of a RRAP project. They said that RRAP findings are written for the primary stakeholder—the state and not the assessed facilities—and RRAP projects most often provide analyses of larger regional issues rather than specific facility gaps. Instead, PSCD officials stated that they have since developed the RRAP Findings Tracker to engage primary stakeholders about their efforts to address key findings resulting from individual RRAP projects. According to PSCD officials, in March 2013, the RRAP Findings Tracker was distributed to all PSAs who had conducted a RRAP project over the previous 3 years. PSAs were directed by IP to use the RRAP Findings Tracker to follow up with the state and other stakeholders on specific RRAP issues identified in those states. IP updates the tracker on a monthly basis, and headquarters officials are to review the results every 6 months.
The RRAP Findings Tracker is intended to cover, among other things: developments that demonstrate project relevance since the RRAP project was initiated, for instance, news reports, speeches, or studies that demonstrate the ongoing relevance of the project’s focus; partnership building and information sharing, to include developments that relate to how project stakeholders—whether state, regional, federal, or private sector—have enhanced interaction, awareness, communication, and information sharing; any action taken concerning the RRAP report’s key findings, particularly with regard to enhancement options specified in the RRAP report; and activities at specific individual assets assessed during the RRAP and their efforts to enhance resilience, including the percentage of assessed assets that have made an improvement or planned to make an improvement after 6 and 12 months. PSCD officials said that they believe that by utilizing the information in the Findings Tracker, they would likely have greater insights into the extent that stakeholders take action following a RRAP project, such as the extent to which the project has improved communication among RRAP stakeholders. According to officials, in May 2013, they began having preliminary discussions about using the RRAP Findings Tracker as one input for developing possible metrics. They added that it would be premature for them to provide us with any of the preliminary draft ideas for metrics associated with this effort. Nonetheless, IP officials also stated they face challenges measuring performance across facilities within a RRAP project, and from project to project. For example, IP officials told us that each RRAP project is difficult to measure because each focuses on unique assets within a unique geographic area or region.
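The asset-level figure the tracker is described as capturing, the percentage of assessed assets that have made or planned an improvement after 6 and 12 months, amounts to a simple rate calculation, sketched below. The field names and record structure here are assumptions for illustration only, not the Findings Tracker’s actual format.

```python
# Illustrative sketch of the improvement-rate figure the RRAP Findings
# Tracker is described as capturing. Field names ("followup_months",
# "improved", "improvement_planned") are hypothetical.

def improvement_rate(assets, months):
    """Percentage of assessed assets that made or planned an
    improvement, among those followed up at the given interval
    (6 or 12 months). Returns 0.0 when no assets qualify."""
    followed_up = [a for a in assets if a.get("followup_months", 0) >= months]
    if not followed_up:
        return 0.0
    improved = sum(
        1 for a in followed_up
        if a.get("improved") or a.get("improvement_planned")
    )
    return 100.0 * improved / len(followed_up)
```

As the report notes, participation in the follow-up is voluntary, so a rate computed this way reflects only the assets whose owners and operators chose to respond.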
For example, our reviews of RRAP reports showed one RRAP project might focus on commercial facilities, such as stadiums and arenas in one urban area, while another project might focus on a shopping district or an urban mall in another. Similarly, a transportation RRAP project in one region may focus on roadways and bridges, while a project in a different region might focus on waterways. IP officials added that participation in a RRAP project is voluntary, as is participation in the completion of the RRAP Findings Tracker. Therefore, the ability to develop measures that represent assets in a region could hinge on the willingness of CI stakeholders, including facility owners and operators, to participate. IP officials further explained that, given the diversity of assets and regions covered by individual RRAP projects, it could also be challenging to link key RRAP findings and subsequent actions within projects. For example, one RRAP project may identify a planning shortfall, leading to a resilience-enhancing option calling for the creation of a plan. If the affected stakeholder or stakeholders subsequently create such a plan, IP could note that an action or actions were taken toward addressing a key finding, but it would be unable to assess whether the plan addresses the key finding adequately until it was implemented and tested through an exercise or real-world emergency. Reaching that next step may take years, according to officials. Officials also stated that it might be difficult to develop measures of key findings across RRAP projects. Whereas a key finding of one RRAP project might focus on the development of a regional plan as discussed above, a key finding of another might focus on prioritizing the distribution of resources, such as fuel, to ensure that emergency services can remain viable during a hurricane or earthquake.
A separate RRAP project might have a key finding that electrical power is provided by a single supplier, leaving a region vulnerable to a single point of failure. We recognize that developing performance measures among and across RRAP projects could be challenging moving forward. We further recognize that the information generated through the administration of the RRAP Findings Tracker with RRAP project primary stakeholders (e.g., states) may provide a foundation for DHS’s development of RRAP performance measures. However, DHS could better position itself to gain insights into a project’s effects if it were to develop a mechanism to assess whether changes made at individual facilities are linked to or influenced by participation in a RRAP project. One approach for doing so could entail IP revising its security survey and vulnerability assessment follow-up process at individual facilities, including follow-ups at facilities that participated in RRAP projects, to gather and analyze data on the extent to which participation in a RRAP project influenced owners and operators to make related resilience enhancements. More specifically, IP officials stated earlier that they did not believe it was possible to link security and resilience enhancements made at facilities that participated in RRAP projects to RRAP project participation. However, currently the PSA does not specifically ask facility owners and operators whether participation in the RRAP project influenced their enhancement decisions. Developing a mechanism—such as revising the security survey and vulnerability assessment follow-up tool—to ascertain whether changes made at individual facilities are linked to or influenced by findings in RRAP projects could provide IP valuable information on individual facility efforts to address key RRAP project findings and how any enhancements are linked to the RRAP project.
Doing so would also enable IP to compare facilities that participated in a RRAP project with those that did not and provide a basis for assessing why RRAP participation may or may not have prompted changes at a facility, thereby providing a building block for measuring IP’s performance and insights into the effect a RRAP project may have on facility resilience. This would also be consistent with the NIPP, which states that the use of performance metrics provides a basis for DHS to establish accountability, document actual performance, promote effective management, and provide feedback to decision makers. Gathering data on the extent to which participation in a RRAP project influenced facility enhancements might also provide DHS valuable information about the results of its efforts, consistent with the views of PSAs who coordinate RRAP projects among stakeholders in particular regions. For example, 6 of the 10 PSAs we interviewed who had participated in RRAP projects where RRAP reports were issued expressed the belief that facilities that participated in the RRAP are more likely to have made improvements that increased security or resilience than other facilities that were not part of a RRAP project, but had undergone a security survey or assessment. These PSAs said that they believed this would occur because facilities participating in RRAP projects are able to see how their own operations affect the security and resilience of other facilities within the region. IP officials stated that they agreed that understanding whether RRAP participation had an effect on whether enhancements were made at an individual facility could provide useful information to the program. 
By assessing the linkage between the actions of individual facilities and the results of a RRAP project, DHS would also have a basis to begin to explore the effect of a RRAP project on facility management and operations, especially since RRAP projects are intended to focus on dependencies and interdependencies among facilities in a particular region. IP has taken important actions to standardize the selection process for RRAP project locations. It has also worked with state stakeholders to better communicate the scope of projects, consider how it can share resilience information with CI partners, and gather information on CI partner actions to enhance resilience after the RRAP project is completed. However, further actions could strengthen these endeavors. First, with regard to the process for selecting RRAP project locations, IP has developed criteria and a process for selecting project candidates, but it has not fully documented why some projects are recommended over others. Documenting why specific RRAP selections were or were not recommended would be consistent with Standards for Internal Control in the Federal Government, and would provide IP managers and others responsible for overseeing the program valuable insights into why one RRAP project was selected over another, particularly among proposals with the same score that appear equally feasible and worthy. Furthermore, maintaining documentation about reasons why projects were or were not recommended would also provide IP a basis for defending its selections or responding to queries about them, particularly given the desirability of the program among the states and budgetary constraints facing states and other potential RRAP stakeholders. 
With documentation on why projects were or were not recommended and selected, DHS would be better positioned to respond to queries about project selections from potential RRAP stakeholders, particularly if senior managers or staff currently involved in the program move to other positions and new managers or staff do not have records about key decisions. Second, consistent with the NIPP, IP has taken action to establish an approach for conducting follow-up surveys at facilities that have undergone security surveys and vulnerability assessments—both those that participated in RRAP projects and those that did not—to document changes the facilities make that affect their resilience. Also, IP has taken preliminary steps, via its RRAP Findings Tracker, to gain insights into primary stakeholder efforts to enhance resilience in the regions where RRAP projects have been performed. We recognize that IP faces challenges developing performance measures to gauge results among and across RRAP projects; nevertheless, IP could benefit from assessing how participation in a RRAP project may or may not influence change. Specifically, although the RRAP Findings Tracker may provide a foundation for IP’s overall development of RRAP performance measures, IP could develop a mechanism to assess whether changes made at individual facilities are linked to or influenced by participation in a RRAP project. One such mechanism could entail IP revising its security survey and vulnerability assessment follow-up tool, which is used to query all facilities that have participated in these surveys and assessments— regardless of whether they participated in a RRAP project. Doing so would enable IP to compare the extent to which facilities that participated in a RRAP project made enhancements related to DHS security surveys and assessments with those that did not participate in a RRAP project. 
This comparison could serve as a building block for measuring IP’s efforts to conduct RRAP projects, thereby providing an avenue to use performance metrics to establish accountability, document actual performance, promote effective management, and provide feedback to decision makers as stated in the NIPP. It would also provide valuable insights on individual facility efforts to address key RRAP findings, and give IP a basis for determining how those findings may have affected facility resilience, particularly as it relates to facility dependence and interdependence. To help ensure that DHS is taking steps to strengthen the management of RRAP projects and the program in general, we recommend that the Assistant Secretary for Infrastructure Protection, Department of Homeland Security, take the following two actions: document decisions made with regard to recommendations about individual RRAP projects to provide insights into why one project was recommended over another and assurance that recommendations among equally feasible proposals are defensible, and develop a mechanism to assess the extent to which individual projects influenced participants to make RRAP-related enhancements, such as revising the security and vulnerability assessment follow-up tool to query facilities that participated in RRAP projects on the extent to which any resilience improvements made are due to participation in the RRAP. We provided a draft of this report to the Secretary of Homeland Security for review and comment. DHS provided written comments, which are summarized below and reprinted in appendix III. DHS agreed with both recommendations and discussed plans to address one of them. DHS also provided technical comments, which we incorporated as appropriate. 
With regard to the first recommendation, that DHS document decisions made with regard to recommendations about individual projects, DHS concurred, stating that the Office of Infrastructure Protection (IP) will develop a mechanism to more comprehensively document the decision-making process and justifications that lead to the selection of each project. DHS stated that it estimates that it will complete this action as of September 30, 2014, for projects in the next RRAP cycle—that is, projects to be conducted in fiscal year 2015. With regard to the second recommendation, that DHS develop a mechanism, such as revising the security survey and vulnerability assessment follow-up tool, to assess the extent to which individual projects influenced participants to make RRAP-related enhancements, DHS also concurred. In its written comments, DHS agreed that it would be insightful to understand whether the implementation rate of security and resilience enhancements at facilities differs between those receiving an assessment as part of a RRAP, and those receiving an assessment unrelated to this program. After we provided a draft of this report to DHS for review and comment, IP officials raised concerns that the recommendation as originally worded did not provide them the flexibility they needed to consider multiple alternatives to gain insights about RRAP-related enhancements. For example, and as noted in the written comments, facilities participate in the RRAP in many ways, and surveys and assessments are but one option offered to facilities in a focus area. While we continue to see benefits to revising the security survey and vulnerability assessment follow-up tool, as discussed in the report, we modified the recommendation to acknowledge IP’s concerns about considering other possible mechanisms. 
In its written comments, DHS stated that IP would review alternatives, including the one we discussed, and would provide additional details on how it will address this recommendation in DHS’s written statement of the actions taken on our recommendations 60 calendar days after the receipt of the final report. DHS stated that its estimated completion date for action on this recommendation is to be determined. DHS also raised two concerns with the report. First, while concurring with our second recommendation, DHS stated that it is disappointed that the draft report did not have a more extensive discussion on the overall success and effectiveness of the RRAP to identify and address regional security and resilience gaps. DHS noted that since the RRAP’s inception, projects have been conducted in regions throughout the nation and have focused on sectors such as energy, transportation, commercial facilities, water, and food and agriculture. DHS stated that through the RRAP, DHS has provided unique technical expertise to its stakeholders that helps guide their strategic investments in equipment, planning, training, and resources to enhance the resilience and protection of facilities, surrounding communities, and entire regions. We believe that the report did address these issues sufficiently. As noted in the report, IP has taken important actions to (1) standardize the selection process for RRAP project locations, (2) work with state stakeholders to better communicate the scope of projects and consider how it can share resilience information with CI partners, and (3) gather information on CI partner actions to enhance resilience after the RRAP project is completed. 
Nonetheless, the NIPP states that the use of performance measures is a critical step in the risk management process to enable DHS to objectively and quantitatively assess improvements in CI protection and provides a basis for DHS to document actual performance, promote effective management, and provide a feedback mechanism to decision makers. As discussed in the report, developing performance measures among and across RRAP projects could be challenging moving forward, but, absent these measures, neither we nor DHS is positioned to report on the overall success and effectiveness of the program. Hence, we recommended the development of such a mechanism to assess RRAP-related enhancements. Second, DHS stated that the draft report did not substantially discuss the significant evolution of the program from a 2009 pilot to a more mature program that is at the forefront of the evolving critical infrastructure security and resilience mission that is responsive to the needs of the federal government and its partners. We disagree and believe that the report sufficiently discusses the evolution of the program, particularly the evolution of DHS’s process for selecting project locations as well as changes DHS has made to address the concerns of stakeholders based on their early experiences with RRAP. We are sending copies of this report to the Secretary of Homeland Security, the Under Secretary for the National Protection Programs Directorate, and interested congressional committees. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-8777 or caldwells@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. 
This appendix provides information on the 16 critical infrastructure (CI) sectors and the federal agencies responsible for sector security. The National Infrastructure Protection Plan (NIPP) outlines the roles and responsibilities of the Department of Homeland Security (DHS) and its partners—including other federal agencies. Within the NIPP framework, DHS is responsible for leading and coordinating the overall national effort to enhance protection of the 16 critical infrastructure sectors. The NIPP and Presidential Policy Directive/PPD-21 assign responsibility for critical infrastructure sectors to sector-specific agencies (SSA). As an SSA, DHS has direct responsibility for leading, integrating, and coordinating efforts of sector partners to protect 10 of the 16 critical infrastructure sectors. The remaining six sectors are coordinated by seven other federal agencies. Table 2 lists the SSAs and their sectors. This appendix provides the criteria DHS’s Office of Infrastructure Protection (IP) uses to assess RRAP proposals for consideration for selection as RRAP projects. IP officials stated that the criteria were developed based on feedback received from infrastructure protection partners such as the State, Local, Tribal and Territorial Government Coordinating Council and from lessons learned conducting RRAP projects. IP officials said that they asked protective security advisors (PSA) and PSA regional directors who had previously conducted Regional Resilience Assessment Program (RRAP) projects to review the criteria before they were finalized to provide assurance that the criteria reflected lessons learned. 
As shown in table 3, our review of IP’s criteria shows that they generally focus on the feasibility of the overall proposed project; partnering, such as whether the project has clear sponsorship and willing participants; broad applicability, such as the potential to generate resilience-related findings that can be applied to other locations; and risk-based factors, including the concentration of critical infrastructure in the region and the likelihood that the project will produce resilience-related findings. In addition to the contact named above, John F. Mortin, Assistant Director, and Anthony J. DeFrank, Analyst-in-Charge, managed this assignment. Chuck Bausell, Orlando Copeland, Katherine M. Davis, Justin Dunleavy, Aryn Ehlow, Michele C. Fejfar, Eric Hauswirth, and Thomas F. Lombardi made significant contributions to the work. Critical Infrastructure Protection: DHS List of Priority Assets Needs to Be Validated and Reported to Congress. GAO-13-296. Washington, D.C.: March 25, 2013. Critical Infrastructure Protection: Preliminary Observations on DHS Efforts to Assess Chemical Security Risk and Gather Feedback on Facility Outreach. GAO-13-412T. Washington, D.C.: March 14, 2013. Critical Infrastructure Protection: An Implementation Strategy Could Advance DHS’s Coordination of Resilience Efforts across Ports and Other Infrastructure. GAO-13-11. Washington, D.C.: October 25, 2012. Critical Infrastructure Protection: Summary of DHS Actions to Better Manage Its Chemical Security Program. GAO-12-1044T. Washington, D.C.: September 20, 2012. Critical Infrastructure Protection: DHS Is Taking Action to Better Manage Its Chemical Security Program, but It Is Too Early to Assess Results. GAO-12-567T. Washington, D.C.: September 11, 2012. Critical Infrastructure: DHS Needs to Refocus Its Efforts to Lead the Government Facilities Sector. GAO-12-852. Washington, D.C.: August 13, 2012. 
Critical Infrastructure Protection: DHS Is Taking Action to Better Manage Its Chemical Security Program, but It Is Too Early to Assess Results. GAO-12-515T. Washington, D.C.: July 26, 2012. Critical Infrastructure Protection: DHS Could Better Manage Security Surveys and Vulnerability Assessments. GAO-12-378. Washington, D.C.: May 31, 2012. Critical Infrastructure Protection: DHS Has Taken Action Designed to Identify and Address Overlaps and Gaps in Critical Infrastructure Security Activities. GAO-11-537R. Washington, D.C.: May 19, 2011. Critical Infrastructure Protection: DHS Efforts to Assess and Promote Resiliency Are Evolving but Program Management Could Be Strengthened. GAO-10-772. Washington, D.C.: September 23, 2010. Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010. The Department of Homeland Security’s (DHS) Critical Infrastructure Protection Cost-Benefit Report. GAO-09-654R. Washington, D.C.: June 26, 2009. Information Technology: Federal Laws, Regulations, and Mandatory Standards to Securing Private Sector Information Technology Systems and Data in Critical Infrastructure Sectors. GAO-08-1075R. Washington, D.C.: September 16, 2008. Risk Management: Strengthening the Use of Risk Management Principles in Homeland Security. GAO-08-904T. Washington, D.C.: June 25, 2008. Critical Infrastructure: Sector Plans Complete and Sector Councils Evolving. GAO-07-1075T. Washington, D.C.: July 12, 2007. Critical Infrastructure Protection: Sector Plans and Sector Councils Continue to Evolve. GAO-07-706R. Washington, D.C.: July 10, 2007. Critical Infrastructure: Challenges Remain in Protecting Key Sectors. GAO-07-626T. Washington, D.C.: March 20, 2007. Homeland Security: Progress Has Been Made to Address the Vulnerabilities Exposed by 9/11, but Continued Federal Action Is Needed to Further Mitigate Security Risks. GAO-07-375. 
Washington, D.C.: January 24, 2007. Critical Infrastructure Protection: Progress Coordinating Government and Private Sector Efforts Varies by Sectors’ Characteristics. GAO-07-39. Washington, D.C.: October 16, 2006. Information Sharing: DHS Should Take Steps to Encourage More Widespread Use of Its Program to Protect and Share Critical Infrastructure Information. GAO-06-383. Washington, D.C.: April 17, 2006. Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005.
In October 2012, Hurricane Sandy caused widespread damage across multiple states. Further, threats to CI are not limited to natural disasters, as demonstrated by the terrorist attacks of September 11, 2001. In 2009, DHS initiated the RRAP, a voluntary program intended to assess regional resilience of CI. RRAP projects are to analyze a region's ability to adapt to changing conditions, and prepare for, withstand, and rapidly recover from disruptions. GAO was asked to examine DHS's efforts to manage the program. GAO assessed the extent to which DHS (1) developed criteria for identifying RRAP project locations, (2) worked with states to conduct RRAP projects and share information with CI partners to promote resilience, and (3) is positioned to measure results associated with RRAP projects. GAO reviewed applicable laws, DHS policies and procedures, and all 17 RRAP reports completed since the program inception in 2009. GAO also interviewed officials from 10 states with issued RRAP reports, DHS officials who conducted 20 RRAP projects from 2009 through 2012, and other federal officials representing nine departments and agencies involved in RRAP projects. While the results of the interviews are not generalizable, they provided insight. The Department of Homeland Security (DHS) has developed nine criteria that consider various factors--including the willingness of various stakeholders, such as asset owners and operators, to participate and concentrations of high-risk critical infrastructure--when identifying possible locations for Regional Resiliency Assessment Program (RRAP) projects. According to DHS officials, final project selections are then made from a list of possible locations based on factors including geographic distribution and DHS priorities, among other considerations. However, it is unclear why some RRAP projects are recommended over others because DHS does not fully document why these decisions are made. 
Federal internal control standards call for agencies to promptly record and clearly document transactions and significant events. Because DHS's selection process identifies a greater number of potential projects than DHS has the resources to perform, documenting why final selections are made would help ensure accountability, enabling DHS to provide evidence of its decision making. DHS has worked with states to improve the process for conducting RRAP projects and is considering an approach for sharing resilience information with its critical infrastructure (CI) partners, including federal, state, local, and tribal officials. Since 2011, DHS has worked with states to improve the process for conducting RRAP projects, including more clearly defining the scope of projects. According to DHS officials, these efforts have been viewed favorably by states. DHS is currently considering an approach to more widely share resilience lessons learned with its CI partners, including a possible resiliency product or products that draw from completed RRAP projects. DHS officials stated that they engage CI partners in meetings and conferences where partners' resilience information needs are discussed and have been incorporating this input into their efforts to develop a resilience information sharing approach. DHS has taken action to measure efforts to enhance security and resilience among facilities that participate in the RRAP, but faces challenges measuring results associated with RRAP projects. DHS performs security and vulnerability assessments at individual CI assets that participate in RRAP projects as well as those that do not participate. Consistent with the National Infrastructure Protection Plan, DHS also performs periodic follow-ups among asset owners and operators that participate in these assessments with the intent of measuring their efforts to make enhancements arising out of these surveys and assessments. 
However, DHS does not measure how enhancements made at individual assets that participate in a RRAP project contribute to the overall results of the project. DHS officials stated that they face challenges measuring performance within and across RRAP projects because of the unique characteristics of each, including geographic diversity and differences among assets within projects. GAO recognizes that measuring performance within and among RRAP projects could be challenging, but DHS could better position itself to gain insights into projects' effects if it were to develop a mechanism to compare facilities that have participated in a RRAP project with those that have not, thus establishing building blocks for measuring its efforts to conduct RRAP projects. One approach could entail using DHS's assessment follow-up process to gather and analyze data to assess whether participation in a RRAP project influenced owners and operators to make related resilience enhancements. GAO recommends that DHS document final RRAP selections and develop a mechanism to measure whether RRAP participation influences facilities to make RRAP-related enhancements. DHS concurred with the recommendations.
As has been reported by many researchers, some Gulf War veterans developed illnesses that could not be diagnosed or defined and for which other causes could not be specifically identified. These illnesses have been attributed to many sources, including a large number of unusual environmental hazards found in the Gulf. The Congress enacted the Persian Gulf War Veterans’ Benefits Act (P.L. 103-446, Nov. 2, 1994) which, among other things, allowed VA to pay disability compensation to veterans suffering from undiagnosed illnesses attributed to their service in the Persian Gulf. Compensable conditions include but are not limited to abnormal weight loss, cardiovascular symptoms, fatigue, gastrointestinal symptoms, headaches, joint and muscle pains, menstrual disorders, neurologic symptoms, neuropsychological symptoms, skin disorders, respiratory disorders, and sleep disturbances. Under the procedures that VA established to process undiagnosed illness claims, veterans submit completed claim forms to a VA regional office (VARO). Each VARO is responsible for fully developing the claims. VAROs obtain medical records from the military services; arrange for a VA medical examination; and obtain evidence from other sources, such as private health care providers or knowledgeable lay persons, if the veteran identifies such sources. Once the claim is developed, the claims file is transferred to one of the four area processing offices that VA has designated for processing undiagnosed illness claims. As mentioned earlier, over 700,000 men and women served in the Persian Gulf War. VA reported that as of February 1996, it had processed 7,845 undiagnosed illness claims and had identified an additional 6,655 claims that were being evaluated for undiagnosed illnesses. Of the processed claims, VA had denied compensation for undiagnosed illness to 7,424 veterans—a denial rate of 95 percent. In February 1995, VA issued a regulation (38 C.F.R. 
3.317) that specifies the evidence required before compensation can be paid for an undiagnosed illness claim. Under the regulation, veterans must provide objective indications of a chronic disability. Objective indications include both signs—evidence perceptible to the examining physician—and other nonmedical indications that are capable of independent verification. In the final rule, VA explained that nonmedical indicators of a disabling illness include but are not limited to such circumstances or events as (1) time lost from work; (2) evidence that a veteran has sought medical treatment for his or her symptoms; and (3) evidence affirming changes in the veteran’s appearance, physical abilities, or mental or emotional attitude. The evidence requirements contained in the regulation are consistent with the Joint Explanatory Statement that accompanied the Veterans’ Benefits Improvements Act of 1994. According to the VA regulation, a veteran can only be compensated for disabilities caused by undiagnosed illnesses that (1) manifest themselves during service in the Gulf War or (2) arise within 2 years of a veteran’s departure from the Persian Gulf. If the illness arose after the veteran left the Gulf, the veteran must be at least 10-percent disabled to be compensated. In addition, the veteran must demonstrate that the disabling condition is chronic—present for 6 months or longer. In some cases, lay statements can provide critical support for a veteran’s undiagnosed illness claim. As stated in the VA claims processing manual, lay statements may be especially important in cases where an undiagnosed illness is manifest solely by symptoms that the veteran reports and that would, therefore, not be subject to verification through medical examination. Examples of such symptoms include headaches and fatigue. 
According to VA, lay statements from individuals who establish that they are able from personal experience to make their observations or statements will be considered as evidence if they support the conclusion that a disability exists. While veterans are ultimately responsible for proving their claims, VA is required by statute to assist the veteran in developing facts to prove the claim. The U.S. Court of Veterans Appeals has also held in its decisions that VA has a duty to assist veterans with proving their claims and is required to obtain relevant facts from sources identified by claimants. A VA letter dated February 15, 1995, instructed all VA regional offices that “if a veteran alleges that a disability began after military service, request objective evidence (lay or medical) to establish that fact.” Many types of evidence can be used to support undiagnosed illness claims. The denied claims that we reviewed contained primarily service medical records and VA medical examinations. About 15 percent of the claims included medical records from private physicians seen by the veterans after leaving military service and less than 3 percent contained nonmedical evidence related to an undiagnosed illness, such as lay statements and records showing lost time from work. The granted claims that we reviewed also contained primarily service medical records and VA examinations. In these cases, however, veterans were usually able to provide VA with a history, after leaving the Persian Gulf, of treatment for the granted undiagnosed condition. Some granted claims were supported with nonmedical evidence, such as a sworn statement from an individual with knowledge of the veteran’s disability. Many of the veterans evaluated for undiagnosed illnesses are also examined for other diagnosable service-connected illnesses and injuries. While VA does not often grant compensation for undiagnosed conditions, these veterans often receive compensation for diagnosable injuries or illnesses. 
Of the cases that we reviewed where the claimed undiagnosed illness(es) had been denied, about 60 percent of the veterans had been granted compensation for at least one service-connected diagnosable condition, such as hypertension, hearing loss, or knee disorders. About one-half of these veterans were granted a disability payment; the remainder, with minor impairments, are eligible for free care for their conditions through the VA medical system. The lack of evidence to support undiagnosed illness claims may in part be the result of poor VA procedures to elicit such information, as the following examples indicate. In late 1995, VA’s central office conducted a review of 203 completed undiagnosed illness claims. VA found that additional specialty examinations should have been ordered in 23 cases (about 11 percent). At the time of our work, VA stated that the required examinations would be scheduled and the veterans’ cases would be reconsidered based on the additional evidence. In 5 of the 79 denied cases that we reviewed, VA had not requested records from physicians who had treated the veteran since leaving military service. For one case, VA officials stated that an attempt was made to obtain the evidence but the doctor failed to respond. In three cases, officials stated that the medical records were not obtained due to error. According to area processing office officials, private medical records were not obtained in the other case because the veteran visited the doctor after the presumptive period. Although VA recognizes the importance of nonmedical objective evidence—for example, work records and lay statements from knowledgeable individuals—in supporting some undiagnosed illness claims, VA’s standard compensation claim form does not request such evidence. The form does ask veterans to identify individuals who know about the veteran’s medical treatment while in the service; in many cases, however, the claimed undiagnosed illness was not treated in the service. 
According to VA officials, the form was designed to obtain evidence about typical illnesses and injuries that usually occur while veterans are in the service as opposed to Persian Gulf illnesses that can become manifest after veterans leave military service. While the VA form does not specifically request nonmedical information, about 15 percent of the veterans did provide VA with the names of individuals who were knowledgeable about their claimed illness. However, VA did not obtain statements from these individuals. Officials at the area processing offices cited several reasons why lay statements were not obtained or used. These reasons include the veteran’s failure to provide a complete address for the knowledgeable individual and that the evidence fell outside the presumptive period. In one case, an area processing office official stated that VA should have obtained the statements. While the head of the claims processing unit at one area processing office questioned the value of lay statements and whether VA was responsible for obtaining them, VA central office officials acknowledged that VA was responsible for obtaining lay statements and a central office official told us that statements would be obtained for the cases that we identified and that the claims would be reconsidered after the statements were obtained. After the Congress passed legislation allowing compensation for undiagnosed illnesses, VA reexamined all completed Gulf War claim files to determine if compensation was warranted. In some of these cases that we reviewed, there was no indication that VA had informed the veteran after the legislation about the specific types of medical and nonmedical evidence that could be submitted to support the claim. According to VA officials, VA had decided to provide this information to the veterans on a case-by-case basis. 
VA's central office acknowledged that the existing procedures to develop undiagnosed illness claims are not adequate and that area processing offices could do a better job of requesting both medical and nonmedical evidence from veterans in support of undiagnosed illness claims. VA has taken a step to provide better information to veterans: it has developed a letter that clearly states the types of medical and nonmedical evidence that can be used to support these claims and is now sending this letter to all veterans who file undiagnosed illness claims. In the denied cases that we reviewed, even when VA followed all appropriate procedures to develop claims, the veterans did not always provide the necessary evidence that would allow their claims to be granted. Only 30 percent of the veterans in the denied cases that we reviewed provided evidence that they had sought medical treatment for the claimed undiagnosed condition after leaving the service—some veterans said that they could not afford medical treatment from private providers while others indicated that they were too busy to see a physician. About 40 percent of the veterans in the denied cases that we reviewed were informed that their denied undiagnosed illness claims would be reconsidered if additional evidence was submitted, and VA thoroughly described the evidence that would be acceptable. However, only 4 percent of these cases included any additional information from the veteran. Twenty-three percent of the veterans in the denied cases that we reviewed did not appear for all the scheduled examinations. As a result, VA was unable to identify and thoroughly evaluate the claimed disabling conditions. VA does not always correctly categorize the reasons that undiagnosed illness claims are denied. VA requires each of its area processing offices to record the reason that each undiagnosed illness claim was denied. 
Reported results are compiled and presented periodically to the Congress. According to VA, most claims are denied because the claimed disability did not become manifest on active duty in the Persian Gulf or during the 2-year presumptive period. Table 1 shows the latest data submitted by VA. Of the denied claims that we reviewed, most—68 percent—had been categorized by VA as being denied because the claimed illness did not become manifest on active duty or during the presumptive period. However, in most of these cases, VA had explained in its decision letter to the veteran that insufficient evidence was presented to demonstrate that the claimed condition existed, was chronic, or was disabling to a compensable degree of 10 percent or more. By failing to appropriately categorize denied claims, VA may be creating the impression that many veterans with otherwise compensable disabilities do not receive benefits solely as a result of the presumptive period. Our review suggests that if the presumptive period were extended, VA would still be required to deny the claims unless the veteran provided additional evidence regarding the chronic nature or disabling impact of the illness. VA officials acknowledged that their current reports could be misinterpreted. They told us that VA will assess the extent of the problem and take the necessary corrective action. We obtained comments on a draft of this report from VA officials, including the Deputy Under Secretary for Benefits. The officials generally agreed with our findings and noted that the agency is taking additional steps to address the concerns that we raised. Specifically, VA officials reiterated their commitment to providing veterans with better information regarding acceptable evidence to support undiagnosed illness claims and to more accurately categorizing the reasons that claims are denied. 
The officials told us that VA's central office will also undertake additional claims reviews to ensure that field offices are following all appropriate procedures. VA's comments included some technical changes, primarily for clarification, which we incorporated in this report as appropriate. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies to the Chairman, Senate Committee on Veterans' Affairs; the Secretary of Veterans Affairs; and other interested parties. This work was performed under the direction of Irene Chu, Assistant Director, Health Care Delivery and Quality Issues. If you or your staff have any questions, please contact Ms. Chu or me on (202) 512-7101. Other major contributors to this report are listed in appendix II. To identify the evidence standards that VA established to process Persian Gulf War claims, we visited the VA central office in Washington, D.C., and two of the four area processing offices that VA designated as responsible for processing undiagnosed illness claims—Louisville, Kentucky, and Nashville, Tennessee (which together processed 72 percent of undiagnosed illness claims). We also conducted telephone discussions with officials at the other two area processing offices—Phoenix, Arizona, and Philadelphia, Pennsylvania. From all of these offices, we obtained pertinent documents and records. To obtain information about the undiagnosed illness disability compensation claims, we statistically sampled 79 of the 4,990 claims that VA had denied as of September 21, 1995. We randomly selected the claims from VA's database of completed Persian Gulf War claims. Our sample size provides a 90-percent confidence level that the characteristics of our sample match the total population of denied claims within a specified error rate. The error rate was no greater than 11 percent. 
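The sampling figures above can be checked with the standard margin-of-error formula for a proportion drawn from a finite population. The sketch below is ours, not GAO's: the worst-case proportion of 0.5, the two-sided z-value for 90-percent confidence, and the use of a finite population correction are all assumptions the report does not state.

```python
import math

# Figures from the report: 79 claims sampled from 4,990 denied claims,
# 90-percent confidence level, stated error rate no greater than 11 percent.
N = 4990          # population of denied claims
n = 79            # sample size
z = 1.645         # two-sided z-value for 90-percent confidence (assumption)
p = 0.5           # worst-case proportion (assumption)

standard_error = math.sqrt(p * (1 - p) / n)
fpc = math.sqrt((N - n) / (N - 1))            # finite population correction
margin_of_error = z * standard_error * fpc

print(f"margin of error: {margin_of_error:.3f}")  # about 0.092
```

Under these assumptions the computed margin of error is roughly 9.2 percent, consistent with the report's statement that the error rate was no greater than 11 percent.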
We also reviewed the claims files of 26 randomly selected veterans from the 273 whose claims for undiagnosed illnesses had been granted as of September 21, 1995. We selected four granted claims each from the Nashville, Louisville, and Philadelphia offices and 14 from the Phoenix office. We selected additional claims from the Phoenix office because it had processed 32 percent of all granted claims although it only processed 11 percent of all Persian Gulf claims. This was not a statistical sample; therefore, the results cannot be projected to the universe of granted claims. Instead, we reviewed these claims to allow a comparison with the denied claims. In conducting our samples we reviewed documents pertinent to our work, including the veterans' application forms; letters from VA to the veterans about additional evidence; medical examinations; and rating statements with the letters containing VA's decisions. Data about all Persian Gulf War illnesses and other information were abstracted from those documents and entered into a database for analysis. The purpose of our review of the denied and granted claim files was to identify the evidence contained therein and gain additional information on VA's reasons and bases for denying or granting the claims. We made no effort to assess the appropriateness of VA's decisions. We performed our review between August 1995 and March 1996 in accordance with generally accepted government auditing standards.
Richard Wade, Evaluator-in-Charge
Jon Chasson, Senior Evaluator
Robert DeRoy, Assistant Director, Data Analysis and Evaluation Support
Cynthia Forbes, Senior Evaluator
Michael O'Dell, Senior Social Science Analyst
Susan Poling, Assistant General Counsel
Pamela A. Scott, Communications Analyst
Pursuant to a congressional request, GAO reviewed the procedures the Department of Veterans Affairs (VA) uses to process Persian Gulf War undiagnosed illness claims. GAO found that: (1) before VA will provide benefits, veterans must provide it with evidence of a chronic disability and verifiable evidence of time lost from work, prior medical treatment, or changes in appearance, physical abilities, or psychological condition; (2) both denied and approved claims consist primarily of service medical records and VA medical examinations, but approved claims usually include an independent medical history and sometimes include nonmedical evidence; (3) denied claims lacked sufficient evidence because of poor VA procedures and veterans' failure to collect relevant information; and (4) while VA reports that most denied claims were denied because the alleged disability did not become evident during active duty or the subsequent 2-year presumptive period, it stated in denial letters to veterans that their claims lacked sufficient evidence.
The Privacy Rule addresses the use and disclosure of individuals' health information and establishes individuals' rights to obtain and control access to this information. Specifically, the rule covers "protected health information," defined as individually identifiable health information that is transmitted or maintained in any form. It applies to "covered entities," defined as health plans, health care clearinghouses, and health care providers that transmit information electronically with respect to certain transactions. The protections under the Privacy Rule extend to all individuals, regardless of the state in which they live or work, but the rule does not preempt state privacy laws that are more stringent—that is, more protective of health information privacy. Under the Privacy Rule, a covered entity may use and disclose an individual's protected health information without obtaining the individual's authorization when the information is used for treatment, payment, or health care operations. Protected health information may also be disclosed without an individual's authorization for such purposes as certain public health and law enforcement activities, and judicial and administrative proceedings, provided certain conditions are met. In addition, an individual's authorization is not required for disclosures for research purposes if a waiver of authorization, under defined criteria, is obtained from an institutional review board (IRB) or a privacy board. Except where the rule specifically allows or requires a use or disclosure without an authorization, the individual's written authorization must be obtained; for example, authorization is generally required for disclosures to life insurers or employers. In addition, the rule contains specific provisions that generally require an individual's authorization for the use or disclosure of psychotherapy notes or of protected health information for marketing purposes. 
In many circumstances, a provider or health plan can choose not to disclose information, regardless of whether an individual’s authorization is required. The Privacy Rule allows covered entities to use their discretion in deciding whether to disclose protected health information for many types of disclosures, such as those to family and friends, public health authorities, and health researchers. The Privacy Rule provides individuals with a number of rights regarding access to, and use of, their health information. Specifically, the rule provides the following: Access to and amendment of health information. Individuals have the right to inspect and copy their protected health information and to request amendments of their records. Notice of privacy practices. Individuals generally have a right to written notice of the uses and disclosures of their health information that may be made by a covered entity as well as the individual’s rights and the entity’s duties with respect to that information. Accounting for disclosures. Individuals generally have the right to request and receive a listing of disclosures of their protected health information that is shared with others for purposes other than treatment, payment, or health care operations. Complaints. In addition to being able to complain directly to a covered entity, any person who believes a health care provider, health plan, or clearinghouse is not complying with the Privacy Rule may file a complaint with the Secretary of HHS. Covered entities are required to comply with Privacy Rule provisions and follow various procedures. They must do the following: Develop policies and procedures for protecting health information. A covered entity must maintain administrative, technical, and physical safeguards. Among other requirements, a covered entity must also designate a privacy official, train its employees on the entity’s privacy policies, and develop procedures to receive and address complaints. 
Limit information used and disclosed to the minimum necessary. Covered entities must make reasonable efforts to limit their employees' access to identifiable health information to the minimum needed to do their jobs. When sharing protected health information with other entities (such as collection agencies and researchers), they must make reasonable efforts to limit the information disclosed to the minimum necessary to accomplish the purpose of the data request. However, providers may share the full medical record when the disclosure is for treatment purposes. Account for disclosures of protected health information. Upon request, covered entities must provide individuals with an accounting of disclosures of their protected health information made in the preceding 6 years. This requirement applies to most disclosures other than those for treatment, payment, or operations purposes, including those that are mandated by law—such as certain disclosures to public health entities and law enforcement agencies. The accounting must include the date of each disclosure; the name and, if known, the address of the entity or person who received the information; a description of the information disclosed; and a statement of the purpose of the disclosure. Ensure that "downstream users" protect the privacy of health information by implementing business associate agreements. Covered entities must enter into a contract or other written agreement with any business associates with which they share protected health information for various purposes. A business associate performs certain functions or activities—such as claims processing and benefit management—on behalf of a covered entity involving the use or disclosure of individually identifiable health information. 
Business associate contracts must establish conditions and safeguards for uses and disclosures of identifiable health information and authorize termination of contracts if the covered entities determine that business associates have violated the agreements. The regulation establishes requirements that apply to both federally and privately funded research that seeks to use protected health information: Researchers may seek to obtain from covered entities health information without authorization if the data do not identify an individual and there is no reasonable basis to believe it could be used to identify an individual. Researchers must use one of three options to gain access to protected health information: obtain patient authorization, obtain a waiver of authorization by having their research protocol reviewed and approved by an IRB or privacy board, or use a limited data set provided by the covered entity. OCR has responsibility for implementing and enforcing the Privacy Rule as follows: Provide guidance. OCR is responsible for communicating policies contained in the Privacy Rule by issuing guidance to answer common questions and clarify certain provisions. Mechanisms by which OCR makes information available to various entities on its Web site include links to guidance documents as well as answers to frequently asked questions (FAQs). In addition, OCR has provided guidance through roundtable discussions, answers to written inquiries, an automated e-mail notification system, a toll-free hotline for questions about the Privacy Rule, and presentations and telephone conference calls. Administer a complaint process. OCR is responsible for investigating complaints received from health care consumers. Enforce compliance. OCR may provide covered entities with technical assistance to help them comply voluntarily with the Privacy Rule. 
OCR investigates complaints, may conduct reviews to determine if covered entities are in compliance, and attempts to resolve issues of noncompliance through informal means. Violators are subject to civil and criminal penalties. OCR administers the civil monetary penalties, while the Department of Justice administers criminal penalties for knowingly disclosing or obtaining identifiable health information in violation of HIPAA. Organizations representing providers and health plans stated that implementation of the Privacy Rule was smoother than expected over the past year and that some initial confusion has abated. Although many provider and health plan organizations reported dealing with various ongoing problems, they noted that two provisions were particularly burdensome: the requirement to maintain a record of certain disclosures of patient information and the requirement to create business associate agreements with downstream users of protected health information. Several organizations suggested that OCR could take steps to facilitate compliance with these provisions. Some organizations we interviewed told us that the first year they were required to be compliant with the Privacy Rule was smoother than they had anticipated. The American Medical Association and the American Hospital Association stated that in general, they have heard relatively few negative reactions from their members during the past year. Many provisions were considered straightforward and relatively easy to implement, including developing the notice of privacy practices and limiting disclosures for marketing purposes. In addition, many provider, health plan, and consumer representatives reported that the Privacy Rule has increased provider awareness of, and sensitivity to, patient privacy issues, and new privacy procedures have become routine practice. 
For example, representatives from the American Health Information Management Association (AHIMA)—which assists providers with their management of protected health information—noted that the Privacy Rule has helped to make staff working for covered entities more aware of the flow of patient information. Organizations we interviewed also reported that some early confusion has subsided. Groups commented that initial confusion stemmed from challenges in understanding and implementing the Privacy Rule. The American Hospital Association, for example, stated that hospitals were initially concerned about the requirement to limit information disclosures to the “minimum necessary” but now understand that they can share the information needed to ensure that appropriate clinical care is provided to their patients. Representatives from the American Pharmacists’ Association (APhA) stated that members faced initial confusion implementing the Privacy Rule, but that pharmacies have since developed new standard procedures to address these issues. Representatives of the American Medical Association noted that after receiving and resolving many calls requesting clarification early in the year, it has since received few calls from its members related to the Privacy Rule. However, organizations also commented that some uncertainties and misunderstandings continue. For example, provider groups stated that some physicians and hospitals remain unclear about what type of information may be disclosed for law enforcement purposes. In addition, health plan representatives reported ongoing difficulties associated with knowing whether state laws prevail over the Privacy Rule. Despite these problems, AHIMA representatives told us that “the number of people talking about the ship sinking” because of the Privacy Rule has decreased. Overall, the organizations had mixed opinions about the extent to which OCR’s guidance facilitated implementation of the Privacy Rule. 
As of June 29, 2004, OCR has posted 223 FAQs and answers on its Web site. While some provider and health plan representatives reported that the OCR Web site—particularly the FAQs—was very helpful, others stated that the FAQs were not specific enough to explain certain vague or ambiguous Privacy Rule provisions. Furthermore, organizations we interviewed stated that various types of guidance offered by OCR—including roundtable discussions and guidance on particular provisions—would have been more helpful if they had been offered sooner. For example, representatives from the American Health Care Association (AHCA) stated that if they had received clarification and guidance from OCR earlier, they would have had fewer problems implementing the rule. Although provider and health plan representatives reported dealing with a variety of ongoing problems, we consistently heard from them that two provisions were especially burdensome. These were the provisions that require accounting for disclosures and business associate agreements. Most provider and health plan organizations we interviewed identified the requirement to account for certain disclosures as unnecessarily burdensome. These organizations reported that significant time and resources are needed to establish and maintain systems to track disclosures. For example, in hospitals, various departments keep patient information in separate systems that are not necessarily electronically linked. According to the Health Care Compliance Association, hospitals have had to revise systems to establish electronic links or have had to create manual tracking mechanisms. Similarly, representatives from America's Health Insurance Plans (AHIP) reported that many health plans or insurers generally keep information related to one patient in multiple systems—for example, separate systems for enrollment, claims payment, and customer service—making it difficult to track all information disclosures for that patient. 
In addition to difficulties experienced when tracking disclosures of protected health information, provider and health plan representatives also expressed concern about the volume of disclosures that must be tracked. They commented that frequent, diverse disclosures required by law add significantly to the volume of information that must be continually tracked. These include disclosures to public entities to maintain disease registries, vital statistics, and other health databases. For example, the Minnesota Department of Public Health identified over 50 state statutes in which health information may or must be released to specific state or local organizations, such as health departments, health licensing boards, and schools. Blue Cross Blue Shield Association (BCBSA) representatives told us that accounting for the disclosures of births and deaths to state health departments—required by state law—can be burdensome. They noted that some state laws require health plans to report information to the health department quarterly, while others require reporting information monthly. One organization we spoke with indicated that its members expect that complying with the provision to account for disclosures will become increasingly difficult, because they need to track these disclosures for 6 years to meet obligations under the Privacy Rule. Moreover, many organizations we interviewed questioned whether the Privacy Rule’s accounting provision generates much benefit for patients. These organizations reported that their members have received few or no requests from patients for an accounting of the disclosures of their protected health information. 
To somewhat reduce the burden of the requirement to account for disclosures, several organizations suggested that OCR modify the rule to require covered entities to inform patients in the privacy practices notice that when required by law, their information will be disclosed to public health organizations and law enforcement agencies. This modification would inform patients of disclosures required by law and would obviate the need to track these disclosures as they occur. Provider and health plan representatives reported that significant resources have been required to implement business associate agreements. These organizations commented that some of the burden associated with implementing this provision has stemmed from confusion and variation in determining which relationships with downstream entities require business associate agreements. The Medical Group Management Association (MGMA) stated that there is still uncertainty among its members and that it receives calls weekly about business associate agreements. APhA representatives attributed pharmacists’ difficulties determining which entities were business associates to the provision’s broad language and lack of adequate OCR guidance. Although the Privacy Rule provided for phased-in implementation of business associate agreement requirements to accommodate existing contracts, provider and health plan groups viewed the business associate agreements provision as very burdensome. Organizations we interviewed stated that some of their members have spent substantial amounts of time and money to develop thousands of business associate agreements with downstream users of protected health information, though they did not estimate specific amounts. Provider and health plan representatives reported that high costs have been associated with the need for legal counsel to negotiate and customize agreements with the multiple and various business associates. 
For example, BCBSA officials stated that some of their business associates have requested specific and sometimes “excessive” details in their agreements. They noted that business associates sometimes regard the agreements as an opportunity to include new provisions in their contracts that are unrelated to health privacy. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO), however, was able to successfully avoid these types of problems by including a standard business associate agreement as an addendum to applications for health care accreditation. As a result, it has had “excellent compliance and cooperation from accredited entities,” according to JCAHO representatives. In contrast, hospitals and other providers negotiating individually with business associates do not have similar leverage to compel the use of their particular agreements. Some organizations representing providers and health plans suggested that OCR provide more guidance to covered entities about when and how to enter into a business associate agreement. These organizations did not consider OCR’s existing guidance specific enough to assist providers and health plans with their agreements. APhA representatives stated that OCR’s guidance on business associate agreements has “led to more questions.” Organizations representing public health agencies, research entities, and patient advocates identified several areas in which efforts to apply the Privacy Rule have created new challenges. State and federal agencies reported having to take explicit action—including outreach efforts and changes in state law—to ensure that providers and health plans continue to report health information for public health activities. Researchers pointed to increased difficulty in obtaining patient data to conduct clinical or health services research. Patient advocates also identified obstacles in obtaining protected health information from providers and plans on behalf of their clients. 
Many of these challenges have been attributed to misunderstandings or confusion about how to interpret the rule in conjunction with other federal requirements. Most organizations found providers reluctant to share information without patient authorization when the rule permitted providers such discretion. The burden of accounting for disclosures and liability concerns were two reasons often cited for their reluctance. Organizations representing state public health officials told us that the Privacy Rule has hindered access to patient health information because some providers are reluctant to report to public health authorities. They experienced this difficulty despite the fact that under the Privacy Rule, providers and health plans may report to public health authorities without a patient’s authorization. This provision applies both where a law requires that certain health information—such as immunizations—be reported and where a public health agency requests that providers voluntarily report certain information. Public health organizations—such as the Council of State and Territorial Epidemiologists (CSTE) and CDC—reported several cases where obtaining patient health information has become more difficult. For example, a CSTE survey of 40 state and local programs designed to detect early signs of an epidemic found that 3 programs experienced “substantial” problems and 10 experienced “some” problems with obtaining health information from providers because of patient confidentiality concerns. In another example, a CDC representative reported facing obstacles to its surveillance of mental health disabilities. CDC’s efforts to collect data on individuals with certain mental health diagnoses met resistance from a large clinic and an inpatient mental health facility. As a result, CDC redesigned its study and had to approach different providers to participate in its data collection effort. 
Public health organizations attributed the difficulty in obtaining public health data from providers and plans to several factors. First, organizations we spoke with believed that providers have a disincentive to report data requested by public health agencies because of the provision to account for such disclosures. According to a state public health agency representative, the necessary tracking of disclosures has had a major impact on the state’s public health activities. This is consistent with concerns expressed by representatives of health plans, physicians, hospitals, and long-term care facilities about the burden of accounting for certain disclosures. Second, some providers were confused about the rule in that they believed they were permitted to report to public health agencies only when specifically required by federal or state law. A representative of CDC noted that in some states that did not mandate reporting of birth defect surveillance data, providers were initially unwilling to disclose this information. Third, state officials noted that providers are concerned legal action might be taken against them if they provide health information to public agencies. In CDC’s efforts to monitor mental health disabilities, a provider cited fear of liability associated with improper disclosure of protected health information as the reason it declined to participate. The organizations we interviewed also reported that state and federal health agencies have taken various actions to facilitate public health reporting. These include changes in state law, enhancements to the data collection process, and targeted Privacy Rule education. For example, Kentucky, Massachusetts, and North Dakota revised regulations and laws to clarify the circumstances for reporting to public health agencies without patient authorization, to make state law more consistent with the Privacy Rule, and to make certain public health reporting mandatory. 
CDC modified its survey procedures for a group of health care provider surveys, known as the National Health Care Survey, to help providers participate in the surveys under the Privacy Rule. The modifications included creating a document that providers can use to account for disclosures. The Minnesota Department of Health developed a series of fact sheets that clarify, for each of several different types of disease reporting, the specific authority in the Privacy Rule that allows reporting of data to the department without patient authorization. Like the health plan and provider groups, organizations representing public health agencies stated their desire that the Privacy Rule be amended to exempt reporting to public health agencies from the accounting provision and to instead have covered entities announce in the notice of privacy practices that this information will be disclosed as required by law. They contended that this approach would significantly reduce burden and remove the incentive that exists for providers to avoid disclosure of protected health data to public health agencies. Organizations representing health services and clinical researchers, such as Academy Health, the Association of American Medical Colleges, the Association of Clinical Research Organizations, and the National Cancer Advisory Board, reported that access to data for research has been delayed due to the varying approaches that some providers are taking to research requests under the Privacy Rule. They reported that research studies involving several sites of care have been delayed because of the different confidentiality requirements at study provider sites. Under the rule, researchers must obtain IRB or privacy board approval for their studies to waive the patient authorization requirement. 
HHS guidance states that a multisite research study need obtain approval from only one of the provider sites, but researchers’ organizations contend that often each provider institution requires that its IRB approve the waiver request. They noted that meeting the requirements of multiple IRB reviews can add substantial time to completing these studies. Under the Privacy Rule, researchers seeking authorization to use patient information must pursue their requests through the patients’ providers. Organizations reported that smaller providers with more limited administrative resources—such as some group practices and rural community hospitals—are reluctant to facilitate research studies because of misunderstanding of the rule and the added burden of contacting patients. Providers may also decline to participate because of concern about liability and because of the administrative burden of the accounting for disclosures requirement. For example, the Association of American Medical Colleges reported that some physicians no longer contribute data to research registries for cancer because of the additional resources required to track these disclosures. Another issue raised by several organizations we spoke with concerned the perceived conflicts between the Privacy Rule and federal regulation governing the protection of human subjects in research, known as the Common Rule. Research groups noted that differences between Privacy Rule and Common Rule requirements may cause confusion among researchers and covered entities and create unnecessary obstacles to research. For example, they stated that one difference relates to the scope of authority of informed consent or authorization: informed consent by patients under the Common Rule covers the research effort as a whole, including future disclosures from registry and data depositories. 
In contrast, they noted that a patient’s authorization or an IRB’s waiver of authorization covers only a specific research study and not future unspecified research under the Privacy Rule. Some national organizations expressed concern that providers and health plans may find it too confusing to comply with both the Privacy Rule and Common Rule requirements in responding to research proposals and requests. An AHIMA official reported that in some cases, providers and health plans “just threw up their hands and said they would just not give information to researchers.” CMS—a source of health services utilization data on Medicare beneficiaries—did not approve research requests for approximately 6 months while it developed new criteria and procedures for review of research requests to comply with the Privacy Rule. CMS now requires that researchers, who submit about 1,000 requests each year, provide more information about their study methodology and demonstrate that their research purpose is consistent with CMS’s mission. To comply with the Privacy Rule, CMS established a privacy board to review research requests. The board meets once a month, which lengthens this phase of CMS’s research approval process. The Association of American Medical Colleges, the Association of Clinical Research Organizations, and public health organizations such as the Association of State and Territorial Health Officials and CSTE reported that OCR’s guidance has not addressed some of the key misunderstandings and fundamental problems associated with the Privacy Rule’s impact on research. Ambiguity remains in determining whether a health survey activity is considered health care operations or research and whether a public health entity’s data request is part of its public health activities or is for research. These organizations stated their desire for OCR to address concerns through official revisions to the rule and issuance of federal guidance. 
They believe that compared with OCR’s efforts to provide information on its Web site, such official actions would “carry more weight” among providers, health plans, and research organizations. Organizations representing patient advocates reported that their members face new obstacles when seeking access to protected health information on behalf of patients. Such access problems, they say, are due to excessive paperwork, misunderstanding of the rule, and reluctance by providers and health plans to share information with legal aid attorneys, state ombudsmen, and others when the rule permits discretion. The rule gives providers and plans some latitude in exercising their professional judgment about when to disclose protected health information to individuals serving as patient advocates who are not “personal representatives” as defined by the Privacy Rule. Factors such as liability concerns and the burden of accounting for disclosures may contribute to their guarded disclosure practices. Representatives for Families USA’s Health Assistance Partnership and the National Health Law Program reported problems when lawyers or other patient advocates sought a client’s medical records. These organizations contend that some providers deny access and other providers delay or restrict access by requiring the use of a provider’s customized authorization form. They asserted that it can be cumbersome if a patient’s signature on multiple unique forms needs to be obtained from each provider. These organizations also noted that state ombudsmen services—telephonic programs that assist consumers, such as the elderly and disabled, with problems accessing health care—have had problems intervening on behalf of consumers over the telephone. Even after a consumer has given verbal approval, providers have declined to share information with the ombudsman in subsequent phone calls if the patient is not also on the telephone. 
In addition, AHIP, AHCA, and BCBSA reported that families and friends of patients continue to face problems obtaining information to assist in patients’ care. BCBSA reported that some plans are confused about how to implement the Privacy Rule’s provisions for releasing information to families, friends, and others. Where the rule permits discretion, some covered entities have taken a strict approach to patient authorization requirements, requiring any adult calling on behalf of another adult to obtain an authorization form signed by the patient. For example, this approach resulted in one health plan requiring 10,000 patient authorizations during the first year. Similarly, AHCA found that some long-term care facilities have taken a strict approach to disclosing information and do not provide information to nursing home residents’ family members without patient authorization. AHCA also reported that the Privacy Rule does not address a potential conflict with the Omnibus Budget Reconciliation Act of 1987 that requires nursing homes to notify families of incidents or significant changes in health status unless the resident exercises the right to privacy. Under the Privacy Rule, a provider may, in certain situations, determine whether or not to share information with family based on professional judgment. Numerous organizations reported that patients are not aware of their rights under the Privacy Rule, either because they do not understand the notice of privacy practices, or because they have not focused their attention on privacy issues when the notices are presented to them. In the first year after entities were required to be compliant with the Privacy Rule, OCR received over 5,600 privacy complaints and closed about half of the complaint cases filed. Nearly two-thirds of the closed cases were resolved on the basis that they were outside the scope of the Privacy Rule, suggesting that patients may misunderstand their rights. 
Consumer groups—including AARP, the Bazelon Center for Mental Health Law, the Health Privacy Project, the Health Assistance Partnership, and the National Health Law Program—reported that many patients are not aware of their privacy rights. They attribute this, in part, to the use of customized privacy notices. For example, consumer groups reported that typical privacy notices, as drafted by providers and health plans, are often difficult to read and understand. The Health Privacy Project maintained that the privacy notices are written primarily to protect providers and health plans from enforcement actions, rather than as a vehicle to inform the patient. It noted that even basic information about disclosures and the right to access records is often buried in the document. Representatives of providers and health plans also stated that patients are largely unaware of their rights. According to AHIMA, patients are unaware of their privacy rights because the privacy notice is treated as one more piece of paper that they have to sign when they seek care. MGMA noted that some physicians have placed boxes in their offices specifically for the purpose of recycling the notices after patients discard them. Representatives from both provider and consumer groups noted that the public should receive more education about how their rights have changed. MGMA told us that OCR has placed the burden of patient education on private organizations—such as professional associations, providers, and health plans—and that some of these organizations interpret the rule incorrectly. Moreover, provider and consumer groups stated that further OCR attention is needed to address the issue of privacy notices that are difficult for patients to read and understand. Some groups told us that the notice of privacy practices could be made easier to comprehend by highlighting some key patient rights under the Privacy Rule. 
In the first year that entities were required to be compliant with the Privacy Rule, consumers and others filed 5,648 privacy-related complaints with OCR. The number of complaints received increased steadily from quarter to quarter, with the four quarters' intakes totaling 1,068, 1,392, 1,521, and 1,667, respectively. Overall, roughly half of the complaints filed in the rule's first year were closed as of early May 2004. The database that OCR maintains on these complaints classifies the privacy issues raised in each complaint into one or more of several broad categories. Data on the open and closed cases showed that the most commonly cited category (56 percent of complaints) was "impermissible uses and disclosures." According to an OCR official, this could include allegations regarding patient billing information sent to the wrong address or fax number, patient information seen or overheard in a doctor's office or hospital, or provider employees accessing patient information for their own personal or business benefit. Approximately a third of the complaints cited inadequate safeguards for patient information, and 17 percent reported problems with patients gaining access to their own health information. Patients have filed privacy complaints against many different types of health care entities. The two most commonly cited were private practices—comprising physicians, dentists, chiropractors, and similar licensed health professionals—and hospitals—including general, psychiatric, and specialty hospitals. Together, private practices and hospitals accounted for 41 percent of privacy complaints for which entity type was recorded. For closed cases, the OCR database provides additional information, primarily related to the final disposition of the complaint. The majority of these closed complaints—79.1 percent—were not germane to the Privacy Rule, lacked sufficient information to process them, or fell into diverse miscellaneous categories. 
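As a quick arithmetic check on the intake figures above, the four quarterly totals do sum to the 5,648 first-year complaints, and each quarter's intake exceeds the previous quarter's. A minimal sketch (the figures are those reported above; the variable names are our own):

```python
# Quarterly intake of privacy complaints filed with OCR in the
# Privacy Rule's first year, as cited above.
quarterly_intake = [1068, 1392, 1521, 1667]

# The quarterly figures sum to the 5,648 first-year total.
print(sum(quarterly_intake))  # 5648

# Intake rose from each quarter to the next.
print(all(a < b for a, b in zip(quarterly_intake, quarterly_intake[1:])))  # True
```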
That left 20.9 percent of the closed privacy complaints that OCR concluded fell within the scope of the Privacy Rule (see table 1). About half of the germane complaints (representing 9.4 percent of total closed cases) involved a violation of the Privacy Rule substantiated by OCR’s investigation where the provider or plan agreed to correct its policies or procedures. For the rest of these germane complaints (11.5 percent of total closed cases), OCR determined that no violation had occurred. By May 2004, OCR had not recommended sanctions against any provider or health plan for privacy violations, but this remained a potential outcome for the first-year complaints that were still open at that point. Nearly two-thirds of the privacy complaints closed during the rule’s first year of operation fell outside the scope or time frame of the rule. This included the 35.4 percent of closed privacy complaints that involved alleged actions by providers, health plans, or other entities that OCR determined would not constitute violations of the regulation even if true. In other words, they concerned actions to which the patient might object, but that were not prohibited by the Privacy Rule. An additional 17.7 percent of closed complaints involved entities that were not “covered entities” as defined by the Privacy Rule, and 9.6 percent cited actions that occurred before covered entities were required to be compliant. However, OCR officials stated that the proportion of complaints closed because they were not germane to the Privacy Rule may have been higher in the first year of the rule’s implementation than it will be in later years because OCR can generally complete its processing of such complaints more quickly than complaints that require full-scale investigations. Just over half of the complaints received in the first year remained open in early May 2004. 
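The closed-case percentages described above can be tallied as a consistency check. A minimal sketch (the category labels are shorthand for the dispositions described in the text, not OCR's official database codes):

```python
# Shares of closed first-year complaints, as percentages of all closed
# cases; labels are shorthand for the categories described above.
germane = {
    "violation found, entity corrected policies": 9.4,
    "no violation found": 11.5,
}
outside_scope = {
    "action not prohibited by the rule": 35.4,
    "entity not a covered entity": 17.7,
    "action predated the compliance date": 9.6,
}

print(round(sum(germane.values()), 1))        # 20.9, the germane share
print(round(sum(outside_scope.values()), 1))  # 62.7

# The germane share and the 79.1 percent not-germane share account for
# all closed cases; the balance of the 79.1 percent beyond the three
# outside-scope categories covers insufficient-information and
# miscellaneous closures.
print(round(sum(germane.values()) + 79.1, 1))  # 100.0
```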
Finally, about 15 percent of closed complaints fell into one of a number of miscellaneous categories or, more commonly, could not be pursued because OCR did not receive, and could not obtain, critical information. For example, some complaints lacked addresses or telephone numbers by which the persons filing the complaints could be contacted for more information. Closed complaints involving three major categories of providers—private practices, hospitals, and pharmacies—were more likely to be judged germane under the Privacy Rule by OCR than were complaints about other organizations. Nevertheless, for each of these major provider types, as well as for all other entities cited in privacy complaints, OCR found that a clear majority of the complaints it closed were not germane to the regulation because they either involved accusations of actions that were not prohibited by the regulation, involved entities that were not "covered entities" as defined by the Privacy Rule, or involved actions that occurred before covered entities were required to be compliant (see fig. 1). The similarity of this pattern across different types of entities suggests that patients may misunderstand the scope of the protections provided to them under the Privacy Rule. The pattern is also consistent with consumer advocates' opinions concerning the limitations of privacy notices in informing patients about their rights under the Privacy Rule. Overall, in its first year, HIPAA's Privacy Rule has resulted in both positive and negative experiences among covered entities and other users of health information. Health care staff have been sensitized to privacy issues and the procedures required of their organizations to protect patient health information. Providers and health plans have taken steps to develop working environments that are sensitive to patient privacy and to enhance staff understanding of how to handle the complexities of complying with the Privacy Rule. 
However, some operational issues and misconceptions about the rule continue to raise concerns. A prime example is the requirement to account for disclosures for public health purposes that are mandated by law. This requirement is seen by many to have created a costly and unnecessary demand on providers and health plans and a drag on the flow of information for purposes considered to be in the public interest. Providers and health plans that are uncertain or misinformed about their privacy responsibilities have often responded with an overly guarded approach to disclosing information, resulting in procedures that may be more protective of the organizations than necessary to ensure compliance with the Privacy Rule. At the same time, the job of educating the public about the content and intent of the Privacy Rule has been relegated to providers and health plans, and their privacy notices have not consistently provided a clear message to patients. We recommend that to reduce unnecessary burden on covered entities and to improve the effectiveness of the Privacy Rule, the Secretary of HHS take the following two actions: Modify the Privacy Rule to (1) require that patients be informed in the notice of privacy practices that their information will be disclosed to public health authorities when required by law and (2) exempt such public health disclosures from the accounting-for-disclosures provision. Conduct a public information campaign to improve awareness of patients' rights under the Privacy Rule. In written comments on a draft of this report, HHS agreed with our finding that implementation went more smoothly than expected during the first year, confusion has diminished, and new privacy procedures have become routine practice for staff. HHS stated that the experience of providers and health plans in implementing the Privacy Rule, as we reported, was generally consistent with what it has heard from many covered entities and others. (See app. II.) 
Regarding our recommendation that mandatory reporting of health information to public health authorities be exempted from the accounting-for-disclosures requirement, HHS noted that it has considered such a change in the past and continues to monitor the need to modify the Privacy Rule. In August 2002, HHS considered exempting public health disclosures from the accounting provisions whether required by law or not, but decided against such a modification pending further experience with the rule. HHS acknowledged that covered entities continue to report difficulties tracking such disclosures and stated that its guidance documents emphasize flexibility in how covered entities structure their record keeping. Given HHS's goal of ensuring effective patient privacy protections without imposing unnecessary costs or barriers to quality health care or interfering with other important public benefits, we remain concerned that the accounting-for-disclosures provision as applied to mandatory public health reporting may not support this goal. Effective privacy notices could be used to inform patients of public health disclosures required by law and, in turn, reduce the need to track these numerous disclosures. Furthermore, public health officials noted that the burden imposed by accounting for legally required disclosures may generate the unintended consequence of reducing the amount of information voluntarily reported to public health authorities. To the extent that covered entities are discouraged in this way, the public interest may be negatively affected. In commenting on our second recommendation, to conduct a public information campaign to improve awareness of patients' rights under the Privacy Rule, HHS agreed that notices of privacy practices may appear too long and complicated and that consumers may not be closely reading their notices. 
HHS stated that the complaint data received by OCR may not indicate that consumers are unaware of their rights under the rule, but rather that they may not properly understand them. Regarding its consumer outreach, HHS pointed to two new consumer fact sheets posted to its Web site on August 17, 2004, a toll-free call-in line to respond to questions about the rule, and efforts to encourage covered entities to develop consumer-friendly notices that highlight key information. Evidence from numerous organizations indicated that consumers are largely unaware of their rights under the Privacy Rule, and our analysis of OCR complaint data suggested that consumers may misunderstand the scope of the protections provided. A more diverse approach to consumer outreach may be necessary to effectively communicate the new privacy rights. The information available on the HHS Web site and from the call-in line provides access to a portion of the general public but may not reach the many consumers who do not know of these sources. We believe it is important that, in current and future efforts to educate the public, HHS more effectively disseminate information about protections provided under the Privacy Rule. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to the Secretary of HHS and to other interested parties. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staff have any questions about this report, please call me at (312) 220-7600. Another contact and key contributors are listed in appendix II. We included the following national organizations and federal agencies in our review. 
America's Health Insurance Plans
Blue Cross Blue Shield Association
Medicare (HHS's Centers for Medicare & Medicaid Services)

In addition to the contact named above, Kelly L. DeMots, Mary F. Giffin, Eric A. Peterson, and Lisa M. Vasquez made key contributions to this report.
Issued under the Health Insurance Portability and Accountability Act of 1996, the Privacy Rule provided new protections regarding the confidentiality of health information and established new responsibilities for providers, health plans, and other entities to protect such information. GAO reviewed (1) the experience of providers and health plans in implementation; (2) the experience of public health entities, researchers, and representatives of patients in obtaining access to health information; and (3) the extent to which patients appear to be aware of their rights. Organizations representing providers and health plans told us that implementation of the Privacy Rule went more smoothly than expected during the first year after most entities were required to be compliant. In addition, they reported that new privacy procedures have become routine practice for their members' staff. However, provider and health plan representatives also raised a variety of issues about provisions that continue to be problematic. In particular, many organizations emphasized that two provisions--the requirement to account for certain information disclosures and the requirement to develop agreements with business associates that extend privacy protections "downstream"--are unnecessarily burdensome. Some organizations suggested that difficulties with these provisions could be ameliorated with modification of certain provisions and further guidance from the Department of Health and Human Services' Office for Civil Rights (OCR). Organizations reported a number of challenges faced by entities that rely on access to health information for public health monitoring, research, and patient advocacy. Public health entities noted that some states have had to take concerted action to ensure that providers' concerns about complying with the Privacy Rule do not impede the flow of important information to state health departments and disease registries. 
Some research groups asserted that the rule has delayed clinical and health services research by reducing access to data. Some consumer advocacy groups told us that patients' families, friends, and other representatives have experienced unnecessary difficulty in assisting patients. These groups perceived that while providers and plans are allowed, in certain cases, to disclose health information without written patient authorization, they are reluctant to do so. Consumer and provider representatives contend that the general public is not well informed about their rights under the Privacy Rule. According to these organizations, patients may not understand the privacy notices they receive, or do not focus their attention on privacy issues when the notices are presented to them. Some evidence of patients' lack of understanding is reflected in the 5,648 complaints filed with OCR in the first year after the Privacy Rule took effect. Of the roughly 2,700 complaint cases OCR closed as of April 13, 2004, nearly two-thirds were found to fall outside the scope of the Privacy Rule because they either involved accusations of actions that were not prohibited by the regulation, involved entities that were not "covered entities" as defined by the Privacy Rule, or involved actions that occurred before covered entities were required to be compliant. Of those cases that were germane to the rule, OCR determined that about half represented cases in which no violation had occurred.
OPM and the Equal Employment Opportunity Commission (EEOC) each play important roles in ensuring equal employment opportunity (EEO) in the federal workplace through their leadership and oversight of federal agencies. In their oversight roles, OPM and EEOC require federal agencies to analyze their workforces, and both agencies also report on governmentwide representation levels. Under OPM's regulations implementing the Federal Equal Opportunity Recruitment Program (FEORP), agencies are required to determine where representation levels for covered groups are lower than in the civilian labor force and take steps to address those differences. Agencies are also required to submit annual FEORP reports to OPM in the form prescribed by OPM. EEOC's Management Directive 715 (MD-715) provides guidance and standards to federal agencies for establishing and maintaining effective equal employment opportunity programs, including a framework for executive branch agencies to help ensure effective management, accountability, and self-analysis to determine whether barriers to equal employment opportunity exist and to identify and develop strategies to mitigate or eliminate the barriers to participation. Specifically, EEOC's MD-715 states that agency personnel programs and policies should be evaluated regularly to ascertain whether such programs have any barriers that tend to limit or restrict equitable opportunities for open competition in the workplace. The initial step is for agencies to analyze their workforce data against designated benchmarks, including the civilian labor force. If analyses of their workforce profiles identify potential barriers, agencies are to examine all related policies, procedures, and practices to determine whether an actual barrier exists. EEOC requires agencies to report the results of their analyses annually. In addition, EEOC recently issued a report on the participation of individuals who reported targeted disabilities in the federal workforce. 
Targeted disabilities are those disabilities that the federal government, as a matter of policy, has identified for special emphasis. The targeted disabilities are deafness, blindness, missing extremities, partial paralysis, complete paralysis, convulsive disorders, mental retardation, mental illness, and distortion of limb and/or spine. The data that we are reporting provide a demographic snapshot of the career SES as well as the levels that serve as the SES developmental pool for October 2000 and September 2007. Table 1 shows that governmentwide, the number and percentage of women and minorities in the career SES and SES developmental pool increased between October 2000 and September 2007. As shown in table 2, the percentage of both women and minorities in the SES increased in 15 of the 24 CFO Act agencies by 2007. Most of the remaining CFO Act agencies experienced an increase in the percentage of either women or minorities between October 2000 and September 2007. As we reported in 2003, the gender, racial, and ethnic profiles of the career SES at the 24 CFO Act agencies varied significantly in October 2000. The representation of women ranged from 13.7 percent to 41.7 percent, with half of the agencies having 27 percent or fewer women in the career SES. For minority representation, rates varied even more and ranged from 3.1 percent to 35.6 percent, with half of the agencies having less than 15 percent minorities in the career SES. In 2007, the representation of women and minorities, both overall and in more than half of the individual agencies, was higher than it was in October 2000. The representation of women ranged from 19.9 percent to 45.5 percent, with more than half of the agencies having 30 percent or more women. 
For minority representation, rates ranged from 6.1 percent to 43.8 percent, with more than half of the agencies having over 16 percent minority representation, and more than 90 percent of the agencies having more than 13 percent minority representation in the career SES. For this report, we did not analyze the factors that contributed to the changes in representation from October 2000 through September 2007. As we said previously, OPM and EEOC, in their oversight roles, require federal agencies to analyze their workforces, and both agencies also report on governmentwide representation levels. In our 2003 report, we (1) reviewed actual appointment trends from fiscal years 1995 to 2000 and actual separation experience from fiscal years 1996 to 2000; (2) estimated by race, ethnicity, and gender the number of career SES who would leave government service from October 1, 2000, through October 1, 2007; and (3) projected what the profile of the SES would be if appointment and separation trends did not change. We estimated that more than half of the career SES members employed on October 1, 2000, would have left service by October 1, 2007. Assuming then-current career SES appointment trends, we projected that (1) the only significant changes in diversity would be an increase in the number of white women with an essentially equal decrease in white men and (2) the proportions of minority women and men would remain virtually unchanged in the SES corps, although we projected slight increases among most racial and ethnic minorities. Table 3 shows career SES representation as of October 1, 2000, our 2003 projections of what representation would be at the end of fiscal year 2007, and actual fiscal year 2007 data. We projected increases in representation among both minorities and women. Fiscal year 2007 data show that increases did take place among those groups and that those increases generally exceeded the increases we projected. 
The only decrease among minorities occurred in African American men, whose representation declined from 5.5 percent in 2000 to 5.0 percent at the end of fiscal year 2007. Table 4 shows SES developmental pool representation as of October 1, 2000, our 2003 projections of what representation would be at the end of fiscal year 2007, and actual fiscal year 2007 data. We projected increases in representation among both minorities and women. Fiscal year 2007 data show that increases did generally take place among those groups. The representation of American Indian/Alaska Native men remained unchanged from the October 2000 baseline. As stated previously, we have not analyzed the factors contributing to changes in representation; therefore, care must be taken when comparing changes in demographic data since fiscal year 2000 to the projections we made in 2003 and to the 2007 actual data we present in both tables 3 and 4. For example, we have not determined whether estimated retirement trends materialized, whether the appointment and separation trends used in our projections continued, or what impact these factors may have had on the diversity of the SES and its developmental pool. Considering retirement eligibility and actual retirement rates of the SES is important because individuals normally do not enter the SES until well into their careers; thus, SES retirement eligibility is much higher than that of the workforce in general. As we have said in previous reports, as part of a strategic human capital planning approach, agencies need to develop long-term strategies for acquiring, developing, motivating, and retaining staff. An agency’s human capital plan should address the demographic trends that the agency faces with its workforce, especially retirements. In 2006, OPM reported that approximately 60 percent of the executive branch’s 1.6 million white-collar employees and 90 percent of about 6,000 federal executives will be eligible for retirement over the next 10 years.
If a significant number of SES members were to retire, it could result in a loss of leadership continuity, institutional knowledge, and expertise among the SES corps, with the degree of loss varying among agencies and occupations. This has important implications for government management and emphasizes the need for good succession planning for this leadership group. Rather than simply recreating the existing organization, effective succession planning and management, linked to the strategic human capital plan, can help an organization become what it needs to be. Leading organizations go beyond a “replacement” approach that focuses on identifying particular individuals as possible successors for specific top-ranking positions. Rather, they typically engage in broad, integrated succession planning and management efforts that focus on strengthening both current and future capacity, anticipating the need for leaders and other key employees with the necessary competencies to successfully meet the complex challenges of the 21st century. Succession planning also is tied to the federal government’s opportunity to affect the diversity of the executive corps through new appointments. In September 2003, we reported that agencies in other countries use succession planning and management to achieve a more diverse workforce, maintain their leadership capacity, and increase the retention of high-potential staff. Racial, ethnic, and gender diversity in the SES is an important component for the effective operation of the government. Individuals do not typically enter the career SES until well into their careers. As of the end of fiscal years 2000 and 2007, the average age of women and minorities at the time of their appointment to the SES was about age 50 and did not change dramatically over this 7-year period except for certain groups, as shown in table 5.
The average age at appointment for American Indian/Alaska Native women declined from age 48 in 2000 to age 42 in 2007 and increased during this time for both American Indian/Alaska Native men (from age 50 in 2000 to 53 in 2007) and white women (from age 47 in 2000 to 49 in 2007). Similarly, the average age of women and minorities at the time of retirement from the career SES did not change much between 2000 and 2007. As shown in table 6, all of those who retired did so, on average, at around age 60, with the exception of Asian/Pacific Islander men, whose average retirement age in 2007 was 64; Hispanic men, whose average retirement age in 2000 was 57 and in 2007 was 58; and African American men, whose average retirement age in 2000 was 62 and 59 in 2007. In addition to examining the average age of individuals at the time of their appointment to and retirement from the career SES, we analyzed the length of time that a cohort of individuals served in the SES and differences in length of service. We reviewed data on the 625 individuals appointed to the career SES in fiscal year 1990. Because of questions with the records of 11 individuals, we excluded them from our analysis and analyzed the records of the remaining 614 individuals appointed to the SES in fiscal year 1990 and followed them through September 2007. We found that 432 of the 614 had left the SES by that date—338 had retired voluntarily, 66 had resigned, and 28 had left for other reasons, such as disability or mandatory retirement. Those individuals who had voluntarily retired served in the SES an average of 9.2 years, as shown in table 7. Table 7 also shows that women stayed in the SES longer than men; women who voluntarily retired stayed, on average, for 11.4 years, and men who voluntarily retired stayed, on average, for 8.8 years. The average length of service among minorities ranged from 4.1 years for Asian/Pacific Islander women to 12 years for American Indian/Alaska Native men. 
The average number of years in the SES does not include those appointed to the SES in 1990 who, as of September 30, 2007, died (10); took other types of retirement, such as disability or mandatory retirement (17); or were terminated (1). As shown in table 8, as of September 2007, about one-third of the 614 individuals we identified who were appointed to the career SES in 1990 remained in the SES. More women from the original cohort remained than men. We also reviewed the representation of career SES members who reported having targeted disabilities. EEOC reported that it first officially recognized the term targeted disabilities in its Management Directive 703, which was approved on December 6, 1979. In its report, EEOC stated that some individuals with disabilities are reluctant to self-identify their disability status because they are concerned that (1) such disclosure will preclude them from employment or advancement or subject them to discrimination and (2) their disability status will not remain confidential. The extent to which individuals with disabilities do not identify or report them is unclear. Governmentwide, the representation of career SES members reporting targeted disabilities declined from 0.52 percent in fiscal year 2000 to 0.44 percent in fiscal year 2007. Table 9 shows the representation of SES members with targeted disabilities governmentwide and within the CFO Act agencies. In both 2000 and 2007, half of the CFO Act agencies (12) did not employ any SES members with targeted disabilities. Executive branch agencies have processes for selecting members into the career SES and developmental programs that are designed to create pools of candidates for senior positions. Federal executive agencies are to follow competitive merit staffing requirements for initial career appointments to the SES or for appointment to formal SES candidate development programs, which are competitive programs designed to create pools of candidates for SES positions.
Each agency head is to appoint one or more Executive Resources Boards (ERB) to conduct the merit staffing process for initial SES career appointments. ERBs review the executive and technical qualifications of each eligible candidate and make written recommendations to the appointing official concerning the candidates. The appointing official selects from among those candidates identified by the ERB as best qualified and certifies the executive and technical qualifications of those candidates selected. Candidates who are selected must have their executive qualifications certified by an OPM-administered Qualifications Review Board (QRB) before being appointed to the SES. According to OPM, it convenes weekly QRBs to review the applications of candidates for initial career appointment to the SES. QRBs are independent boards of three senior executives that assess the executive qualifications of all new SES candidates. At least two of the three QRB members must be career appointees. In addition, OPM guidance states that QRB members cannot review candidates from their own agencies. According to an OPM official, an OPM administrator attends each QRB to answer questions, moderate, and offer technical guidance but does not vote or influence voting. OPM guidance states that the QRB does not rate, rank, or compare a candidate’s qualifications against those of other candidates. Instead, QRB members judge the overall scope, quality, and depth of a candidate’s executive qualifications within the context of five executive core qualifications—leading change, leading people, results driven, business acumen, and building coalitions—to certify that the candidate’s demonstrated experience meets the executive core qualifications. To staff QRBs, an OPM official said that OPM sends a quarterly letter to the heads of agencies’ human capital offices seeking volunteers for specific QRBs and encourages agencies to identify women and minority participants.
Agencies then inform OPM of scheduled QRB participants, without a stipulation as to the profession of the participants. OPM solicits agencies once a year for an assigned quarter and requests QRB members on a proportional basis. The OPM official said that OPM uses a rotating schedule, so that the same agencies are not contacted each quarter. Although QRBs generally meet weekly, an OPM official said that QRBs can meet more than once a week, depending on case loads. The official said that because of the case load of recruitment for SES positions recently, OPM had been convening a second “ad hoc” QRB. According to another OPM official, after QRB certification, candidates are officially approved and can be placed. In addition to certification based on demonstrated executive experience and another form of certification based on special or unique qualities, OPM regulations permit the certification of the executive qualifications of graduates of candidate development programs by a QRB and selection for the SES without further competition. OPM regulations state that for agency candidate development programs, agencies must have a written policy describing how their programs will operate and must have OPM approval before conducting them. According to OPM, candidate development programs typically run from 18 to 24 months and are open to GS-15s and GS-14s or employees at equivalent levels from within or outside the federal government. Agencies are to use merit staffing procedures to select participants for their programs, and most program vacancies are announced governmentwide or to all sources. OPM regulations provide that candidates who compete governmentwide for participation in a candidate development program, successfully complete the program, and obtain QRB certification are eligible for noncompetitive appointment to the SES. OPM guidance states that candidate development program graduates are not guaranteed placement in the SES. 
Agencies’ ERB chairs must certify that candidates have successfully completed all program activities, and OPM staff review candidate packages to verify that regulatory requirements have been met. An “ad hoc” QRB then reviews each candidate’s training, development, and work experiences to ensure that he or she possesses the required executive qualifications. OPM also periodically sponsors a centrally administered federal candidate development program. According to an OPM official, the OPM-sponsored federal candidate development program can be attractive to smaller agencies that may not have their own candidate development program, and OPM administers the federal program for them. According to OPM officials, 12 candidates graduated from the first OPM-sponsored federal candidate development program in September 2006. Of those, 9 were placed in SES positions within 1 year of graduating from the program. In January 2008, OPM advertised the second OPM-sponsored federal candidate development program but subsequently suspended the program. In June 2008, OPM re-advertised the second OPM-sponsored federal candidate development program, and 18 candidates were selected and have started their 12-month training and development program. We provided the Acting Director of OPM and the Chair of EEOC with a draft of this report for their review and comment. OPM provided technical comments via e-mail, which we incorporated as appropriate, but did not otherwise comment on the report. In an e-mail, EEOC said it had no comments. We are sending copies of this report to the Acting Director of OPM, the Chair of EEOC, and other interested congressional parties. We also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-9490 or stalcupg@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.

Appendix I: Demographic Profiles of Career SES, GS-15, and GS-14 Employees Governmentwide and at the 24 Chief Financial Officers Act Agencies

[Tables of percentages by equal employment opportunity (EEO) group are not reproduced here; the table notes follow.]

We included GS-15, GS-14, and equivalent employees. GS-equivalent employees are those in equivalent grades under other pay plans that follow the GS grade structure and job evaluation methodology or are equivalent by statute.

Data on the SES and the SES developmental pool for 2000 in this report differ from prior GAO products. We first identified SES and SES developmental pool data for 2000 in our 2003 report (GAO-03-34), in which we excluded the FBI from the SES and the SES developmental pool because that report contained projected SES and SES developmental pool levels for the end of fiscal year 2007 based on separation and appointment data, and the FBI did not submit separation and appointment data to the CPDF for 2000. We subsequently cited data on the SES and SES developmental pool for 2000 from that report in four additional products (GAO-04-123T, GAO-07-838T, GAO-08-609T, and GAO-08-725T). Data on the SES and the SES developmental pool for 2007 include the FBI.

The Department of Homeland Security did not exist before March 2003. Its creation united 22 agencies or parts of agencies, including the U.S. Customs Service, which was formerly located in the Department of the Treasury; the Federal Emergency Management Agency; and the Coast Guard.

The data on Justice for 2000 in this report differ from such data in prior GAO products. We first identified Justice SES and GS-15 and GS-14 data for 2000 in our 2003 report (GAO-03-34), in which we excluded the FBI from Justice data because that report contained projected SES and SES developmental pool levels for the end of fiscal year 2007 based on separation and appointment data, and the FBI did not submit separation and appointment data to the CPDF for 2000. We subsequently cited 2000 data from that report in four additional products (GAO-04-123T, GAO-07-838T, GAO-08-609T, GAO-08-725T). The data on Justice for 2007 include the FBI.

The number of GS-15s, GS-14s, and equivalents decreased because the Department of State stopped reporting data on Foreign Service employees to the Office of Personnel Management’s Central Personnel Data File in fiscal year 2006.

In addition to the individual named above, Kiki Theodoropoulos, Assistant Director; Clifton Douglas, Jr.; Jessica Drucker; Karin Fangman; Kirsten B. Lauber; Mary Martin; Michael R. Volpe; and Gregory H. Wilmoth made key contributions to this report.
A diverse Senior Executive Service (SES), which generally represents the most experienced segment of the federal workforce, can be an organizational strength by bringing a wider variety of perspectives and approaches to policy development and implementation, strategic planning, problem solving, and decision making. In a January 2003 report (GAO-03-34), GAO provided data on career SES members by race, ethnicity, and gender as of October 2000 and a statistically estimated projection of what the profile of the SES would be in October 2007 if appointment and separation trends did not change. In response to a request for updated information on diversity in the SES, GAO is providing information from the Office of Personnel Management's (OPM) Central Personnel Data File (1) on the representation of women and minorities in the SES and the SES developmental pool (i.e., GS-15 and GS-14 positions) for the executive branch as of fiscal year 2007 and comparing this representation to fiscal year 2000 levels and to levels GAO projected for October 2007 in its 2003 report; (2) for fiscal years 2000 and 2007, the average age at which women and minorities were appointed to and retired from the SES as well as information on those in the SES reporting targeted disabilities; and (3) on the overall processes used in executive branch agencies for selecting and certifying members into the SES. The representation of women and minorities in the SES and the SES developmental pool increased governmentwide from October 2000 through September 2007, but increases did not occur in all agencies. Over these 7 years, increases occurred in more than half of the 24 major executive branch agencies, but in both 2000 and 2007 the representation of women and minorities continued to vary significantly at those agencies. In 2003, GAO projected that increases would occur in the representation of women and minorities in the SES and SES developmental pool by 2007. These increases generally did occur. 
Looking beyond racial, ethnic, and gender profiles, GAO also reviewed the average age at appointment to and retirement from the career SES as well as the disability status reported by career SES employees for fiscal years 2000 and 2007. For the most part, career SES members were, on average, about age 50 at the time of their appointment to the SES and about age 60 at the time of their retirement. The average age at appointment to and retirement from the career SES generally did not vary much by race, ethnicity, or gender. GAO also calculated how long, on average, individuals served in the SES, and found that the length of their stay in the SES did vary. For example, women stayed in the SES longer than men; women who voluntarily retired stayed, on average, for 11.4 years, and men who voluntarily retired stayed, on average, for 8.8 years. The average length of service among minorities ranged from 4.1 years for Asian/Pacific Islander women to 12 years for American Indian/Alaska Native men. Governmentwide less than 1 percent of the career SES in 2000 and 2007 had self-reported targeted disabilities, and their representation declined slightly over this time. Executive branch agencies have established processes for selecting members into the SES and have developmental programs that are designed to create pools of candidates from which new members can be selected. These agencies use Executive Resources Boards to review the executive and technical qualifications of eligible candidates for initial SES career appointments and make recommendations based on the best qualified. An OPM-administered board reviews candidates' qualifications before appointment to the SES.
This section provides information on (1) the known health effects of lead in drinking water; (2) how water systems deliver drinking water to the public and where lead may be present; (3) the requirements of the LCR; (4) LCR data that states report to EPA; and (5) the roles of federal, state, and local entities in implementing the LCR. EPA, the Centers for Disease Control and Prevention (CDC), and others have indicated that the rates of lead contamination in the U.S. population have decreased over the years. However, lead remains a significant concern to public health because lead is persistent and can accumulate in the body over time with long-lasting effects, particularly for children and pregnant women. According to EPA documents, low levels of lead exposure in children are linked to hyperactivity, anemia, lower intelligence quotient (IQ), physical and learning disabilities, and slowed growth. In pregnant women, lead stored in bones can be released along with the maternal calcium used to form the bones of the fetus, reducing fetal growth and increasing the risk of miscarriage and stillbirth. For adults, lead can have detrimental effects on the cardiovascular, renal, and reproductive systems, and it can cause memory loss. The presence of lead in the bloodstream can disappear relatively quickly, but bones can retain the toxin for decades. According to the National Institutes of Health and CDC documents, medications can remove some lead from the body but cannot undo the damage lead causes, although additional services may mitigate some of the damage. Recognizing that vigilance and collaboration are necessary to ensure that children negatively affected by lead exposure receive services designed to compensate for lead’s effects on the brain and behavior of children, some medical experts promote early-childhood intervention, education, and other programs. 
According to CDC documents, early intervention for children can help improve IQ scores, academic readiness, and language development as well as decrease placement in special education classes. For these reasons, EPA and others recommend the prevention of lead exposure before it occurs. Water systems depend on distribution systems, both simple and complex, composed of interconnected components to deliver drinking water from a source to their customers. Source water can be either surface (streams, rivers, and lakes) or ground (aquifers). As figure 1 illustrates, the distribution system used to deliver water from the source can include a network of pipes and other components. A distribution system comprises water towers, pipes, pumps, and other components to deliver treated water from treatment systems to consumers. Particularly among larger water systems, distribution systems may contain thousands of miles of pipes, including water mains. There are 1 million miles of drinking water mains in the country, according to a 2017 American Society of Civil Engineers study. Service lines are the smaller pipes that connect the water mains to homes and buildings and can also include smaller pipes used for connecting a service line to the water mains (e.g., called pigtail and gooseneck pipes). In contrast to most other drinking water contaminants, lead is rarely found in the source water. More commonly, lead enters drinking water after the water comes into contact with water mains; service lines; smaller pipes that connect the two; and other plumbing materials that contain lead, such as faucets and water coolers. Schools and day care centers with their own water supplies generally rely on well-water systems using groundwater to deliver drinking water. 
According to the 2017 American Society of Civil Engineers study and EPA documents, communities, both urban and rural, have aging and deteriorating drinking water infrastructure, which, according to EPA documents, can contribute to lead hazards in drinking water. Since the early 1970s, when several medical studies confirmed that lead exposure negatively impacts health, measures have been taken to reduce the public’s exposure to lead in drinking water, including the enactment of amendments to the SDWA in 1986 and 1996, the enactment of the Lead Contamination Control Act in 1988, the issuance of the LCR in 1991, and amendments to state building codes prohibiting the use of lead pipes. The LCR generally requires water systems to minimize lead in drinking water by controlling the corrosion of metals in the infrastructure they use to deliver water and in household plumbing. EPA has stated that the LCR is one of the most complicated drinking water regulations for states to implement because of the need to control the corrosion of pipes and plumbing fixtures as water is delivered to consumers. The corrosion of pipes results from a chemical interaction between water and pipes that wears the metal away and allows particles of metal to flake away over time. All large water systems (serving populations larger than 50,000) are generally required to install corrosion control treatment. While the majority of the U.S. population receives its drinking water from medium and large water systems, most water systems are small. Characteristics of water can affect the occurrence and rate of corrosion. For example, corrosion occurs more frequently in soft water—water with low concentrations of calcium and magnesium—and also in acidic water, or water with low pH. Water systems control corrosion by adjusting the pH and alkalinity of water or by adding corrosion inhibitors. 
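The corrosivity indicators named above, soft water (low concentrations of calcium and magnesium) and acidic water (low pH), can be expressed as a simple screening check. The sketch below is illustrative only; the threshold values are assumptions chosen for demonstration, not regulatory or scientific standards.

```python
# Illustrative screening check for the corrosion risk factors described in
# the text: soft water (low calcium and magnesium) and acidic water (low pH).
# The numeric thresholds below are assumed for demonstration purposes only.

def is_corrosion_prone(ph, calcium_mg_l, magnesium_mg_l,
                       hardness_threshold_mg_l=60.0, neutral_ph=7.0):
    """Flag water that is soft, acidic, or both, as more corrosion-prone."""
    soft = (calcium_mg_l + magnesium_mg_l) < hardness_threshold_mg_l
    acidic = ph < neutral_ph
    return soft or acidic

print(is_corrosion_prone(6.5, 80.0, 20.0))  # True: acidic (pH below 7)
print(is_corrosion_prone(7.4, 15.0, 10.0))  # True: soft water
print(is_corrosion_prone(7.4, 80.0, 20.0))  # False: hard, non-acidic water
```

In practice, water systems address these conditions as described above, by adjusting pH and alkalinity or adding corrosion inhibitors rather than simply flagging the water.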
The LCR establishes corrosion control as the required treatment technique for large water systems and, for medium and small systems, the required treatment technique when the federal lead action level is exceeded (also known as an action level exceedance). Lead concentrations exceeding an action level of 15 parts per billion, or 0.015 milligrams per liter (mg/L), in over 10 percent of tap water samples (i.e., the 90th percentile level) are an indicator that corrosion control is needed or is not working correctly. A water system’s 90th percentile sample result does not exceed the lead action level if it is equal to or less than 15 parts per billion. As figure 2 illustrates, the LCR also requires water systems to identify locations where lead may be present and periodically obtain tap water samples from those locations (of which single-family homes are the highest priority). Under the LCR, an action level exceedance requires the water system and state to take a number of additional steps. Those additional steps require that small and medium water systems install or modify corrosion control treatment, and water systems of all sizes provide information (known as public education) about the harmful effects of lead to consumers and vulnerable populations (e.g., schools, if the water system serves a school, and public health departments). Water systems are also required to test and, if necessary, treat the source water. If, after installing corrosion control and treating source water, a system continues to have 90th percentile sample results that exceed the lead action level, the LCR requires the water system to begin replacing lead service lines, if they exist. In most communities, lead service lines are partially owned by the water system and partially owned by the homeowner. The LCR allows for a partial replacement when an owner of a home or building is unable or unwilling to pay for replacement of the portion of the service line not owned by the water system. 
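The action level test described above, whether the 90th percentile of tap water sample results exceeds 15 parts per billion (0.015 mg/L), can be sketched as follows. This is a simplified illustration using a basic rank-based percentile; the LCR prescribes specific computation rules (including special handling for systems with few samples) that are not reproduced here.

```python
# Simplified sketch of the lead action level check: an exceedance occurs
# when more than 10 percent of tap samples (the 90th percentile result)
# are above 0.015 mg/L. Rank-based percentile only; regulatory rules for
# small sample counts are not modeled.

ACTION_LEVEL_MG_L = 0.015  # 15 parts per billion

def ninetieth_percentile(results_mg_l):
    """Return the sample result at the 90th percentile rank."""
    ordered = sorted(results_mg_l)
    # 1-based rank n * 0.9 converted to a 0-based index.
    index = max(0, int(len(ordered) * 0.9) - 1)
    return ordered[index]

def exceeds_action_level(results_mg_l):
    """True when the 90th percentile result is above the action level."""
    return ninetieth_percentile(results_mg_l) > ACTION_LEVEL_MG_L

# Example: with 10 samples, the 9th-ranked result determines the outcome.
high = [0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.008, 0.010, 0.020, 0.090]
low = [0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.008, 0.010, 0.012, 0.090]
print(exceeds_action_level(high))  # True: 9th-ranked result is 0.020 mg/L
print(exceeds_action_level(low))   # False: 9th-ranked result is 0.012 mg/L
```

Note that in the second example one sample (0.090 mg/L) is well above the action level, yet no exceedance occurs, which mirrors the rule's design: the trigger is the 90th percentile of all samples, not any single result.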
In an October 2016 study, EPA noted that sample requirements under the LCR are complex for many reasons, one reason being that it is the only drinking water regulation in which homeowners or consumers collect the drinking water samples. Water systems are in compliance with the LCR when they follow the various federal requirements for collecting samples, reporting, installing treatments, providing public education, and replacing lead service lines; as well as when they follow any state requirements that are more stringent than the federal requirements. States and EPA can take several different types of enforcement actions when water systems fail to complete requirements in these areas. Sample results that exceed the lead action level do not by themselves constitute violations of the LCR. The SDWA, as amended in 1996, requires EPA to review and revise, as appropriate, each national primary drinking water regulation, including the LCR, at least once every 6 years. The 1991 LCR was revised in 2000 and 2007. EPA initiated an extensive review of the LCR in 2004 after widespread increases in lead levels were detected in the District of Columbia’s water following a water treatment change. EPA promulgated short-term revisions and clarifications in 2007 and has continued working on comprehensive revisions. In 2016, the agency announced that it would revise the LCR and issue proposed revisions in 2017 and a final revised rule in 2019. EPA also released a Lead and Copper Rule Revisions White Paper in 2016 that outlined potential elements of the rule under consideration for revision such as use of corrosion control practices, requirements for collecting samples, and lead service line replacement. The LCR generally requires that water systems submit data to states to demonstrate their compliance with the treatment technique required by the rule. The LCR also requires states to submit some of these data to EPA’s SDWIS/Fed database on a quarterly basis. 
Specifically, states are required to submit the following data to EPA:
for large and medium water systems, all 90th percentile sample results (i.e., sample results that meet, fall below, and exceed the lead action level);
for small water systems, 90th percentile sample results that exceed the lead action level;
on water systems that have been designated as having achieved corrosion control because the state has determined that the source water is minimally corrosive;
on water systems that were required to install corrosion control treatment, source water treatment, and lead service line replacement and have completed the applicable requirements as a result of having sample results exceed the lead action level;
on water systems that have begun the process of replacing lead service lines;
on water systems that have new violations of the LCR; and
on enforcement actions taken in response to violations of the LCR.
For corrosion control, the LCR requires the states to report what EPA refers to as “milestone” data to the SDWIS/Fed database:
data on the status of required actions, such as installing corrosion control treatment, as required, after reporting sample results that exceed the lead action level; and
data on those water systems deemed to have corrosion already under control, such as when the water is minimally corrosive.
The states collect and manage relevant data (including violations and enforcement information) in either a database provided by EPA—known as the Safe Drinking Water Information System/State—or in a data system of their own design. States must then transfer the data from one of those databases into SDWIS/Fed. In 2010, EPA announced that it would redesign SDWIS/Fed. We reported in June 2011 that EPA officials expected this redesign of SDWIS/Fed to expand the amount of data that EPA receives electronically from states. The redesigned database, SDWIS Prime, is expected to be complete by 2018, according to EPA officials.
Generally, the responsibility for reducing lead in drinking water and ensuring safe drinking water overall is shared by EPA, states, and local water systems. As shown in figure 3, EPA is responsible for national implementation of the LCR and setting standards; overseeing states’ implementation of the LCR; providing infrastructure funding, training, and technical assistance to states and water systems; and conducting some enforcement activities. However, the primary responsibility for ensuring that drinking water is free of lead resides with states and local water systems. Generally, states with primary enforcement responsibility initiate enforcement actions against water systems that do not comply with the LCR and other drinking water regulations. However, EPA can also issue orders necessary to protect human health where a contaminant in a public water system presents an imminent and substantial endangerment. According to a 2013 EPA drinking water compliance report, states generally implement and enforce the LCR and other drinking water regulations in the following ways: provide technical assistance through such actions as offering training, holding public information meetings, and lending monitoring equipment; take informal actions such as field visits, reminder letters, telephone calls, and notices of violation; and take formal actions such as issuing citations, administrative orders with or without penalties, civil and criminal cases, and emergency orders. Since 2009, according to an EPA document, the agency’s enforcement strategy, in collaboration with states, has focused on identifying water systems with a history of violations across multiple drinking water rules for enforcement actions in states, territories, and tribal regions. 
To facilitate this strategy, EPA’s headquarters staff are to review data on violations in EPA’s SDWIS/Fed using an Enforcement Targeting Tool to identify systems that merit action by states based on the seriousness of their violations. EPA staff also are to use these data to determine whether water systems are achieving the agency’s national targets for compliance. According to the EPA FY 2014-2018 Strategic Plan, the agency’s goal is for 92 percent of water systems that provide drinking water year-round to meet all applicable health-based drinking water standards by 2018. The available EPA data show sample results, use of corrosion control, violations, and enforcement actions taken for the 68,000 water systems from July 2011 to December 2016, but the data are not complete. The available data reported by states in EPA’s SDWIS/Fed database show at least 2 percent of drinking water systems with sample results exceeding the lead action level (from 2014 to 2016), and at least 10 percent of water systems being out of compliance with the LCR (i.e., having at least one reported violation) as of December 31, 2016. In addition, the state-reported data in SDWIS/Fed show 99 percent of enforcement actions were taken by states, as expected because states generally have primary responsibility for monitoring and enforcement of the SDWA requirements, including the LCR. According to recent EPA assessments, the EPA OIG report, and our January 2006 and June 2011 reports, some of the data in the SDWIS/Fed database are not complete. Specifically, the data are underreported, and therefore, the data available in SDWIS/Fed likely understate the number of sample results, violations, and enforcement actions that actually occurred. In addition, the available EPA data on water systems’ use of corrosion control are not complete. 
We also found that because the LCR does not require states to submit certain data to EPA, EPA’s SDWIS/Fed database does not contain data on key parts of the rule, such as the presence or location of lead pipes—information that water systems use to identify the locations from which they will draw tap samples—or complete sample results for small water systems. EPA’s SDWIS/Fed database contains descriptive data on, for example, drinking water sample results, corrosion control, violations, and enforcement actions, as required by the LCR. The available state-reported data in EPA’s SDWIS/Fed database show that of the approximately 68,000 drinking water systems subject to the LCR, at least 1,430 water systems (2 percent) had 90th percentile sample results that exceeded the lead action level of 15 parts per billion from 2014 to 2016. EPA officials told us that they analyze these sample data over a 3-year period rather than yearly to ensure that the majority of water systems will have submitted sample results. These 1,430 systems serve a population of approximately 3 million people. Of the 1,430 systems with sample results exceeding the lead action level, 258 (18 percent) were schools and day care centers with their own water supplies. As we reported in January 2006, the LCR sample data in SDWIS/Fed were underreported; recent EPA file reviews in selected states found that sample data were not always reported to SDWIS/Fed; and a 2017 EPA Office of Inspector General report indicated that sample data, specifically, are potentially underreported. Appendix II provides additional information about the available EPA data on sample results, reported violations, and enforcement. In addition, some state regulators we interviewed in 2016 told us that homeowners and water systems may take LCR samples improperly, as we discuss later in this report. 
See appendix III for these state regulators’ views on challenges associated with water systems’ implementation of the sample requirements under the LCR. The 2015 Report of the Lead and Copper Working Group to the National Drinking Water Advisory Council noted the importance of corrosion control because it is intended to achieve a water quality that minimizes lead in water. Our analysis of the available state-reported data in EPA’s SDWIS/Fed database on corrosion control from July 2011 to December 2016 shows that the database contained milestone data for 904 water systems on the status of required actions about corrosion control treatment after a sample result exceeded the lead action level, and 1,479 water systems were deemed to have corrosion already under control. In addition, 34 water systems had milestone data in SDWIS/Fed for lead service line replacement. Of the approximately 68,000 water systems subject to the LCR, 1,665 systems (about 2 percent) had milestone data in SDWIS/Fed from July 2011 to December 2016. According to EPA officials, when including milestone data available prior to July 1, 2011, almost half of these systems have submitted the required information regarding corrosion control milestones. Each water system can have up to three types of milestone data (i.e., status of required actions about corrosion control, systems deemed to have corrosion already under control, and lead service line replacement) in SDWIS/Fed. In June 2017, EPA officials said that all water systems subject to the LCR are expected to have data on corrosion control in SDWIS/Fed. However, these officials also said that states may not report the data to SDWIS/Fed because of technical limitations with some state databases and confusion among some state officials about how to report the data to SDWIS/Fed. 
Of the 983 large systems in the SDWIS/Fed database, milestone data were available for 13 from July 2011 to December 2016, and 5 of those systems had sample results exceeding the lead action level at some point over that time period. Of the small and medium water systems for this period that installed corrosion control treatment because their sample results exceeded the lead action level, milestone data were available for 884 water systems. We reported in January 2006 that EPA did not have complete milestone data, including data on corrosion control. Specifically, we reported that EPA had, at that time, collected milestone data for about 28 percent of water systems. At the time of our 2006 report, EPA officials told us that in most instances water systems should have data on corrosion control treatment and that it was more likely the case that states were not reporting the data rather than a case of noncompliance by water systems. We recommended that EPA ensure that data on water systems’ test results, corrective action milestones, and violations were current, accurate, and complete. EPA generally agreed with our recommendation, but has not fully implemented it. In 2016, EPA highlighted its response to our January 2006 recommendation through such efforts as having staff review SDWIS/Fed data for accuracy and timeliness and promoting electronic reporting of the drinking water data states submit to SDWIS/Fed. In addition, EPA headquarters officials said in June 2017 that the agency also worked with the states on reporting corrosion control data by conducting webinars and in-person training that included information about reporting data to SDWIS/Fed. For example, EPA conducted a three-part series of LCR 101 webinars. EPA officials said that the webinars in this series reached over 1,600 attendees with individual webinars ranging from 227 to 551 viewers. 
EPA’s efforts regarding training sound promising, but it may be too early to see the impact of these efforts to work with states on reporting milestone data on corrosion control to SDWIS/Fed. We continue to believe that EPA should take steps to ensure that data, including those on milestones, are current, accurate, and complete. The available data reported by states in EPA’s SDWIS/Fed database show that of the approximately 68,000 drinking water systems subject to the LCR, at least 6,567 water systems (about 10 percent) had at least one reported open violation of the LCR as of December 2016. These 6,567 water systems had a total of at least 12,884 open violations as of December 2016. As we reported in January 2006 and June 2011, the violations data in SDWIS/Fed were underreported. Recent EPA file reviews in selected states found that some violations data were not reported to SDWIS/Fed. LCR violations fall into two categories: (1) monitoring and reporting and (2) treatment technique. Monitoring and reporting violations generally refer to a water system failing to collect samples of drinking water from the tap, within the distribution system, and from source water and failing to report sample results to the states. Treatment technique violations, which EPA considers to be health-based violations, generally refer to a water system failing to take actions as required after water samples exceed the federal lead action level. The two most frequent violations were for not following requirements for (1) monitoring and reporting routine follow-up and (2) initial tap sampling. Taking samples from homes is the only way that water systems, states, and ultimately EPA can obtain the indicators needed to determine whether corrosion control treatment is needed or whether corrosion control treatments already installed are working, in addition to other treatment technique requirements. 
The third most frequent violation, known as a lead consumer notice violation, involves the notification that states or water systems are to provide in writing about the results of the samples taken from homes or buildings consumers occupy, regardless of the presence of lead in the samples taken. These notifications are to provide consumers with information about their drinking water sample results so that they can determine what actions to take to reduce their exposure to lead if lead is present. Of the approximately 68,000 water systems subject to the LCR, approximately 7,000 schools and daycare centers make up about 10 percent. As their missions would indicate, these schools and daycare centers provide drinking water to children, one of the populations most at-risk for adverse health effects from even small amounts of lead. Most of the schools and daycare centers in the EPA data we analyzed were classified as small water systems. EPA data show that schools and daycare centers comprise about 10 percent (664 water systems) of the 6,567 water systems with at least one open violation of the LCR as of December 31, 2016. Much like the overall group of water systems, schools and daycare centers most frequently violated the LCR requirements for (1) monitoring and reporting routine follow-up, (2) initial tap sampling, and (3) lead consumer notification. The available data reported by states in the SDWIS/Fed database show reported information on the enforcement actions taken by states and EPA against water systems that have violated requirements of the LCR. States reported taking 98 percent of the enforcement actions from July 1, 2011, to December 31, 2016, as would be expected given that states generally have primary responsibility for enforcement of the LCR. In our January 2006 report, we found that because sample results, milestones, and violations data for the LCR in SDWIS/Fed were underreported, it was difficult to assess the adequacy of enforcement. 
In June 2011, we found that the enforcement data in SDWIS/Fed were generally incomplete. States and EPA can take a range of enforcement actions, both formal and informal. Formal enforcement actions include issuing state administrative orders with or without penalties, filing state or federal civil and criminal cases, and issuing emergency orders. Informal enforcement actions include reminder notices of a violation, formal notices of violation, public notification requests, and state referrals of cases to EPA. According to a 2013 EPA compliance report, the number of enforcement actions in a year does not necessarily correlate with the number of violations that are reported in the same year. The two most frequently reported enforcement actions taken were informal state violation/reminder notice, which informs water systems that the system has open violations, and state public notification requested, in which the state requests a copy of the information water systems sent to homeowners. Most of the EPA officials we interviewed in the 10 regional offices told us that states primarily rely on informal actions and technical assistance and training because they are the most effective means of getting water systems to comply with regulations. The LCR does not require states to submit data to EPA’s SDWIS/Fed database on (1) the location of lead pipes or (2) all sample results for small water systems. As a result, EPA does not have available data on either the location of lead pipes or complete sample results for small water systems. Water systems were required to collect information on the presence of lead pipes when the LCR was promulgated in 1991, but there is currently no requirement that this information be reported to EPA. States are to submit to SDWIS/Fed on a quarterly basis all 90th percentile sample results for large and medium water systems (including those that exceed the lead action level). 
However, for small water systems, states are required to submit data to SDWIS/Fed only for those 90th percentile sample results that exceed the lead action level. As a result, sample results for small water systems are not complete in SDWIS/Fed. When the LCR was promulgated in 1991, all drinking water systems were required to collect information about the infrastructure that delivered water to customers, including any known lead pipes and lead service lines. The purpose of this effort, referred to as a materials evaluation, was to identify locations that may have been particularly susceptible to high lead or copper concentrations, which would become the pool of targeted sample sites. Water systems that must replace their lead service lines under the LCR also must report their materials evaluations to their respective states. In addition, a 1980 EPA regulation required community water systems to identify, among other things, whether lead from piping, solder, caulking, interior lining of distribution mains, alloys, and home plumbing was present in their distribution system and report this information to the state. However, the LCR does not require states to report information on known lead pipes and service lines to EPA’s SDWIS/Fed database. As a result, the agency may not have information at the national level about the lead infrastructure in the country. In February 2016, in light of the events in Flint, Michigan, and other U.S. cities, EPA asked states to collect information about the locations of lead service lines and publish the information on local or state websites to better inform the public. 
In a July 2016 letter to the Environmental Council of States and the Association of State and Territorial Health Officials, EPA noted that some states had successfully taken action to fulfill the request, citing (1) water systems with online searchable databases that provide information on lead service lines and (2) several states that were requiring water systems to update their inventories of lead service lines. In the letter, EPA also noted that many states identified challenges in identifying lead service lines but that improving knowledge of lead service lines is important to ensure that water systems are (1) collecting drinking water samples from valid high-risk locations, as required under the LCR, (2) managing the risks associated with disruption of lead service lines, and (3) providing information to customers on how to assess and mitigate risks posed by lead. In written responses to EPA’s letter, most (37) of the 50 states (or primacy agencies) indicated that they had fulfilled or intended to fulfill EPA’s request to work with water systems to collect and make public information about lead pipes. Four states indicated that they were considering EPA’s request. However, 9 states indicated that they would not or did not intend to fulfill EPA’s request because of challenges in finding the historical documentation about lead pipes used to create original sample plans or dedicating staff resources to do so. In addition, in their responses to EPA’s letter, 13 states noted that the LCR does not require states to maintain information about water systems’ lead pipes or to provide the information to the public. 
EPA stated in its 2016 Lead and Copper Rule Revisions White Paper that it was considering a proposal in the upcoming revision to the LCR for water systems to update their information on lead service lines and share the results of their “materials evaluation.” In June 2017, EPA headquarters officials said that the agency was evaluating all options outlined in its 2016 white paper as well as recommendations related to lead pipes by other stakeholders. According to EPA technical guidance on corrosion control, knowledge about lead service lines is needed for studies of corrosion control treatments. In addition, the National Drinking Water Advisory Council stated in its 2015 final report that knowledge about the location of lead service lines is essential to ensuring replacement and outreach to customers who are most likely to have a lead service line. We reported in March 2013 that, as the nation faces limited budgets and funding for federal programs, the importance of targeting federal funds to communities with the greatest need and spending funds efficiently increases. For example, the Water Infrastructure Improvements for the Nation Act, enacted in December 2016, directs EPA to establish a grant program for reducing the lead in drinking water by, among other things, replacing publicly owned lead service lines and assisting homeowners with replacing the lead service lines on their property. In addition, EPA’s 2016 action plan identifies the reduction of lead risks as a priority area. If EPA required, in the upcoming revision of the LCR, that states report the available information about lead pipes in its SDWIS/Fed database (or in future redesigns, such as SDWIS Prime), EPA and congressional decision makers would have important information at the national level on what is known about lead infrastructure in the country, thereby supporting the agency’s oversight role. 
In a 2016 report on how science and technology can address drinking water challenges, the President’s Council of Advisors on Science and Technology stated that sample data are essential for evaluating the performance of a drinking water system. While the LCR requires small water systems to report all 90th percentile sample results (i.e., results that meet, fall below, and exceed the lead action level) to the states, it does not require the states to report all of this information to EPA through the SDWIS/Fed database. EPA headquarters officials said that the agency had not required states to submit the results for all small systems due to the reporting burden on states. According to EPA’s reporting guidance for states, however, reporting all sample results to the SDWIS/Fed database for small water systems that do not exceed the lead action level is encouraged and will be accepted. EPA officials told us that SDWIS/Fed contained complete sample results for about 20,000 of the approximately 58,000 small water systems, or about 30 percent of the 68,000 water systems. Officials we interviewed in 1 of EPA’s 10 regional offices said that the lack of all 90th percentile sample results for small systems prevents the agency from observing such systems in SDWIS/Fed. In June 2017, EPA headquarters officials said that having all 90th percentile sample results for small systems would give the agency a more complete national picture of lead in drinking water. According to information on EPA’s website, small water systems can face unique managerial, financial, and operational challenges in consistently providing drinking water that meets EPA standards and requirements. In 2016, EPA’s Office of Inspector General reported that small water systems are less likely to have the technical, managerial, and financial capacity to conduct actions that would ensure safe drinking water. 
The SDWA requires that EPA assist states in ensuring that water systems acquire and maintain technical, managerial, and financial capacity. In addition, the SDWA also authorizes EPA to provide technical assistance to small public water systems to enable such systems to achieve and maintain compliance with applicable national primary drinking water regulations, including the LCR. Because it does not have complete 90th percentile sample results on small water systems, EPA does not have information on how such systems are managing the reduction of lead in their drinking water. Small systems represent the majority of water systems reporting samples that have exceeded the lead action level, but states are not required to submit all 90th percentile sample results for small systems in the SDWIS/Fed database; this would require a revision to EPA’s regulations. By requiring, in the upcoming LCR revision, that states report all 90th percentile sample results for small systems in the SDWIS/Fed database, EPA would have data to track the changes in lead levels over time among small systems and would be better positioned to assist states in early intervention for small water systems that are near the lead action level where appropriate. In June 2017, EPA officials said that as states move toward more modernized data flows using electronic reporting and SDWIS Prime, the burden for reporting should be significantly lowered. EPA officials said that they analyze data in their SDWIS/Fed database and meet quarterly with state regulators to monitor compliance across all drinking water rules and that, in the last year, in response to the events in Flint, Michigan, they have increased their use of these data to monitor compliance and address implementation of the LCR. 
EPA applies its Enforcement Targeting Tool to the violations data associated with the more than 90 drinking water contaminants regulated under SDWA for the purpose of identifying systems that merit action by states based on the seriousness of their violations. Specifically, the Enforcement Targeting Tool assigns a score to each water system based on, among other criteria, the types of violations and number of unresolved violations over the previous 5-year period. The Enforcement Targeting Tool assigns higher scores to health-based violations, such as treatment technique violations. Water systems whose scores meet or exceed a certain threshold are given higher enforcement priority for states (and EPA, if necessary). EPA officials we interviewed in all 10 of the regional offices said that they meet quarterly with state regulators to discuss the results generated by the Enforcement Targeting Tool and generally considered it to be a success. EPA headquarters officials agreed that the Enforcement Targeting Tool was a success, even with the agency’s challenges with the SDWIS/Fed data, including using data that are not always complete and accurate. However, these officials also told us that the Enforcement Targeting Tool was not designed for and therefore would not be appropriate for monitoring compliance with any single regulation, including the LCR. In April 2017, EPA headquarters officials told us that as of January 2017, the Enforcement Targeting Tool includes information on water systems’ most recent 90th percentile sample result and the number of 90th percentile sample results exceeding the lead action level over the previous 5-year time period. EPA officials told us that they also conduct on-site file reviews of one to two states each year. File reviews involve regional staff comparing information on a sample of water systems in states’ databases with that in SDWIS/Fed to identify any discrepancies and to assess states’ compliance decisions. 
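As a rough illustration of this kind of targeting, the sketch below scores a system by summing weights for its unresolved violations within a 5-year window, weighting health-based (treatment technique) violations more heavily. The point values, category names, and threshold are hypothetical placeholders, not EPA's actual ETT scoring:

```python
from datetime import date

# Hypothetical weights and threshold; EPA's actual ETT values differ.
POINTS = {"health_based": 10, "monitoring_reporting": 1}
PRIORITY_THRESHOLD = 11


def ett_score(violations, today=date(2017, 1, 1)):
    """Sum points for unresolved violations in the previous 5-year window."""
    window_start = date(today.year - 5, today.month, today.day)
    return sum(POINTS[v["type"]]
               for v in violations
               if v["open"] and v["date"] >= window_start)


violations = [
    {"type": "health_based", "date": date(2015, 6, 1), "open": True},
    {"type": "monitoring_reporting", "date": date(2016, 3, 1), "open": True},
    {"type": "monitoring_reporting", "date": date(2010, 3, 1), "open": True},  # outside window
]
score = ett_score(violations)           # 11: only the two recent violations count
priority = score >= PRIORITY_THRESHOLD  # True: system would merit state action
```

Systems whose scores meet or exceed the threshold would surface in the quarterly reviews with state regulators described above.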
EPA headquarters officials told us that the agency developed a protocol for conducting file reviews and provided training on this protocol for the regions. Staff have discretion on how to prioritize the states in their regions. These file reviews cover all of the drinking water regulations, which allows them to also periodically assess how well states are implementing the LCR. According to EPA officials, these file reviews replaced the data verification audits in 2011; the audits, which were discontinued in 2010, were designed to be generalizable to all water systems and involved contractors comprehensively reviewing states’ water system inventories and violations and enforcement data and comparing them against the information in SDWIS/Fed. Agency officials said that the agency can no longer conduct these audits due to a lack of resources. EPA headquarters officials told us that the agency had begun using SDWIS/Fed data, in the last year, in response to the discovery of drinking water contaminated with elevated levels of lead in Flint, Michigan, as part of a two-pronged approach for reviewing states’ and water systems’ implementation of the LCR. The first part of EPA’s approach was to identify all of the water systems that reported sample results exceeding the federal action level from 2013 to 2016. EPA officials said that they requested that state officials provide updates on the status of each of the approximately 2,400 water systems identified as reporting such results. The purpose of this approach, according to EPA officials, was to determine whether the states and water systems were properly following the LCR’s requirements after a water system’s sample results exceeded the federal lead action level. In addition, the approach would allow, if necessary, states and EPA to have an opportunity for early intervention. 
EPA officials said that previously they had not systematically and uniformly analyzed all of the water systems in their database with sample results that exceed the federal lead action level or asked states, at any one time, to provide updates on all of the water systems with sample results exceeding the action level. Instead, EPA headquarters officials said that staff in the regional offices generally had worked with individual states on individual cases of water systems with sample results exceeding the action level as a part of the agency’s routine oversight efforts. EPA headquarters officials said that one outcome of their effort since the discovery in Flint, Michigan, was “lessons learned” about the importance of knowing where lead service lines are located and the need for states to focus more attention on small water systems and schools with their own water supplies. EPA officials we interviewed in some of the 10 regional offices said that meetings with state officials to discuss the water systems that had exceeded the lead action level had been beneficial because agency officials gained a better understanding of how states understood and implemented the requirements of the LCR. However, officials in 3 of the 10 regional offices said that they would ask states to provide these updates less frequently because of limited staff resources. EPA headquarters officials told us that an additional outcome of this approach was insight, for EPA staff, into the types of training state regulators may need about the implementation of LCR requirements. The second part of EPA’s approach, according to headquarters officials, was to review state protocols and practices against all of the requirements of the LCR to ensure that states were implementing the rule, including protocols and procedures for using corrosion control treatments. 
After reviewing state protocols and practices, EPA requested that states take such actions as providing information on their websites and documenting protocols and practices for greater transparency. In addition, EPA staff in the 10 regional offices conducted meetings with the state officials in their regions. Some of these EPA officials also told us that the agency determined that states generally were implementing the LCR appropriately. However, EPA identified weaknesses among states and water systems with identifying lead pipes and understanding the requirements for installing and maintaining corrosion control. In response, EPA officials told us that they updated guidance to states and water systems and offered training and written technical guidance on implementing corrosion control. Specifically, EPA officials said, they offered in-person training for state regulators in each of the 10 EPA regions on implementing the corrosion control requirements of the LCR. Through discussions with state regulators, we identified multiple factors that may contribute to water systems’ noncompliance with the LCR. To determine whether such factors were associated with a higher likelihood of having a reported violation of the LCR, we conducted a statistical analysis that used currently available EPA data to calculate a system’s likelihood of a violation from selected factors, such as the size of the population served and the source water. We found that incorporating multiple factors in the analysis may help identify water systems with a higher likelihood of violating the LCR. Based on our analysis of transcripts of discussion groups, state regulators representing 41 states and 1 territory identified 29 factors that may contribute to water systems’ noncompliance with the LCR. We also reviewed 31 studies and summarized the factors the authors identified. Table 1 identifies the 10 factors state regulators most frequently identified. 
During our discussion groups, state regulators provided examples of how these factors contributed to noncompliance with the rule. For example, regulators in 37 states said that the size of the population served by water systems may influence noncompliance with the LCR. Regulators in 28 of the 37 states said that small systems are more likely to have drinking water sample results that exceed the federal action level, to be in noncompliance, or to face challenges that may contribute to noncompliance. Regulators in 5 states explained that this may be because small systems are generally less likely to have operators with the knowledge to properly collect samples or manage corrosion control treatment. Regulators in 28 states said that the required LCR process for collecting drinking water samples to test for lead levels may contribute to noncompliance. Regulators in 19 of these 28 states said that collecting the required number of samples is a challenge for water systems that can lead to noncompliance, because homeowners are frequently not willing to collect samples or, if they agree to collect samples, often collect them improperly. For example, homeowners may sample from an infrequently used faucet (e.g., outside spigot) instead of the required drinking water tap. Regulators in 20 states also described how the type of water system can lead to noncompliance. For example, they said that water systems for which water management and treatment are not the primary missions, such as schools, mobile home parks, and other entities, have challenges complying with the LCR. These regulators also told us that the presence of multiple factors could, together, contribute to violations of the LCR. For example, a regulator in one state said that the presence of lead in the pipes, combined with corrosive water, could lead to sample results that exceed the federal lead action level for a water system. 
A 90th percentile sample result that exceeds the lead action level is not by itself a violation. However, if the same water system did not conduct the required corrosion control treatment study for any reason, including because it lacked the financial capacity to pay for the study, the system would be in violation of the LCR. Appendix III provides information on all of the factors that state regulators in the 41 states and 1 territory identified in our discussions as well as examples of how those factors, individually and together, may contribute to violations of the LCR. The 31 academic studies we reviewed associated certain factors with elevated concentrations of lead in public drinking water, human exposure to lead in drinking water, or violations of drinking water laws and regulations. These studies identified the potential effects of, among other factors, the presence of lead in pipes or lead solder within the water system's pipes; natural disturbances within drinking water pipes, such as stagnant or soft water; operator actions to address lead in drinking water, such as the use of corrosion control to decrease the presence of lead and the use of chemical treatments to decrease the presence of other contaminants that may increase the presence of lead; a water system's capacity to address existing lead challenges, such as the size of the population it serves and whether the system is publicly or privately owned; and state and local policies designed to reduce drinking water violations or human exposure to lead in water. Our interviews with state regulators and review of academic studies suggest that certain factors could indicate whether water systems are at a higher likelihood for having a reported violation of the LCR. 
We selected four system characteristics that were consistent with the factors reported by state regulators in discussion groups and were available in SDWIS/Fed to conduct a statistical analysis: the population served by (or size of) the drinking water system; whether the drinking water system was publicly or privately owned; whether the drinking water system used groundwater or surface water; and whether the drinking water system was classified as a community water system or a non-transient non-community water system. We also included the factor of whether a system had sample results that exceeded the lead action level. SDWIS/Fed does not include data on such factors as the presence of lead service lines or technical, managerial, and financial capacity. We sought to develop a nationwide statistical model, referred to as a logistic regression analysis, but were unable to do so. A logistic regression analysis can identify factors that are associated with a violation and can estimate a drinking water system's likelihood of a violation based on these factors. We have previously found that regression analysis can identify entities, regulated by a federal program, that pose a higher likelihood for a particular outcome. However, during our review of the reliability of EPA's data on violations, we could not verify that the limitations in the completeness of the data identified in our June 2011 report had been sufficiently addressed nationwide. Specifically, in June 2011, we found that EPA had not been able, among other things, to resume the comprehensive and routine data verification audits that would provide it with current information on the completeness of the data states provide to SDWIS/Fed. As a result, in June 2011, we recommended that EPA resume data verification audits to routinely evaluate the quality of selected drinking water data on health-based and monitoring violations that the states provide to EPA. 
These audits should also evaluate the quality of data on the enforcement actions that states and other primacy agencies have taken to correct violations. EPA partially agreed with our recommendation and stated that it has found that data verification audits provide valuable information on data completeness but did not commit to conducting such audits beyond 2011. Instead, EPA said that until the next generation of SDWIS (SDWIS Prime) is deployed, thus enabling the agency to view compliance monitoring data and compliance determinations directly, it will consider using data verification audits to evaluate data quality. As of October 2016, EPA reported that it has not conducted another data verification audit. Because of the limitations of using SDWIS/Fed data to conduct a nationwide analysis, we sought to use such data to conduct an analysis for individual states to determine whether factors could predict the likelihood that a water system would violate the LCR. As such, we used data from Ohio and Texas to examine the potential for developing a statistical analysis to identify drinking water systems at higher likelihood of having a reported violation. EPA found few or no discrepancies between the LCR data in these state systems and in SDWIS/Fed for the time period of our statistical analysis, 2013 to 2016. The results of our analysis are not generalizable to other states. To conduct an analysis for the two states, we developed a series of logistic regression models for these states using (1) LCR violations data for Ohio and Texas in SDWIS/Fed for 2013 and 2014 and (2) the four factors for which data were available in SDWIS/Fed (size of the population served, ownership, source water, and water system type). Our models estimated the likelihood that a water system in those two states would have a reported violation of the LCR based on these factors. 
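To make the modeling approach described above concrete, the following is a minimal, self-contained sketch of a logistic regression of violations on system factors, with an out-of-time check against a later period. All data here are synthetic, the assumed coefficients are invented, and the factor names only mirror the SDWIS/Fed characteristics; this is not EPA's or GAO's actual model.

```python
import math
import random

random.seed(1)

# Synthetic water systems (illustrative only -- not EPA data). The factors
# mirror the SDWIS/Fed characteristics plus a prior action-level exceedance.
def make_system():
    pop = random.choice([100, 500, 1000, 10000, 100000])
    return {
        "log_pop": math.log10(pop),
        "groundwater": random.random() < 0.7,
        "private": random.random() < 0.4,
        "prior_exceedance": random.random() < 0.15,
    }

def true_violation_prob(s):
    # Assumed relationship for this sketch: smaller systems and prior
    # exceedances raise the likelihood of a reported violation.
    z = 1.5 - 0.8 * s["log_pop"] + 1.2 * s["prior_exceedance"] + 0.3 * s["private"]
    return 1.0 / (1.0 + math.exp(-z))

systems = [make_system() for _ in range(600)]
y_train = [random.random() < true_violation_prob(s) for s in systems]  # "2013-14"
y_test = [random.random() < true_violation_prob(s) for s in systems]   # "2015-16"

def features(s):
    return [1.0, s["log_pop"], float(s["groundwater"]),
            float(s["private"]), float(s["prior_exceedance"])]

# Fit a logistic regression by full-batch gradient ascent on the
# log-likelihood (no external statistics library required).
X = [features(s) for s in systems]
w = [0.0] * 5
for _ in range(1000):
    grad = [0.0] * 5
    for xi, yi in zip(X, y_train):
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        for j in range(5):
            grad[j] += (yi - p) * xi[j]
    w = [wj + 0.3 * gj / len(X) for wj, gj in zip(w, grad)]

# Out-of-time check: do systems the model ranks as higher risk actually
# violate at a higher rate in the later period?
pred = [1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi)))) for xi in X]
order = sorted(range(len(X)), key=lambda i: pred[i], reverse=True)
half = len(X) // 2
hi_rate = sum(y_test[i] for i in order[:half]) / half
lo_rate = sum(y_test[i] for i in order[half:]) / half
print(f"violation rate, high predicted-risk half: {hi_rate:.2f}")
print(f"violation rate, low predicted-risk half:  {lo_rate:.2f}")
```

In this sketch, a clearly higher violation rate in the high predicted-risk half is the analogue of the validation result described for the Ohio and Texas models.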
We found that water systems with certain factors had a higher likelihood of having a reported violation of the LCR than water systems without those factors. For example, in both states, a water system serving 100 people was more likely to have a reported violation of the LCR than a water system serving 1,000 people. In addition, systems with a previous sample result that exceeded the lead action level were more likely to have a reported violation of the LCR than systems without a previous sample result that exceeded the lead action level. We then tested the ability of our models to predict subsequent rates of having reported violations. Specifically, we compared the estimates from our models to violations that were actually reported in SDWIS/Fed in 2015 and 2016. We found that water systems that we identified as having higher likelihoods of having a reported violation, based on our models, had significantly higher rates of reported violations in 2015 and 2016. The results of our analysis indicate that multiple factors, in addition to whether a system had sample results that exceeded the lead action level, could be used to predict water systems with a higher likelihood of having a reported violation of the LCR. Our analysis suggests that a statistical analysis of EPA data could be used to identify water systems with a higher likelihood of having a reported violation of the LCR. However, we identified two key limitations, among others, based on the state of the data in SDWIS/Fed as of December 2016. The first was the quality of the data for the purposes of conducting an analysis. We could not be confident in the specific results of a nationwide or, for some states, a state-specific analysis, because we did not have the necessary assurances of the accuracy and completeness of the SDWIS/Fed data, issues about which we previously reported in January 2006 and June 2011. 
EPA headquarters officials told us in June 2011 and April 2016 that their upcoming SDWIS/Fed upgrade, SDWIS Prime, could give the agency direct access to state data. Having complete and accurate data for all states or a nationally representative sample of states would allow for a nationwide analysis. The second limitation was that data are not available for many of the factors identified by state regulators that may contribute to water systems' noncompliance with the LCR. Our analysis was limited to those four factors for which states submit data to EPA's SDWIS/Fed. Because data were unavailable for all potentially relevant factors, we were unable to include information on the presence of lead pipes, lack of financial capacity, and lack of technical capacity. EPA headquarters officials told us that they were considering the development of indicators of capacity. For example, these officials said that potential indicators suggesting a drinking water system is challenged by capacity are that the drinking water system (1) has not raised rates in 20 years; (2) has not recently used asset management; or (3) has experienced difficulty retaining trained operators. Data on the presence of lead pipes, financial capacity, technical capacity, and other factors may allow for stronger logistic regression models that more accurately identify water systems with a higher likelihood for violations. Appendix V provides a technical description of the statistical analysis we conducted. According to EPA, the agency promulgated the LCR to protect public health by minimizing the levels of lead in the drinking water supply. EPA's current approach for oversight of the LCR targets water systems with sample results that exceed the lead action level. This approach is reasonable because water systems that exceed the action level have a known and documented lead exposure risk and are required under the LCR to take actions that are considered health-based. 
This approach, however, primarily incorporates one factor—sample results that exceed the lead action level—and does not include the potential of having reported violations across all of the requirements of the LCR. In addition, EPA officials we interviewed in 3 of the 10 regional offices said that they do not have the resources to sustain the agency's current approach. Under federal standards for internal control, management should identify, analyze, and respond to risks related to achieving the defined objectives. Although EPA may not have the resources to continue the use of its current approach of following up on all sample results that exceed the lead action level, our analysis illustrates that EPA collects data that, where complete and accurate, could be incorporated into a risk-based analysis. For example, such an analysis could be used in individual states or geographical areas, while EPA is taking steps to improve its data and implement SDWIS Prime. A statistical, risk-based analysis, whether it is used for individual states or nationwide, may provide EPA with an additional tool by which it may be able to efficiently target its limited resources for oversight of water systems and meet its goal of reducing the risk of lead exposure. By developing a statistical analysis that incorporates multiple factors—including those currently in SDWIS/Fed and others such as the presence of lead pipes and the use of corrosion control—to identify water systems that might pose a higher likelihood for violating the LCR once complete violations data are obtained such as through SDWIS Prime, EPA could supplement its current efforts to better target its oversight to the water systems that present a higher risk of violating the LCR. EPA has taken several actions to increase transparency about lead hazards, focus on water systems' sample results over the federal lead action level, and ensure a better understanding of how states and water systems interpret and implement the LCR. 
However, most states are not submitting data to the SDWIS/Fed database on water systems' use of corrosion control as required by the LCR. We continue to believe that EPA should take actions to address our 2006 recommendation. Further, if EPA required states to report the available information about lead pipes in its SDWIS/Fed database nationally, EPA and congressional decision makers would have important information at the national level about lead infrastructure, thereby supporting the agency in its oversight role. The LCR does not require states to submit data to EPA's SDWIS/Fed database on all 90th percentile sample results for small water systems, only to provide sample results that exceed the lead action level. EPA has long acknowledged the challenges experienced by small water systems, as evidenced in the data for samples that exceed the lead action level and violations for taking samples as required and reporting sample results. The upcoming revision of the LCR provides an opportunity for EPA to require states to report all 90th percentile sample results for small systems. By doing so, EPA would have data to track the changes in lead levels over time among small systems and would be better positioned to assist states in early intervention, where appropriate, for small water systems that are near the lead action level. EPA also has an opportunity to enhance its oversight of the LCR by using statistical analyses to analyze those data that it currently collects and has determined to be complete. With the LCR applying to about 68,000 water systems across the country (or approximately 45 percent of all drinking water systems), it is important to target limited resources to those water systems that pose the highest likelihood of a violation. 
By developing a statistical analysis that incorporates multiple factors—including those currently in SDWIS/Fed and others such as the presence of lead pipes and the use of corrosion control—to identify water systems that might pose a higher likelihood for violating the LCR, EPA could supplement its current efforts and better target its oversight to the water systems that present a higher likelihood of violating the LCR, particularly when complete violations data are more readily available through upgrades, such as SDWIS Prime. We are making the following three recommendations to EPA:

The Assistant Administrator for Water of EPA's Office of Water should require states to report available information about lead pipes to EPA's SDWIS/Fed (or a future redesign such as SDWIS Prime) database in its upcoming revision of the LCR. (Recommendation 1)

The Assistant Administrator for Water of EPA's Office of Water should require states to report all 90th percentile sample results for small water systems to EPA's SDWIS/Fed (or a future redesign such as SDWIS Prime) database in its upcoming revision of the LCR. (Recommendation 2)

The Assistant Administrator for Water of EPA's Office of Water and the Assistant Administrator of EPA's Office of Enforcement and Compliance Assurance should develop a statistical analysis that incorporates multiple factors—including those currently in SDWIS/Fed and others such as the presence of lead pipes and the use of corrosion control—to identify water systems that might pose a higher likelihood for violating the LCR once complete violations data are obtained, such as through SDWIS Prime. (Recommendation 3)

We provided a draft of this report to EPA for review and comment. In its written comments, reproduced in appendix VII, EPA stated that it generally agreed with all three of our recommendations and with the importance of ensuring that the agency has the information needed for effective oversight of the drinking water programs. 
EPA also provided technical comments, which we incorporated as appropriate. EPA stated that our first two recommendations relate to the LCR revisions: (1) report available information about lead pipes to EPA's database and (2) report all 90th percentile sample results for small water systems to EPA's SDWIS/Fed (or a future redesign such as SDWIS Prime). As a result, EPA said that it would consider our recommendations along with those of other stakeholders as the agency continues to support the development of the proposed LCR for publication in the Federal Register and follows the public review and comment process in 2018. In addition, EPA said that the agency would continue to work with states to develop SDWIS Prime and another electronic reporting tool, which will facilitate electronic reporting and, in turn, increase data accuracy and completeness. In response to our third recommendation, EPA stated that it agrees with the concept of developing a national statistical analysis that could identify water systems with a higher likelihood of violating the LCR and that the agency previously tried to build a similar tool but faced challenges due to state-to-state variation in the relationships between selected factors and violations. EPA also said that while developing a national tool would be a challenge, it would be beneficial to both the agency and state primacy agencies. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII. This report examines the issue of elevated lead in drinking water and the Environmental Protection Agency's (EPA) use of compliance data for oversight of the Lead and Copper Rule (LCR). Our objectives were to examine (1) what the available EPA data show about compliance with and enforcement of the LCR among water systems, including schools; (2) how EPA uses these data to monitor compliance; and (3) factors, if any, that may contribute to water systems' noncompliance with the LCR. We compared our evidence on EPA's use of these data for oversight of the LCR to Standards for Internal Control in the Federal Government. According to these standards, internal control is a process effected by an entity's oversight body, management, and other personnel that provides reasonable assurance that the objectives of an entity will be achieved. An effective internal control system increases the likelihood that an entity will achieve its objectives. For this review, we used the standard for one of the five components of internal control—risk assessment—as criteria. To examine what the EPA data show about reported compliance and enforcement, we reviewed LCR data in EPA's Safe Drinking Water Information System (SDWIS/Fed) for the time period July 1, 2011, to December 31, 2016. We chose this time period because it provided the most recent history of available compliance data without a change in the regulations at the time of our analysis. The LCR data contained information on 67,581 active community water systems and non-transient non-community water systems, including those that were schools or daycare centers with their own water supply. Table 2 provides the water systems, by type and size, included in our analysis. The LCR divides water systems into three broad categories: small, medium, and large. 
Size is a factor in determining the number of samples that must be collected as well as the applicability and timing of some of the LCR requirements. We reviewed the available data on corrosion control, drinking water sample results, violations of the requirements of the LCR, and state and EPA enforcement actions. We also described the data by population served/size, whether the water system was a school or daycare center, and whether the water system was a community water system or a non-transient non-community water system, among other factors. We analyzed data on sample results for a 3-year time period (from January 2014 to December 2016) and for a 5 1/2-year period (from July 2011 to December 2016). EPA officials told us that they analyze sample data over a 3-year period rather than yearly to ensure that the majority of water systems will have submitted sample results. When presenting a comparison of the sample data and the milestone data on corrosion control, we used the 5 1/2-year period for both sets of data. For violations data, we presented open violations as of December 2016. Violations are considered open when the state has not determined that a water system is in compliance with the specific requirement for which it received the violation. Finally, we presented data on enforcement actions for a 5 1/2-year period (July 2011 through December 2016) to ensure that we provided the most complete picture of the range of state and federal actions taken and to avoid comparisons with the violations data. According to a 2013 EPA compliance report, enforcement data, in any one year, do not necessarily correlate with violations data. In addition, the compliance report states that enforcement actions can be initiated against violations that occurred in a previous year; one enforcement action may address numerous violations at the same system; and it can take several years for a system to return to compliance. 
We reviewed the data available in the SDWIS/Fed database and the compliance requirements in the LCR to evaluate those aspects of the LCR for which implementation data were available. We interviewed officials from EPA's Office of Water and Office of Enforcement and Compliance Assurance on the reliability, completeness, and accuracy of LCR data in SDWIS/Fed. In addition, we reviewed EPA data reliability assessments, recent file reviews for selected states, a 2017 EPA OIG report on the reliability of SDWIS/Fed sample data, data verification reports, and past GAO reports on the reliability of the data in SDWIS/Fed. For example, EPA's file reviews in some states found that not all violations data were reported to SDWIS/Fed, which could lead to undercounting. In addition, some state regulators told us that samples may be collected incorrectly by some homeowners, which could lead to inaccurate sample results. EPA has stated on its website that the agency acknowledges challenges related to the data in SDWIS/Fed, specifically underreporting of some data by states. GAO has also reported on EPA's challenges with SDWIS/Fed. Based on this, the compliance data in SDWIS/Fed likely underreport the actual number of sample results that exceed the lead action level, milestones, violations, and enforcement actions, which we note in this report. Because of the incompleteness of reported data on sample results, violations, and enforcement actions, and because of concerns raised by state officials about sample data, we found the data to be of undetermined reliability. For this review, we describe the data about water systems' compliance with the LCR and EPA's enforcement actions as they are reported in SDWIS/Fed for the purpose of providing a current assessment of EPA's use of the data. To examine how EPA uses LCR data to monitor compliance, we conducted semistructured interviews with EPA officials. 
We used a standard set of questions to interview officials in EPA's headquarters and in each of the 10 regional offices. Our standard set of open-ended questions for EPA's 10 regional offices asked about state actions responding to EPA's requests about, among other things, implementation of the LCR, the use of SDWIS/Fed data, enforcement tools, and compliance with the LCR among water systems and schools. We conducted in-person interviews with officials responsible for monitoring compliance in states within EPA regions 1, 2, 3, 4, 5, and 7. We identified these regions based on a 2016 survey that estimated that these regions have the highest number of lead service lines. We spoke with officials in EPA regions 6, 8, 9, and 10 on the telephone. Table 3 provides a list of the EPA regions and the states under the regulatory jurisdiction of those regions. Our in-person interviews with officials in EPA regions 1 through 5 and 7 were in offices located in Boston, Massachusetts; New York, New York; Philadelphia, Pennsylvania; Atlanta, Georgia; Chicago, Illinois; and Lenexa, Kansas, respectively. In these cities, we also met with state primacy agencies and local water systems and other local officials, when possible, to obtain examples of compliance and enforcement practices and implementation challenges. Specifically, we met with state drinking water officials in Massachusetts and Georgia. We met with officials representing local water systems in Atlanta, Boston, New York, Chicago, and Kansas City, Missouri. In total, we held 10 interviews with EPA staff in the regional offices and 7 interviews with state and local officials in the cities we visited. We also reviewed EPA policy documents that outlined the agency's enforcement approach and documents related to EPA's request that states take certain actions following the events in Flint, Michigan. 
Finally, we reviewed federal regulations; EPA guidance to states and water systems on how to implement the LCR; the 2016 action plan; information on what constitutes a violation of the LCR; action plans; and other relevant documents. To identify the factors that may influence water systems' risk of noncompliance with the LCR, we conducted a content analysis of information provided by state regulators in discussion groups. To assess whether selected factors available in SDWIS/Fed could be used to predict reported violations, we conducted a statistical analysis of EPA data to develop an illustrative model. We conducted a literature review to identify factors associated with elevated concentrations of lead in public drinking water, human exposure to lead in drinking water, or violations of drinking water laws and regulations. Discussion groups with state regulators. We conducted discussion groups with a nonprobability sample of state drinking water regulators to contribute to our understanding of the potential factors that may influence noncompliance with the LCR. We invited regulators from all states and territories to participate via email. In total, we conducted eight 1-hour discussion groups over the telephone in September and October 2016. Regulators representing 41 states and 1 territory participated in these discussion groups. Each discussion group included from 2 to 8 states or territories, and each state or territory had a primary designated spokesperson. During each discussion group, the GAO moderator asked participants to list one or two factors that, in their experience, most strongly influence a water system's ability to comply with the LCR. Each state provided a list of factors. The moderator then asked participants to elaborate on how the factors reported could influence compliance. When necessary, the moderator asked probing questions to further clarify participants' comments. 
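The frequency tallying behind counts like those in table 1 can be sketched as follows. The state names and factor lists below are invented for illustration; the actual analysis coded full transcripts with qualitative analysis software and two independent coders.

```python
from collections import Counter

# Invented example: the factors each state's spokesperson listed during
# its discussion group (not the actual transcripts).
state_factors = {
    "State A": ["small system size", "sampling process", "lead pipes"],
    "State B": ["small system size", "financial capacity"],
    "State C": ["sampling process", "corrosion control knowledge"],
    "State D": ["small system size", "sampling process"],
}

# Count the number of states that mentioned each factor, mirroring how
# the most frequently reported factors were identified across groups.
counts = Counter()
for factors in state_factors.values():
    counts.update(set(factors))  # at most one count per state per factor

for factor, n_states in counts.most_common():
    print(f"{factor}: {n_states} state(s)")
```

Counting each factor at most once per state is what allows statements of the form "regulators in 37 states said that the size of the population served may influence noncompliance."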
Two or three analysts transcribed each session and combined and reconciled notes to develop transcripts for each of the discussion groups. We conducted a content analysis of the transcripts from the eight discussion groups to identify the factors most frequently reported by the participants in the groups. Two GAO analysts independently classified each comment using qualitative analysis software. The findings from these discussion groups may not be generalizable to all state regulators. We provide a narrative description of the results of our discussions with the state regulators in appendix III and a technical description of the content analysis we conducted in appendix IV. Statistical analysis. We conducted a statistical analysis to illustrate whether predictive modeling could be used to identify water systems with a higher likelihood of a reported violation of the LCR. To conduct our analysis, we used the same data from EPA's SDWIS/Fed for systems listed as active as of December 31, 2016, as mentioned above. We selected two states—Ohio and Texas—because EPA's file reviews indicated that there were not significant discrepancies during the scope and time period of our analysis, which focused on 2013 to 2016, in the LCR data reported by these states to SDWIS/Fed. We reviewed EPA's 2016 file reviews of data the states provide to SDWIS/Fed, interviewed EPA and state officials for Texas, and received written responses to our questions from Ohio. For each of the sampled systems, EPA reviewed state records to determine whether the state was correctly identifying violations and reporting those violations to SDWIS/Fed. Although, unlike EPA's previous reviews, EPA's 2016 file reviews are not based on generalizable samples, they were conducted for a broad range of drinking water systems in each of the states. Based on the results of EPA's reviews, we determined that these two states had sufficiently reliable data for our purposes of illustrating a statistical approach. 
The results of our analysis for these two states are not generalizable to other states. Our analysis included three steps. We first conducted a bivariate analysis to determine whether the following four factors correspond to violations of the LCR for 2013 to 2014: (1) size of the population served; (2) water source (groundwater or surface water); (3) ownership (public or private); and (4) whether the system is a community water system or non-transient, non-community water system. We also included the factor of whether sample results exceeded the lead action level. We then developed a series of multivariate logistic regression models. Specifically, multivariate logistic regression modeling is a statistical method for analyzing the potential influence of each individual factor on the likelihood of a binary outcome (e.g., a violation) while simultaneously accounting for the potential influence of the other factors. We selected this type of model because it could account for the factors simultaneously. Lastly, to test whether our models could be used to identify systems with a higher likelihood of a future violation, we compared the values generated by our models to actual violations reported in the SDWIS/Fed data in 2015 to 2016. We provide a technical description of the statistical analysis we conducted, including determinations about the reliability of the data and the limitations of the analysis, in appendix V. Literature review. We reviewed studies concerning detection of lead in drinking water and violation of drinking water regulations. These studies were identified through searches by GAO research librarians for peer-reviewed materials in such databases as ProQuest, Scopus, Academic One-File, and Web of Science. Librarians conducted searches using such terms and phrases as lead and copper, water supply, drinking water, lead exposure, and lead poisoning, alone and in combination with one another. 
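The bivariate first step described above can be illustrated as a simple cross-tabulation of reported-violation rates for systems with and without each factor, before any multivariate modeling. The records and effect sizes below are invented for the sketch and do not reflect the actual Ohio and Texas data.

```python
import random

random.seed(7)

# Synthetic illustration only: each record flags a factor and whether the
# system had a reported LCR violation in the baseline period.
systems = []
for _ in range(1000):
    small = random.random() < 0.8        # serves a small population
    groundwater = random.random() < 0.7  # groundwater vs. surface water
    exceeded = random.random() < 0.15    # prior 90th percentile result > 15 ppb
    # Assumed relationship for the sketch: small size and a prior
    # exceedance both raise the chance of a reported violation.
    p = 0.05 + 0.10 * small + 0.15 * exceeded
    systems.append({"small": small, "groundwater": groundwater,
                    "exceeded": exceeded, "violation": random.random() < p})

def rate(recs):
    return sum(r["violation"] for r in recs) / len(recs)

rates = {}
for factor in ("small", "groundwater", "exceeded"):
    with_f = [r for r in systems if r[factor]]
    without = [r for r in systems if not r[factor]]
    rates[factor] = (rate(with_f), rate(without))
    print(f"{factor:12s} with: {rates[factor][0]:.2f}  without: {rates[factor][1]:.2f}")
```

A gap in these simple rates suggests the factor is worth including in the multivariate model, which then accounts for all factors simultaneously rather than one at a time.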
We also identified and reviewed relevant publications by trade groups, think tanks, and other nongovernmental organizations. We narrowed a preliminary selection of results by reviewing abstracts and introductions, where applicable. Based on that preliminary review, we determined that 31 sources fit within the scope of our engagement objectives. We then reviewed the data and key findings of each of these 31 sources to formulate and refine some hypotheses concerning violations of the LCR and detection of lead in drinking water. The hypotheses were reviewed by a GAO technical expert to ensure that they were sufficiently supported by the cited corresponding research. Environmental Protection Agency (EPA) data from July 1, 2011, to December 31, 2016, can provide information on compliance by water systems and enforcement by states and EPA regarding the Lead and Copper Rule (LCR). This appendix provides additional information from our analysis of what the EPA data in the agency's Safe Drinking Water Information System (SDWIS/Fed) show about compliance with and enforcement of the LCR. The LCR requires water systems to monitor drinking water at customer taps, and if lead levels are elevated, take additional actions to control corrosion, inform the public, and in some circumstances replace lead service lines under the systems' control. States generally have primary responsibility for monitoring and enforcement of Safe Drinking Water Act requirements, including the LCR. In this appendix, we provide additional results of our analysis of the LCR data for (1) sample results, (2) violations, and (3) enforcement. We reported in January 2006 that the LCR data, and in June 2011 that the data in SDWIS/Fed generally, were not accurate or complete. According to EPA, some of the violations data are underreported. In addition, a 2017 EPA Office of Inspector General report indicated that sample data, specifically, are potentially underreported. 
In addition, some state regulators we interviewed in 2016 told us that homeowners and water systems may take LCR samples improperly, as we discuss in this report. See also appendix III for these state regulators' views on water systems' challenges with implementing the sample requirements under the LCR. We present the data that were available in the SDWIS/Fed database at the time of our review.

The LCR requires that all water systems periodically obtain tap water samples and use the results, measured against an action level of 15 parts per billion (ppb), to determine if corrosion control treatments are working properly. EPA requires states to report (1) sample results for any water system whose 90th percentile sample results exceed the federal action level of 15 ppb and (2) sample results for large and medium water systems even if the sample results do not exceed the lead action level. From January 1, 2014, to December 31, 2016, approximately 1,430 water systems reported sample results over the lead action level (see table 4), the majority of which were small water systems. EPA officials told us that they analyze these sample data over a 3-year period rather than yearly to ensure that the majority of water systems will have submitted 90th percentile sample results. The available EPA data show that almost all of the water systems (1,364, or 95 percent) reporting sample results that exceeded the lead action level from 2014 to 2016 were small and, together, served a population of about 505,000. In contrast, the remaining 66 large and medium water systems (5 percent) reporting sample results that exceeded the lead action level from 2014 to 2016, together, served a population of 2.7 million. In addition, as shown in table 5, states within EPA's regions 1 and 3 had the highest number of water systems that reported sample results exceeding the lead action level.
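The 90th percentile comparison that underlies these figures can be illustrated with a short sketch. The sample values below are hypothetical, and the index convention (the sample at position 0.9 times the sample count, after sorting) is a simplification for illustration; the LCR specifies the exact computation, including special handling for very small sample counts.

```python
import math

ACTION_LEVEL_PPB = 15.0

def ninetieth_percentile(results_ppb):
    """Simplified 90th percentile: the value at rank ceil(0.9 * n)
    in the sorted results (illustration only)."""
    ordered = sorted(results_ppb)
    idx = math.ceil(0.9 * len(ordered)) - 1  # 1-based rank -> 0-based index
    return ordered[idx]

# Hypothetical lead results (ppb) from 10 customer taps.
samples = [2.0, 3.5, 1.0, 18.0, 4.2, 0.5, 9.9, 2.8, 6.1, 22.0]
p90 = ninetieth_percentile(samples)
print(p90, p90 > ACTION_LEVEL_PPB)  # 18.0 True -> action level exceeded
```

Note that a result above 15 ppb here is an action level exceedance, which triggers further requirements, not a violation in itself.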
EPA headquarters officials we interviewed provided possible explanations for why 90th percentile sample results would be higher in these states. They said that there are more lead service lines in the northeastern states, such as those within regions 1 and 3. See table 5. Table 6 provides information on the EPA data available for those water systems that had results under the federal action level in 2016. As previously mentioned, states are to report all 90th percentile sample results for large and medium water systems to EPA, both those that exceed and those that fall below the federal action level.

Based on our analysis of the available EPA data, 6,567 water systems (or 10 percent of all water systems) had at least one open violation as of December 2016. Violations are considered open when the state has not determined that a water system is in compliance with the specific requirement for which it received the violation. Table 7 provides an overview of the number of water systems (including schools and day care centers) with violations of the LCR, by size. Table 8 provides a summary of the available EPA data on the violations of the LCR among schools and day care centers.

The available EPA data show that from July 1, 2011, to December 31, 2016, 99 percent of the 589,827 enforcement actions and outcomes were taken by states, as would be expected given that states generally have primary responsibility for enforcement of the LCR. Enforcement actions in SDWIS/Fed include actions taken and what we considered in our analysis as outcomes, such as the receipt of information or a water system having achieved compliance.
The enforcement codes in SDWIS/Fed that we defined as outcomes were: federal civil case concluded, federal bilateral compliance agreement signed, federal public notification received, federal no longer subject to rule, federal compliance achieved, federal variance/exemption issued, state civil case concluded, state bilateral compliance agreement signed, state public notification received, state no longer subject to rule, state compliance achieved, and state variance/exemption issued. Collectively, outcomes represented 43 percent (256,107) of the enforcement data in the database. Table 9 provides the number of enforcement actions and outcomes reported from July 1, 2011, to December 31, 2016, at the federal and state levels. The data show that states in regions 6 and 4 had the highest numbers of enforcement actions and outcomes. Table 10 shows the five most frequently reported enforcement actions taken by states for LCR violations as they were reported in SDWIS/Fed as of December 31, 2016. In a 2009 document outlining its enforcement policy, EPA stated that the policy would focus on “return to compliance.” According to this document, “return to compliance” is intended to show the effectiveness of the agency’s protection of public health. The available EPA data show that from July 1, 2011, to December 31, 2016, 10,702 water systems had at least one violation of some type and were returned to compliance (see table 11). As table 11 illustrates, small systems were most frequently designated as returned to compliance for monitoring and reporting violations. State drinking water regulators who participated in discussion groups we conducted identified 29 factors that may contribute to noncompliance with the Lead and Copper Rule (LCR). Of these factors, the state regulators most frequently mentioned size, technical capacity, and sample collection, among other factors. 
State regulators mentioned other factors less frequently, including requirements to comply with multiple drinking water regulations and the number of water samples required to be collected by the LCR as also contributing to noncompliance. Regulators in 12 states identified factors that they thought specifically helped water systems comply with the LCR. We obtained this information from drinking water state regulators representing 41 states and 1 territory through eight discussion groups held in September and October 2016. The purpose of the discussion groups was to develop an understanding of the factors that may influence noncompliance with the LCR. We analyzed the transcripts of those discussion groups using a content analysis software package. For a detailed description of the methodology we used to conduct these groups and analyze the content of these discussions, see appendix I and appendix IV. State regulators who participated in our discussion groups identified 29 factors that may contribute to water systems’ noncompliance with the LCR. The LCR requires water systems to identify locations where lead may be present and periodically obtain tap water samples from those locations (of which single-family homes are the highest priority). When a water system’s 90th percentile sample result for lead exceeds 15 parts per billion, the system has exceeded the federal action level (also known as an action level exceedance). Sample results that exceed the lead action level do not by themselves constitute violations of the LCR. Under the LCR, an action level exceedance requires the water system and state to take additional steps. 
Those additional steps require that small and medium water systems install or modify corrosion control treatment and that water systems of all sizes provide information (known as public education) about the harmful effects of lead to consumers and vulnerable populations (such as schools, if the water system serves a school, and public health departments). Water systems are also required to test and, if necessary, treat the source water. If, after installing corrosion control and treating source water, a system continues to have 90th percentile sample results that exceed the lead action level, the LCR requires the water system to begin replacing its lead service lines, if they exist. As part of our analysis, we grouped the 29 factors into seven broad groups: (1) water system characteristics, (2) water system operations, (3) characteristics of water, (4) sample procedures required to comply with the LCR, (5) actions that states take to ensure water systems comply with the LCR, (6) actions that the Environmental Protection Agency (EPA) can take to assist with compliance, and (7) features of the LCR regulation. Figure 4 provides these seven groups and the factors that fell into each one. State regulators we interviewed most frequently identified 10 factors that may contribute to noncompliance with the LCR. Among those were size, technical capacity of operators, and the collection of drinking water samples. The 10 factors fell into the following broad groups: (1) water system characteristics, (2) water system operations, (3) characteristics of water, and (4) sample procedures. To identify the factors most frequently identified by the state regulators as contributing to noncompliance, we focused on those factors that were mentioned by regulators in at least 13 of the 41 states participating in the discussion groups (30 percent). Table 12 provides a description of each factor, the definition we used for our analysis, and the number of states in which officials mentioned the factor.
State regulators who participated in the discussion groups explained how each of these factors may contribute to noncompliance. In most instances, regulators also described what they observed as relationships between factors and how, together, multiple factors could contribute to noncompliance. Size. Regulators in 37 states said that the size of the population served by water systems may influence noncompliance with the LCR. Regulators in 28 of the 37 states said that small systems (serving populations of 3,300 and fewer) are more likely to have drinking water sample results that exceed the federal action level, to be in noncompliance, or face challenges that may lead to noncompliance. Most of these regulators mentioned the size of a system and the technical, managerial, or financial capacity of the system as factors that, together, may influence noncompliance. For example, regulators in 10 states said that small systems are more likely to receive a violation because they are generally less likely to have operators with the knowledge to properly collect samples (sample collection) or manage corrosion control treatment (technical capacity) or have the financial resources to pay for corrosion control treatment or to hire professional help to do so (financial capacity). A regulator from 1 state provided an example of a small water system in noncompliance because it has a part-time operator with little training on the rule and with other professional responsibilities, such as snow removal and animal control, which prevent this operator from providing drinking water test results to homeowners whose water was tested within the required timeframe. Technical capacity. Regulators in 33 states said that the technical capacity of water systems may influence noncompliance with the LCR. 
Regulators in 18 of the 33 states said that water systems that do not have personnel with the knowledge to adequately operate a system or to understand the LCR are less likely to have the skill set to interpret and implement the LCR appropriately. Sample collection. Regulators in 28 states said that if systems fail to collect drinking water samples, improperly collect samples, or have other problems with collecting samples, they may be out of compliance with the LCR. For example, regulators in 9 of these 28 states said that some water systems struggle to find enough homeowners willing to collect water samples for testing, and regulators in 3 states said that this may cause the systems to collect samples from taps that are not used for drinking water, contrary to the LCR. In addition, regulators in 14 states said that even when systems are able to find homeowners willing to collect drinking water samples, the homeowners themselves may collect the samples improperly. A regulator from 1 state provided an example of a homeowner who was out of town for the weekend and upon return collected a water sample from tap water that had sat stagnant for 4 days, which is problematic because the sample taken should be representative of everyday use. Financial capacity. Regulators in 28 states said that water systems that do not have sufficient financial resources will experience challenges complying with the LCR, including paying for chemicals and professional help needed to install corrosion control treatment. For example, a regulator in 1 state said that a system without adequate financial resources may not be able to pay for the required, and often costly, corrosion control study. Presence of lead. Regulators in 23 states said that the presence of lead in the pipes may influence noncompliance with the LCR.
Regulators in 9 of the 23 states said that the presence of lead in pipes increases the likelihood that drinking water samples will exceed the federal action level and require a system to perform additional actions. Regulators in 11 of the 23 states specifically said that a water system with old infrastructure is more likely to have lead service lines. Regulators in 9 states identified the presence of lead service lines and managerial capacity as factors that may work together. These regulators said that water systems that maintain good records of the materials in their distribution systems know about the presence of lead service lines and may be better able to collect drinking water samples from the appropriate locations. Managerial capacity. Regulators in 24 states said that the managerial capacity of water systems may influence noncompliance with the LCR. Regulators in 16 of the 24 states said that water systems that do not have effective management structures and practices will have problems keeping up with the rule requirements and deadlines. Regulators in 6 states explained that proper data and records management help systems comply with the LCR. Water chemistry. Regulators in 23 states said that the chemistry of the water may influence noncompliance with the LCR. Regulators in 20 of the 23 states said that having corrosive water increases the likelihood of samples that exceed the action level—which is not a violation—and will require a system to perform additional actions. For example, if these systems do not install corrosion control or manage it properly—for example, because the operator does not understand water chemistry—the system will get a violation, according to regulators in 6 states. Corrosion control. Regulators in 16 states said that corrosion control may influence noncompliance with the LCR.
Specifically, regulators in 15 of the 16 states said that water systems that have installed corrosion control treatment are more likely to be in compliance with the LCR because corrosion control is the primary method used to prevent lead from entering drinking water. A regulator from 1 state said that despite the corrosive water that exists in that state, water systems are not getting samples that exceed the action level and are staying in compliance because they have corrosion control installed. Regulators also discussed how the size of the system and corrosion control, together, can influence compliance. Regulators in 5 states said that large water systems are generally in compliance with the LCR because the rule requires them to install corrosion control treatment. Type. Regulators in 18 states said that the type of water system may influence noncompliance with the LCR. Regulators in 14 of the 18 states said that schools and day care facilities with their own water supplies experience challenges in complying with the LCR, and regulators in 6 states explained that this is because their primary mission is not water delivery and management. A regulator in 1 state said that a school submitted improper samples because school officials collected them after the summer break, during which the faucets had not been used for 6 weeks, so the samples were not representative of normal drinking water use. Source. Regulators in 14 states said that the source of drinking water may influence noncompliance and offered a range of opinions as to how corrosive or non-corrosive groundwater may influence actions. State regulators frequently discussed source water, water chemistry, and corrosion control as factors that presented themselves together.
State regulators in 4 states said that systems using groundwater can more easily comply with the LCR because groundwater is non-corrosive compared to surface water. In contrast, regulators in 4 states said that the groundwater in other parts of the country is more corrosive. However, regulators in 5 states said that systems with corrosive water sources are still able to comply when they properly install and manage corrosion control treatment. State regulators who participated in our discussion groups identified additional factors that may contribute to water systems' noncompliance with the LCR, though less frequently. These regulators cited factors such as compliance with multiple drinking water rules, the number of samples that systems are required to collect under the LCR, and the complexity of the LCR. Table 13 describes each factor less frequently mentioned as contributing to noncompliance, the definition of the factor, and the number of states in which officials mentioned the factor. Some of these factors are more specific attributes that may affect some of the 10 factors that were most frequently identified by state regulators. For example, regulators told us about several aspects of water (age, stability, and flow) that may affect water chemistry. The regulators participating in our discussion groups provided examples of these less frequently mentioned factors, below: Simultaneous compliance. Regulators in nine states said that having to comply simultaneously with multiple drinking water regulations can lead to noncompliance with the LCR, and regulators in three of the nine states explained that this is because changes to water treatment to address one problem can create additional problems.
For example, regulators in five states said that systems that have to comply with a rule aimed at reducing exposure to disinfection byproducts in drinking water may need to reduce the pH level of their water, and this, in turn, may affect the effectiveness of their corrosion control treatment. Much like other factors, this causes sample results to exceed the federal action level, which subjects the water system to additional rule requirements. Lead and Copper Rule. Regulators in seven states said that aspects of the LCR itself may influence noncompliance. Regulators in two of the seven states said that the LCR does not require states to routinely approve material surveys or require systems to update these surveys periodically, which can prevent water systems from knowing if they are collecting samples from high-risk sites. In addition, regulators in four states said that the LCR allows too much time for systems to complete requirements, such as the installation of corrosion control treatment and the issuance of public education notices to consumers. For example, a regulator in one state described a system that started the process of installing corrosion control. However, the system stopped the treatment installation because, as allowed by the LCR, the system sampled the water again and did not exceed the action level. Regulators in three of the states also said that the LCR does not allow state regulators to invalidate samples that they know were taken using poor practices at the sample site. EPA guidance. Regulators in seven states said that EPA's guidance may contribute to noncompliance, and according to regulators in four states, this is because the guidance may not be clear, which may cause states and water systems to incorrectly implement the LCR.
For example, regulators in three states said that EPA guidance on sample procedures and public education was confusing for states and water systems because it is not clear about the timeframes that systems should adhere to when repeating the collection of water samples or providing public education to ensure that they conduct these actions properly and in accordance with the LCR. Regulators also identified several additional factors that could lead to noncompliance or to 90th percentile sample results over the action level and thus additional requirements for water systems to implement, which could increase the chances of a violation. For example, regulators mentioned that the ownership of a water system could be a factor and provided the example of privately owned, small water systems with less knowledgeable or available operators. Regulators also said that the age of the water can interfere with corrosion control and that some systems buying treated water are not doing any treatment themselves. Finally, regulators in two states said that water systems that are geographically isolated may not be able to access alternative water sources if their existing source water is corrosive or to attract operators with the skills to implement the LCR. Regulators in 12 states specifically identified factors that they thought helped water systems comply with the LCR (see table 14). Regulators in 7 states said that assistance from the states helps water systems comply with the LCR by providing systems with information about the requirements of the rule, training, or technical assistance, including through state rural water associations. Regulators in 4 different states said that the engagement of the water system with the state regulatory office—for example, through training—places the system in a better position to implement the LCR because it gains an understanding of the requirements.
Further, regulators in 1 state said that support from state and local decision makers provides water system managers with the tools they need to implement the rule appropriately. We conducted eight discussion groups with drinking water regulators representing 41 states and 1 territory to develop an understanding of the potential factors that may influence noncompliance with the Lead and Copper Rule (LCR). These were hour-long discussions conducted over the telephone. We held these discussion groups in September and October 2016. From two to eight states participated in each discussion group and each state had a primary designated spokesperson. For more information about our overall methodology, see appendix I. In each discussion group, the moderator asked two questions. First, the moderator asked participants to list one or two factors that most influence a water system’s ability to comply with the LCR. Each state provided a list of factors. After all of the states responded, the moderator noted the factors provided by the group participants and asked for consensus on the list of factors reported. Second, the moderator asked participants to elaborate on how the factors reported could influence compliance. When necessary, the moderator asked probing questions to further clarify participants’ comments. Two or three analysts transcribed each session and combined and reconciled notes to develop transcripts for each of the discussion groups. Using the factors that participants mentioned in each discussion group, we compiled an initial aggregate list of factors. We reviewed the initial list to determine if certain factors were closely related and could be combined. To check for completeness, we reviewed the transcripts and noted factors that participants repeatedly mentioned throughout the discussion groups but that were missing from the current list, and we added them to the list. 
This allowed us to delete some factors and incorporate them into other factors under which we determined they could reasonably fit. For example, we determined that "water chemistry" and "water corrosivity" were too closely related to be separate factors, so we combined them. Our goal was to develop a list of complete, distinct, and mutually exclusive factors based on the information that participants shared in the discussion groups. We took additional steps to ensure that we identified significant factors by conducting a word frequency count in all of the discussion group transcripts using a content analysis software package. We grouped similar words (for example, "system," "systems," and "systems'") so that they were counted together. We determined that the top 11 words identified by the frequency count—which were mentioned 100 times or more—represented factors that we already had in our list. We also determined that the top 50 words identified by the frequency count—which were mentioned 32 times or more—represented factors that were already on our list. To have a clear and consistent understanding of each factor for classification purposes, we defined each factor using information from the LCR, other federal regulations, Environmental Protection Agency guidance to states and water systems, and published GAO reports. Using the factors and their definitions, we developed a guide to use in the classification process. To identify broad themes when classifying comments in the transcripts, we developed groups under which the factors could reasonably fit. We took steps to make every group distinct and mutually exclusive and to ensure that every factor fell into its associated group. For example, we determined that the factors "system size," "system type," and "financial capacity of system" could naturally be grouped under "water system characteristics." Over the course of several meetings, four analysts reviewed and finalized the factors and their associated groups.
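A word frequency count with grouping of inflected forms, as described above, can be sketched as follows. The transcript text and the crude plural-stripping rule are illustrative only; the actual analysis used a content analysis software package.

```python
import re
from collections import Counter

def normalize(word):
    # Crude grouping of inflected forms ("systems" -> "system"),
    # for illustration only.
    return word[:-1] if word.endswith("s") and len(word) > 3 else word

# Hypothetical transcript excerpt.
transcript = ("Small systems struggle with sampling. A small system "
              "operator may lack training, and systems without "
              "corrosion control see more exceedances.")
words = re.findall(r"[a-z]+", transcript.lower())
counts = Counter(normalize(w) for w in words)
print(counts.most_common(2))  # [('system', 3), ('small', 2)]
```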
We agreed on a final list of 29 factors, which included issues like "system size," "corrosion control," and "water source." The 29 factors were placed under seven groups, including "water system characteristics," "water system operations," and "characteristics of water." For a detailed discussion of these factors, see appendix III. To analyze the content of the discussion groups, two analysts, using a content analysis software package, independently classified each comment in the transcripts into the factors we defined, and one analyst analyzed the classifications to identify the factors that were most frequently reported. During the classification process, the analysts classified each participant's individual statements in the transcripts separately. For the purposes of analysis, we defined an individual comment to be a statement made by a single individual. Across the eight discussion groups, there were a total of 225 such comments. Some comments were brief and covered a single issue, while others were extensive and covered multiple issues. The analysts applied multiple classifications if statements covered a range of factors. For example, a statement made by a specific drinking water regulator could have been classified as relating to both the source of the water and the size of the system. The analysts coded only statements made in response to the moderators' questions; they did not code statements that did not discuss factors or directly answer those questions. After independently coding the transcripts, we used software to run an intercoder reliability report. The two analysts met on three occasions to compare and discuss the coding results. In instances where the analysts applied different codes to the same statement, they discussed their reasoning and reached agreement on which codes were the most appropriate. Each analyst then updated the database to reflect the agreements reached.
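Intercoder agreement of the kind checked here is commonly summarized with a chance-corrected statistic such as Cohen's kappa. The sketch below uses hypothetical codes; the actual analysis relied on the software package's built-in intercoder reliability report.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Agreement between two coders, corrected for the agreement
    expected by chance given each coder's label frequencies."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    labels = set(codes_a) | set(codes_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical factor codes two analysts assigned to 8 comments.
analyst1 = ["size", "size", "sampling", "finance", "size", "sampling", "finance", "size"]
analyst2 = ["size", "size", "sampling", "size", "size", "sampling", "finance", "size"]
print(round(cohens_kappa(analyst1, analyst2), 2))  # 0.79
```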
We also used software to identify the factors most frequently reported by the participants. We determined these by identifying the number of states that reported each factor (regardless of how many times a factor was mentioned) because this approach presented the number of states that agreed on the validity of each factor as contributing to noncompliance. To do this, we used software to cross-tabulate the factors that were classified with the states that participated in the discussion groups. During the classification process, we classified each participant's statements by (1) the state the participant represented and (2) the factors that the statement covered. For example, an individual statement could have been classified as "Texas" and "water source." Thus, for each factor, the cross tabulation showed which states made statements that were classified into that factor. We also ran a cross tabulation of the seven broad groups (under which the factors were grouped) and the states that reported each group. We identified the factors that were most frequently reported by focusing on those that were reported by at least 30 percent of the states.

To identify any factors that may contribute to noncompliance with the Lead and Copper Rule (LCR), we conducted discussion groups with a nonprobability sample of state drinking water regulators representing 41 states and 1 territory. We conducted a literature review of 31 academic studies about the detection of lead in drinking water and violations of drinking water regulations to corroborate our findings from the discussion groups. Our discussion groups with state regulators and review of academic studies suggested that certain factors could indicate whether water systems are at a higher likelihood of violating the LCR.
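The factor-by-state cross tabulation described above, which counts the number of distinct states mentioning each factor rather than the total number of mentions, can be sketched in a few lines. The classifications shown are hypothetical.

```python
from collections import defaultdict

# Each coded statement carries the state the speaker represented and
# the factor the statement was classified into (hypothetical data).
classified = [
    ("Texas", "water source"), ("Texas", "system size"),
    ("Ohio", "system size"), ("Texas", "water source"),
    ("Maine", "system size"), ("Ohio", "technical capacity"),
]

states_per_factor = defaultdict(set)
for state, factor in classified:
    states_per_factor[factor].add(state)

# Number of states per factor, regardless of how often it was mentioned.
for factor, states in sorted(states_per_factor.items(),
                             key=lambda kv: -len(kv[1])):
    print(factor, len(states))
```

Here "system size" is credited with three states even though "water source" was mentioned as often, because repeated mentions by one state count once.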
To determine whether data on these factors could be used to predict LCR violations, we developed a series of statistical models, specifically multivariate logistic regression models. To conduct our analysis, we used the available data from the Environmental Protection Agency's (EPA) Safe Drinking Water Information System (SDWIS/Fed) database for community water systems and non-transient, non-community water systems active as of December 31, 2016, in 2 states, Ohio and Texas. We selected these states because recent EPA file reviews did not find significant discrepancies in the LCR violations data reported to SDWIS/Fed. In both states, we found that water systems with some factors were significantly more likely to violate the LCR than systems without those factors. Furthermore, we found that our models, which were based on data for 2013 and 2014, could predict systems with a higher likelihood of a violation in 2015 and 2016 significantly better than chance. Our analysis is limited because it is based on 2 states and thus is not generalizable to other states. It is also based on a subset of the relevant factors that might predict LCR violations and therefore is illustrative of the potential for statistical models to predict violations rather than a definitive model of violations. In a review of previous GAO reports and peer-reviewed literature, we found that statistical models have been used to predict the risk of a violation for regulated entities. For example, in October 2016, we reported on the potential for statistical models to identify motor carriers that posed a high risk of a highway crash. In addition, several peer-reviewed studies have developed statistical models to predict the likelihood of drinking water systems violating Safe Drinking Water Act requirements. Based on this prior research, we considered predictive modeling as a potential approach to identify drinking water systems with a higher likelihood of violating the LCR.
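Testing whether model scores predict later violations "better than chance" can be framed as asking how often a system that went on to violate receives a higher score than one that did not; this is equivalent to the area under the ROC curve, where 0.5 means no better than chance. A minimal sketch with hypothetical scores (GAO's actual model outputs are not reproduced here):

```python
import itertools

def auc_from_scores(scores_pos, scores_neg):
    """Probability that a randomly chosen later violator outranks a
    randomly chosen non-violator (0.5 = no better than chance)."""
    wins = ties = 0
    for p, n in itertools.product(scores_pos, scores_neg):
        if p > n:
            wins += 1
        elif p == n:
            ties += 1
    total = len(scores_pos) * len(scores_neg)
    return (wins + 0.5 * ties) / total

# Hypothetical 2013-2014 model scores, split by whether the system
# actually violated in the 2015-2016 follow-up period.
violators = [0.62, 0.55, 0.71, 0.40]
non_violators = [0.30, 0.45, 0.25, 0.50, 0.35]
print(auc_from_scores(violators, non_violators))  # 0.9
```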
The specific steps we took to conduct this analysis are described below. To conduct our analysis, we used the available data on community water systems and non-transient non-community water systems active in SDWIS/Fed as of December 31, 2016. We analyzed data for drinking water systems that serve more than 25 people, which is EPA’s size threshold for a public drinking water system. EPA’s SDWIS/Fed database contains descriptive data on water systems (e.g., size, location, and water source), drinking water sample results, violations, and enforcement actions, as required by the LCR. Generally, states with primary enforcement responsibility initiate enforcement actions against water systems that do not comply with the LCR and other drinking water regulations. The LCR requires states to submit certain data to EPA’s SDWIS/Fed database on a quarterly basis. In our discussion groups with state regulators and review of academic studies and peer-reviewed literature, we identified 29 factors that may influence a drinking water system’s noncompliance with the LCR (see apps. I and III). We examined the SDWIS/Fed database to identify data elements that might represent these factors. Of the factors that were consistent with findings reported in the literature we reviewed and reported by state regulators in discussion groups, we selected four that were available in SDWIS/Fed to conduct a statistical analysis: the population served by (or size of) the drinking water system, whether the drinking water system was publicly or privately owned, whether the drinking water system used groundwater or surface water as a source, and whether the drinking water system was classified as a community water system or a non-transient non-community water system. In addition, EPA’s current approach for targeting oversight of the LCR is to identify water systems with sample results that exceed the lead action level. 
Therefore, we also included the factor of whether the system had sample results exceeding the lead action level. We took steps to assess the reliability, completeness, and accuracy of the LCR compliance data in SDWIS/Fed for the purpose of conducting this analysis. We determined that data in SDWIS/Fed were not sufficiently reliable to conduct a nationwide statistical model of LCR violations. We could not verify that the limitations in the completeness of the data identified in our June 2011 report had been sufficiently addressed nationwide. Thus, we could not be assured that the LCR violations data submitted to SDWIS/Fed were sufficiently complete, accurate, or comparable across the states. Instead, we used the data in SDWIS/Fed to conduct an illustrative analysis for two states, Ohio and Texas. We selected these states because EPA’s recent reviews of the completeness and accuracy of LCR data reported by these states did not find significant discrepancies in LCR violations data. We examined EPA’s reviews and either obtained written responses to questions or interviewed EPA and state officials to determine that these two states had sufficiently reliable data for this purpose. EPA’s reviews were not based on statistically representative samples of drinking water systems in these states. Therefore, we cannot conclude definitively that the agency has addressed problems with the completeness and accuracy of violations data. However, we found that these states had sufficiently reliable data for the purpose of testing the feasibility of statistical modeling to predict drinking water systems with a higher likelihood of violating the LCR. Before developing the logistic regression model, we analyzed whether each water system in the two states violated the LCR from January 1, 2013, to December 31, 2014, with respect to each of the four selected factors. We conducted this analysis with cross tabulations and graphical analysis. 
In cross tabulations, each of the factors we examined was significantly associated with LCR violations, although the nature of these relationships varied between the states. In general, in both states, privately owned systems were more likely to violate the LCR than publicly owned systems; community water systems were less likely to violate the LCR than non-transient non-community water systems; and systems that had sample results exceeding the lead action level were more likely to violate the LCR than those that had not. In Ohio, water systems that used groundwater were more likely to violate the LCR than surface water systems, whereas, in Texas, water systems that used purchased groundwater were less likely to violate the LCR. In graphical analysis, we found that the likelihood of a violation was related to the size of the population served by the water system. For example, in Ohio, grouped data plots displayed a negative, linear relationship between the likelihood of a violation and the number of people served by a system. In Texas, these plots displayed a negative, linear relationship for systems serving 3,300 people or fewer and a positive, linear relationship for systems serving larger populations. The threshold of 3,300 people is the threshold that the LCR uses to distinguish small systems. The results of these cross tabulations were illustrative of factors influencing violations, but they provided only a partial assessment of the relationship between LCR violations and the factors. This is because the cross tabulations compared LCR violations with each factor individually, without accounting for the influence of the other factors. For example, we found in our analysis that while systems that had exceeded the lead action level were more likely to have a violation than systems that had not exceeded this level, such systems are also more likely to serve smaller populations. 
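The bivariate comparison described above can be sketched with a simple cross tabulation of one factor against violations. The counts below are invented for illustration only; they are not the report's actual Ohio or Texas data.

```python
# Cross tabulation of LCR violations by one factor (ownership type).
# All counts are synthetic, for illustration only.
systems = [
    # (privately_owned, violated_lcr)
    (True, True), (True, False), (True, True), (True, False),
    (False, False), (False, False), (False, True), (False, False),
]

def violation_rate(rows, privately_owned):
    """Share of systems with the given ownership type that violated the LCR."""
    outcomes = [violated for owned, violated in rows if owned == privately_owned]
    return sum(outcomes) / len(outcomes)

private_rate = violation_rate(systems, True)   # 2 of 4 systems -> 0.5
public_rate = violation_rate(systems, False)   # 1 of 4 systems -> 0.25
```

As the report notes, such a comparison examines one factor at a time and cannot separate the influence of correlated factors, which is why the analysis moves on to logistic regression.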
Because these factors are related, bivariate cross tabulations cannot distinguish between their respective influences on the likelihood that a system violated the LCR. To account for multiple factors simultaneously, we developed logistic regression models. We developed a series of logistic regression models for each state to determine whether factors collectively could identify the likelihood that a water system would violate the LCR. A logistic regression model is an equation, which is developed through statistical procedures, that estimates the individual association of each factor with the likelihood of a violation, while simultaneously accounting for the association between each of the other factors and the likelihood of a violation. It provides a basis for combining multiple variables to predict outcomes and is more inclusive than the bivariate analysis described in the previous section. Each state’s models included the four selected factors, among them whether the drinking water system was classified as a community water system or a non-transient non-community water system. We also included whether the system had a sample result exceeding the lead action level during the monitoring period from January 1, 2012, to December 31, 2014. We specified different logistic regression models for each state because of differences in the distributions of the data between the states. For example, in Ohio, nearly all non-transient non-community water systems used groundwater as their primary source water, which made it difficult to disentangle the unique effects of community water systems from those of groundwater systems. Therefore, we collapsed non-community water systems using groundwater and non-community water systems not using groundwater into a single group for analysis. In Texas, the relationship between the likelihood of a violation and size of the population served shifted as the number of people served by a system reached 3,300. 
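The kind of multi-factor model described above can be sketched with a minimal logistic regression fitted by gradient descent. The data and the assumed effects (exceedances raise violation risk; larger systems lower it, loosely mirroring the Ohio pattern) are synthetic, for illustration only; a real analysis would use SDWIS/Fed data and standard statistical software.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Return weights [intercept, w1, ..., wk] minimizing logistic loss."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            err = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))) - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(y) for wj, g in zip(w, grad)]
    return w

# Synthetic systems: [exceeded_action_level, privately_owned, centered log size]
random.seed(0)
X, y = [], []
for _ in range(400):
    exceeded = float(random.random() < 0.3)
    private = float(random.random() < 0.5)
    size = random.uniform(1.5, 5.0) - 3.0  # centered log10 of population served
    p = sigmoid(-0.5 + 1.5 * exceeded - 0.6 * size)  # assumed true model
    X.append([exceeded, private, size])
    y.append(1.0 if random.random() < p else 0.0)

w = fit_logistic(X, y)
# w[1] (exceedance effect) comes out positive; w[3] (size effect) negative,
# each estimated while holding the other factors constant.
```

The point of the regression, as the report explains, is that each coefficient is estimated while simultaneously accounting for the other factors, which the bivariate cross tabulations cannot do.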
Therefore, we added a term to our logistic regression equation for Texas that allowed us to account for the difference in the relationship between size and the likelihood of a violation for systems below the 3,300 threshold and systems above that threshold. To test the adequacy of these models, we verified that our data contained a sufficient number of systems with each combination of characteristics, that each model adequately fit the data based on chi-squared goodness-of-fit tests, and that estimated effects were generally stable across multiple model specifications. We tested for nonlinear relationships between the likelihood of a violation and the size of the population served by a system, and we transformed the variable accordingly. We also tested for interaction effects between the categorical system characteristics and the size of the population served by the system. In each state, we tested several model specifications to identify the combinations and transformations of variables that best met these conditions. Among the models we tested, the best-fitting model for Ohio included explanatory variables for whether the system had a sample result exceeding the lead action level, whether the system was privately owned, the size of the population served by the system, whether the system was a non-community water system using groundwater, and whether the system was a non-community water system not using groundwater. This model also included an interaction term between community water systems using groundwater and the size of the population served by the system. The data for this model included 1,849 systems, of which 137 violated the LCR in the compliance periods that began in 2013 and 2014 and 1,712 of which did not. The model had an adequate fit to the data based on chi-squared and Hosmer-Lemeshow goodness-of-fit tests, and it had good accuracy in predicting LCR violations in 2013 or 2014 based on the area under the Receiver Operating Characteristic (ROC) curve. 
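The extra term described above is a linear spline with a knot at 3,300 people: one variable carries the system's size, and a second is zero below the knot, so the fitted slope can differ above and below the threshold. The coefficients below are invented for illustration; they are not the report's estimates.

```python
# Linear spline at the LCR's 3,300-person small-system threshold.
KNOT = 3300

def spline_terms(population):
    """Return (size, size_above_knot) for a linear spline with a knot at 3,300."""
    return population, max(0, population - KNOT)

def linear_predictor(population, b_size=-0.0002, b_spline=0.0003, intercept=-1.0):
    # Illustrative coefficients only: below the knot the slope is b_size;
    # above it the effective slope is b_size + b_spline.
    size, above = spline_terms(population)
    return intercept + b_size * size + b_spline * above

# Change in the linear predictor per additional 1,000 people served:
slope_below = linear_predictor(2000) - linear_predictor(1000)  # negative
slope_above = linear_predictor(6000) - linear_predictor(5000)  # positive
```

With these illustrative signs, the predictor falls with size up to 3,300 people and rises thereafter, matching the pattern the report describes for Texas.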
The best-fitting model for Texas included explanatory variables for whether the system had a sample result exceeding the lead action level; whether the system was privately owned; whether the system was a community water system; whether the system used groundwater as a source of water; the size of the population served by the system; and a linear spline term, which accounted for a different relationship between system size and the likelihood of a violation for systems serving more than 3,300 people. Data for this model included 5,395 systems, of which 2,321 violated the LCR in the compliance periods that began in 2013 and 2014 and 3,074 of which did not. The model had an adequate fit to the data based on the chi-squared and Hosmer-Lemeshow goodness-of-fit tests, and moderate accuracy in predicting violations in 2013 or 2014 based on the area under the ROC curve. In each model, we found that certain factors were consistently associated with violations. For example, in both states, water systems that had reported a previous sample result exceeding the lead action level were significantly more likely to violate the LCR. The size of the population served was a statistically significant predictor of a violation in both states but in different ways. In Ohio, water systems were less likely to violate the LCR as their size increased. In Texas, water systems were less likely to violate the LCR as the size of their population increased to 3,300 but were more likely to violate the LCR as the size of the population over 3,300 increased. These patterns persisted in our models even after accounting for whether the system was privately owned, whether the system was a community water system, and whether the system used groundwater as a source of water. 
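The accuracy measure cited above, the area under the ROC curve, equals the probability that a randomly chosen violating system receives a higher predicted score than a randomly chosen non-violating one. A minimal sketch of that computation, with invented scores and labels:

```python
# Area under the ROC curve via the rank (Mann-Whitney) formulation.
def roc_auc(scores, labels):
    """AUC = P(score of a random positive > score of a random negative)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(pos) * len(neg))

# Synthetic predicted violation likelihoods and observed outcomes.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   1,   0,   0]
auc = roc_auc(scores, labels)  # 13 of 16 positive-negative pairs ranked correctly
```

An AUC of 0.5 corresponds to chance, and 1.0 to perfect ranking, which is the scale behind the report's "good" and "moderate" accuracy characterizations.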
These three other factors were also associated with the likelihood of a violation in some of our models, but the direction, magnitude, and significance of these associations were not consistent. This could be the result of strong associations among the factors, which would make it difficult for the models to precisely estimate their association with violations. Because of this imprecision, we do not report the associations between these three factors and the likelihood of a violation. Since the purpose of these models was to identify drinking water systems with a higher likelihood of a violation, rather than to estimate the influence of specific factors on the likelihood of a violation, we focus on the predictive accuracy of these models as described in the next section. To test whether our models could be used to predict water systems with a higher likelihood of a future violation, we compared the predicted violation results from our models to actual violations that were reported in SDWIS/Fed for 2015 and 2016. Our models, which were based on data from 2013 and 2014, predicted subsequent violations in 2015 and 2016 significantly better than chance. Systems with higher average predicted probabilities of violations had higher observed rates of violations in the subsequent years than systems with lower predicted probabilities, and this difference was statistically significant. To make this determination, we took three steps. First, we used our models to estimate the likelihood that each system violated the LCR in 2013 or 2014 based on the factors identified in the logistic regression models. Second, we divided the water systems into five equally sized groups, referred to as quintiles, based on their estimated likelihood of a violation. Third, we compared the percentage of systems that violated the LCR in 2015 or 2016 across each of these five groups. 
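The three steps above can be sketched as follows, with synthetic scores in place of model output and an assumed data-generating process in which subsequent violation risk tracks the predicted score:

```python
import random

# Step 1 (stand-in): each system gets a predicted violation likelihood, and
# its later outcome is simulated so that risk tracks the score. Synthetic
# data for illustration only.
random.seed(1)
systems = []
for _ in range(500):
    score = random.random()                    # predicted likelihood
    violated_later = random.random() < score   # assumed outcome process
    systems.append((score, violated_later))

# Step 2: rank systems by score and split into five equally sized quintiles.
systems.sort(key=lambda s: s[0])
quintiles = [systems[i * 100:(i + 1) * 100] for i in range(5)]

# Step 3: observed violation rate within each quintile, from lowest-scoring
# group (rates[0]) to highest-scoring group (rates[4]).
rates = [sum(v for _, v in q) / len(q) for q in quintiles]
```

If the model is predictive, the highest-scoring quintile should show a markedly higher observed violation rate than the lowest, which is the comparison reported in table 15.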
We found that systems in the highest likelihood group, based on our models of 2013 to 2014 data, had significantly higher violation rates in 2015 and 2016 as compared to systems in the lowest likelihood group. This result was true for each of the two states and for each of the models that we tested in those states. The tests of predictive accuracy in 2015 and 2016 for the best-fitting models in each state are shown in table 15. For example, in Ohio, 7.9 percent of systems in the fifth quintile (the group with the highest violation likelihood scores) violated the LCR in 2015 or 2016, as compared to 2.4 percent of those in the first quintile, the group with the lowest likelihood scores. Similarly, in Texas, 43.1 percent of systems with the highest likelihood scores violated the LCR in 2015 or 2016, as compared to 21.4 percent of those in the lowest likelihood group. Based on our illustrative analysis, we found that statistical models could be used to predict water systems with a higher likelihood of violating the LCR. However, our analysis was subject to certain limitations. First, our models used only data for factors available in SDWIS/Fed. They did not include other factors that might be important to predicting violations, such as the treatment technique used by a drinking water system; the presence of lead pipes in a community; or the technical, financial, and managerial capacity of a drinking water system. Second, our models were limited to the two states for which we could obtain reasonable assurances of data reliability, and therefore, the results are not generalizable to other states. While we found some commonalities in the factors that may contribute to violations between the states, we also found several differences between them, suggesting that specific factors may influence violations differently in different states. 
Finally, while we took several steps to confirm that the data for these states were sufficiently reliable for the purpose of developing illustrative regression models to predict violations, we cannot be confident that the inaccuracies and incompleteness that we and EPA identified in June 2011 have been addressed nationwide. Reliable and sufficient data for additional states would increase the external validity of future analysis. Additionally, data for additional explanatory variables mentioned in the literature and by state regulators (such as the presence of lead service lines and the technical, financial, and managerial capacity of a system) would allow for more fully specified models with the potential to increase their explanatory power. Taken together, reliable data for a broader sample and a fuller range of explanatory variables could potentially improve the usefulness of models predicting LCR violations.

The Environmental Protection Agency (EPA) provides information on its website for the public on lead hazards in drinking water. EPA’s website includes, among other documents, a February 2005 fact sheet for the public entitled Is There Lead in My Drinking Water? (see fig. 5).

In addition to the individual named above, Diane Raynes (Assistant Director); Jennifer Beddor; David Blanding, Jr.; Mark Braza; Richard P. Johnson; Tahra Nichols; Jerry Sandau; and Karen Villafana made key contributions to this report. In addition, Sarah Gilliland, Lindsay Juarez, Maureen Lackner, Dan Royer, and Kiki Theodoropoulos made important contributions to this report.

Water Infrastructure: Information on Selected Midsize and Large Cities with Declining Populations. GAO-16-785. Washington, D.C.: September 15, 2016. Water Infrastructure: EPA and USDA Are Helping Small Water Utilities with Asset Management; Opportunities Exist to Better Track Results. GAO-16-237. Washington, D.C.: January 27, 2016. 
Environmental Health: EPA Has Made Substantial Progress but Could Improve Processes for Considering Children’s Health. GAO-13-254. Washington, D.C.: August 12, 2013. Drinking Water: Unreliable State Data Limit EPA’s Ability to Target Enforcement Priorities and Communicate Water Systems’ Performance. GAO-11-381. Washington, D.C.: June 17, 2011. Environmental Health: High-level Strategy and Leadership Needed to Continue Progress toward Protecting Children from Environmental Threats. GAO-10-205. Washington, D.C.: January 28, 2010. Drinking Water: The District of Columbia and Communities Nationwide Face Serious Challenges in Their Efforts to Safeguard Water Supplies. GAO-08-687T. Washington, D.C.: April 15, 2008. Drinking Water: EPA Should Strengthen Ongoing Efforts to Ensure That Consumers Are Protected from Lead Contamination. GAO-06-148. Washington, D.C.: January 4, 2006. District of Columbia’s Drinking Water: Agencies Have Improved Coordination, but Key Challenges Remain in Protecting the Public from Elevated Lead Levels. GAO-05-344. Washington, D.C.: March 31, 2005. Drinking Water: Safeguarding the District of Columbia’s Supplies and Applying Lessons Learned to Other Systems. GAO-04-974T. Washington, D.C.: July 22, 2004.
Drinking water contaminated with lead in Flint, Michigan, renewed awareness of the danger lead poses to the nation's drinking water supply. Lead exposure through drinking water is caused primarily by the corrosion of plumbing materials, such as pipes, that carry water from a water system to pipes in homes. EPA set national standards to reduce lead in drinking water with the LCR, which applies to all water systems providing drinking water to most of the U.S. population, except places where people do not remain for long, such as campgrounds. States generally have primary responsibility for enforcing the LCR, and data help EPA monitor states' and systems' compliance with the LCR. GAO was asked to review the issue of elevated lead in drinking water. Among other objectives, this report examines (1) what available EPA data show about LCR compliance among water systems and (2) factors that may contribute to LCR noncompliance. GAO analyzed EPA data on violations and enforcement of the LCR from July 1, 2011, through December 31, 2016; interviewed EPA officials in headquarters and the 10 regional offices; conducted a statistical analysis of the likelihood of reported LCR violations; and held discussion groups with a nonprobability sample of regulators representing 41 states. Available Environmental Protection Agency (EPA) data, reported by states, show that of the approximately 68,000 drinking water systems subject to the Lead and Copper Rule (LCR), at least 10 percent had at least one open violation of the rule; however, these and other data are not complete. When the LCR was promulgated in 1991, all water systems were required to collect information about the infrastructure delivering water to customers, including lead pipes (see figure). However, because the LCR does not require states to submit information on known lead pipes to EPA, the agency does not have national-level information about lead infrastructure. 
After the events in Flint, Michigan, and other cities, EPA asked states to collect information on the locations of lead pipes, and all but nine, which had such difficulties as finding historical documentation, indicated a plan or intent to fulfill the request. According to EPA guidance, knowledge of lead pipes is needed for studies of corrosion control. GAO reported in March 2013 that with limited funding for federal programs, the need to target such funds efficiently increases. If EPA required states to report data on lead pipes, key decision makers would have information about the nation's lead infrastructure. Through discussion groups, state regulators identified 29 factors that may contribute to water systems' noncompliance with the LCR. In conducting a statistical analysis using EPA data on selected factors, such as the size of the population served and type of source water, GAO found that such factors were associated with a higher likelihood of water systems having reported violations of the LCR. EPA's current approach to oversight of the LCR targets water systems with sample results that exceed the lead action level. While this approach is reasonable because such water systems have a documented lead exposure risk, EPA officials in 3 of the 10 regional offices told GAO that it is not sustainable over time because of limited resources. Under federal standards for internal control, management should identify, analyze, and respond to risks related to achieving the defined objectives. By developing a statistical analysis that incorporates multiple factors to identify water systems that might pose a higher likelihood of having reported violations of the LCR to supplement its current approach, EPA could better target its oversight to such water systems. GAO is making three recommendations, including for EPA to require states to report data on lead pipes and develop a statistical analysis on the likelihood of LCR violations to supplement its current oversight. 
EPA agreed with GAO's recommendations.
Following problems with reconstruction efforts in Iraq in the fall of 2003, an internal State report concluded that the U.S. government had no standing civilian capacity to plan, implement, or manage stabilization and reconstruction operations; and the United States had relied on ad hoc processes for planning and executing these efforts. State recommended the establishment of a new office to provide a centralized, permanent structure for planning and coordinating the civilian response to stabilization and reconstruction operations. Accordingly, in August 2004, Secretary of State Powell announced the creation of S/CRS to coordinate U.S. efforts to prepare, plan, and resource responses to complex emergencies, failing and failed states, and post-conflict environments. Such efforts could involve establishing security, building basic public services, and supporting economic development. The Consolidated Appropriations Act of 2005 granted statutory authorization for S/CRS within the Office of the Secretary of State. In December 2005, President Bush issued NSPD-44 to improve the coordination, planning, and implementation of reconstruction and stabilization operations. NSPD-44 assigned the Secretary of State responsibility for planning and coordinating U.S. government stabilization and reconstruction operations in countries and regions at risk of, in, or in transition from conflict or civil strife. The Secretary, in turn, delegated implementation of the directive to the Coordinator for Reconstruction and Stabilization. NSPD-44 identifies the specific roles, responsibilities, and coordination requirements of U.S. government agencies that would likely participate in stabilization and reconstruction operations. It also requires State to lead the development of a civilian response capability, including the capacity to ensure that the United States can respond quickly and effectively to overseas crises. 
Finally, NSPD-44 established the NSC Policy Coordination Committee for Reconstruction and Stabilization Operations, which is co-chaired by the Coordinator for Reconstruction and Stabilization and NSC, and is comprised of representatives from other executive agencies. S/CRS has led an interagency effort to develop a framework for planning and coordinating U.S. stabilization and reconstruction operations. NSC has adopted two of three elements of the framework—the Interagency Management System and procedures for initiating its use. One element—a guide for planning stabilization and reconstruction operations—is still in progress. As of October 2007, the framework has not been fully applied to any operation. In addition, guidance on roles and responsibilities for State’s bureaus and offices is unclear and inconsistent, and the lack of an agreed-upon definition of a stabilization and reconstruction operation poses an obstacle to interagency collaboration. Moreover, some interagency partners have shown limited support for the framework and S/CRS. Some partners described the proposed interagency planning process as cumbersome and time consuming. S/CRS is taking steps to strengthen the framework’s effectiveness by addressing agencies’ concerns and providing training to interagency partners, but differences between the planning capacities and procedures of U.S. government civilian agencies and the military pose obstacles to effective coordination. S/CRS has led an NSC interagency group to create a framework for developing specific reconstruction and stabilization plans under NSPD-44. Sixteen U.S. agencies participated in NSC interagency working groups tasked with developing the framework, including DOD. The framework is intended to guide the development of U.S. planning for reconstruction and stabilization operations by facilitating coordination across federal agencies and aligning interagency efforts at the strategic, operational, and tactical levels. 
Key elements of the framework include the Interagency Management System (IMS) for managing high-priority and highly complex crises and operations, a guide for planning specific reconstruction and stabilization operations, and procedures for initiating government-wide planning, including the IMS and the planning guide. IMS, the first element of the framework, was created to manage high-priority and highly complex crises and operations. IMS is a system for guiding communication and coordination between Washington policymakers and Chiefs of Mission, and civilian and military planners. In March 2007, NSC approved IMS; NSC, with the Cabinet Secretaries and Deputy Secretaries, would determine whether IMS is required for a specific operation. If IMS is used, it would consist of three interagency groups: a Country Reconstruction and Stabilization Group (CRSG), an Integration Planning Cell (IPC), and an Advance Civilian Team (ACT) (see fig. 1). CRSG would be responsible for developing and integrating U.S. government policies, integrating civilian and military plans, and mobilizing civilian responses to stabilization and reconstruction operations. It would be comprised of the NSC policy coordination committee responsible for the country or region and would be supported by a secretariat comprised of staff from multiple agencies that develop the plans in conjunction with Chiefs of Mission and the U.S. military. CRSG also would mobilize resources, monitor and evaluate implementation, and coordinate with international partners. IPC would be responsible for integrating U.S. civilian agencies’ plans with military operations. IPC members would include civilian agency staff with country-specific, functional, or planning expertise. IPC would be located at the headquarters of the military combatant command responsible for planning military operations but would report to the CRSG rather than the combatant commander. 
IPC would not be formed when planning and implementing operations that do not require military actions. ACT would be deployed to the U.S. embassy, if one exists, to set up, coordinate, and conduct field operations and provide implementation planning and civilian-operations expertise to the Chief of Mission and military field commanders. ACT could be supported by Field Advance Civilian Teams (FACT) to help implement reconstruction and stabilization programs at the provincial or local levels. The second element of the framework, the planning guide, has not been approved by NSC because State is rewriting the guide to address interagency concerns. Although NSC is not required to approve the planning guide, S/CRS officials stated that NSC approval would strengthen the framework’s overall standing among interagency partners. Without NSC approval, the framework lacks the authority needed for interagency use. The planning guide divides planning for stabilization and reconstruction operations into three levels: policy formulation, strategy development, and implementation planning (see fig. 2). As currently envisioned, the guide states that goals and objectives at each level should be achievable; have well-defined measures for determining progress; and have goals, objectives, and planned activities that are clearly linked. At the first level of planning, policy formulation, Washington-based policymakers would articulate the overall goal or desired outcome the United States plans to achieve. At the second level, strategy development, the same Washington policymakers, in conjunction with the relevant Chiefs of Mission, would define the major objectives and essential tasks necessary to achieve the overarching policy goal, the resources necessary for completing each objective, and the implementing agency or bureau. 
At the third level, implementation planning, the agencies, bureaus, and overseas posts responsible for implementing the programs and tasks for achieving the objectives would develop work plans, resource requirements, and metrics for monitoring progress. The third element, which the NSC approved in March 2007, establishes procedures for using the framework when agencies are responding to an actual or imminent crisis or engaging in long-term scenario-based planning. Factors that may trigger a U.S. response to a crisis include the potential for significant military action in the near-term; actual or imminent state failure; events with significant potential to undermine regional stability and development progress, such as coups, economic collapse, or severe environmental damage; large-scale displacement of people; and impending or actual genocide, ethnic cleansing, or massive and grave human-rights violations. Planning for crisis responses may be initiated by the NSC (including the Cabinet Secretaries, Deputy Secretaries, or Policy Coordination Committees) or by a direct request from the Secretary of State or the Secretary of Defense. Long-term scenario planning may be conducted for crises that may emerge within 2 to 3 years. NSC, Chiefs of Mission, and Regional Assistant Secretaries of State may request the initiation of long-term scenario-based planning based on five criteria: (1) the potential impact on U.S. national security and foreign-policy objectives; (2) the regional impact or scale of humanitarian needs; (3) the potential for significant U.S. military involvement; (4) the probability of a crisis occurring, as indicated by U.S. government agencies, the United Nations, or other international organizations; and (5) the ability of the affected country or neighboring countries to respond to a crisis. As of October 2007, the framework has not been fully applied to any stabilization and reconstruction operation. 
S/CRS and interagency partners have used draft versions of the planning guide to plan operations in Haiti, Sudan, and Kosovo, but implementation of the resulting plans has been limited. Only the plan for Haiti was implemented. The plan for Sudan was not implemented because it was completed just as the government of Sudan and opposition groups signed a peace accord. Interagency planning for potential operations in Kosovo is ongoing. According to State officials, the administration is using interagency processes created in NSPD-1, Organization of the National Security Council System, for operations in Afghanistan and Iraq. NSPD-1 established the process for coordinating executive departments and agencies in the development and implementation of national security policies, which includes the interagency Principals Committee, Deputies Committee, and policy coordination committees. In May 2004, the President issued NSPD-36 to direct U.S. operations in Iraq following the transfer of sovereignty to the Iraqi government. This directive made State responsible for the direction, coordination, and supervision of all U.S. government employees, policies, and activities in Iraq, except those under the command of an area military commander or seconded to an international organization. According to the directive, the Commander of the U.S. Central Command—under the authority, direction, and control of the Secretary of Defense—continues to be responsible for U.S. efforts with respect to security and military operations in Iraq, including U.S. efforts in support of training and equipping Iraqi security forces. In April 2006, the U.S. embassy in Baghdad and the U.S.-led Multi-National Force-Iraq developed their first joint campaign plan for Iraq and issued a revision to their joint plan in July 2007. We found that NSPD-44, related State and administration guidance, and the planning framework collectively do not provide clear direction in three key areas. 
First, S/CRS’s roles and responsibilities conflict with those assigned to State’s regional bureaus and Chiefs of Mission in the Foreign Affairs Manual. Second, guidance is inconsistent regarding S/CRS’s responsibilities for conflict prevention efforts, which could compromise the office’s ability to fulfill its mandate. Third, the lack of a common definition for reconstruction and stabilization operations poses an obstacle to interagency collaboration. First, S/CRS’s roles and responsibilities conflict with those of State’s regional bureaus and Chiefs of Mission. In October 2005, we reported that collaborating agencies must agree on how to lead collaborative efforts. According to the Foreign Affairs Manual, each regional bureau is responsible for U.S. foreign relations with countries within a given region, including providing overall direction, coordination, and supervision of U.S. activities in the region. In addition, Chiefs of Mission have authority over all U.S. government staff and activities in their countries. As S/CRS initially interpreted NSPD-44, S/CRS’s roles and responsibilities included leading, planning, and coordinating stabilization and reconstruction operations; these responsibilities conflict with those of the regional bureaus and Chiefs of Mission. S/CRS officials stated that they expected the next version of the Foreign Affairs Manual to include a clearly defined and substantive description of the office’s roles. Second, guidance varies regarding S/CRS’s responsibility for preventing conflicts. NSPD-44 and the memo announcing S/CRS’s creation include conflict prevention as one of the office’s responsibilities. However, S/CRS’s authorizing legislation and the State memo aligning S/CRS with the Director of U.S. Foreign Assistance (DFA) do not explicitly include conflict prevention as a responsibility. Ambiguity about S/CRS’s prevention role could result in inadequate prevention efforts. 
A DOD official in the Global Strategic Partnerships office stated that responsibility for prevention is not currently assigned to anyone, and the work might not be done without such an assignment. Third, the lack of a common definition for reconstruction and stabilization operations poses an obstacle to effective collaboration under the framework. In our October 2005 report, we found that collaborative efforts require agency staff to define and articulate a common outcome or purpose. While the framework includes definitions for reconstruction and stabilization, it does not define what constitutes stabilization or reconstruction operations or explain how these operations differ from other types of military and civilian foreign assistance operations, such as counterinsurgency operations, counterterrorism operations, and standard development assistance. In addition, while S/CRS has developed a list of basic terms related to reconstruction and stabilization, staff from other bureaus and agencies had different definitions of these terms. As a result, it is not clear when agencies and bureaus are expected to apply the framework. S/CRS staff said that it is difficult to clearly define reconstruction and stabilization and difficult to determine when a response to a crisis constitutes a reconstruction or stabilization operation. Prior GAO work shows that the lack of a clear definition can pose an obstacle to improved planning and coordination of reconstruction and stabilization operations. In our previous report on DOD's stability operations approach, we found that the lack of a clear and consistent definition of stability operations caused confusion among military planners and limited progress in strengthening stability-operations capability. State and other U.S. civilian agencies have concerns about the planning framework for three key reasons. First, some civilian interagency partners are concerned that S/CRS is assuming their traditional roles and responsibilities.
Staff from one of State’s regional bureaus believed that S/CRS had enlarged its role in a way that conflicted with the Regional Assistant Secretary’s responsibility for leading an operation and coordinating with interagency partners. USAID staff noted how their agency had planned and coordinated reconstruction operations in the past and questioned why S/CRS now had these roles. Although most agency staff and outside experts we interviewed agreed that interagency coordination should improve, some USAID and State employees questioned why NSC was not given the primary role for planning and coordinating stabilization and reconstruction operations or for implementing NSPD-44. USAID and regional bureau staffs also said some aspects of the planning framework were unrealistic, ineffective, and redundant since interagency teams had already devised planning processes for ongoing operations in accordance with NSPD-1. For example, planning for U.S. assistance to Sudan and Darfur before 2005 was led by State’s Bureau of African Affairs. In 2005, S/CRS applied an early version of the planning guide to ongoing efforts in Sudan. USAID staff involved in both the regional bureau-led planning and S/CRS-led planning stated they were frustrated that S/CRS staff were not well-versed in Sudan policy and had to be educated before planning could occur. Other staff said S/CRS should focus more on filling the gaps in planning and operational mechanisms and focus less on policy development. Concerns about roles and responsibilities have led to confusion and disputes about who should lead policy development and control resource allocation. As a result, some of State’s regional bureaus have resisted applying the new interagency planning process to particular reconstruction and stabilization operations. 
S/CRS staff said one regional bureau discouraged the office’s involvement in a country that S/CRS identified as appropriate for the framework; another bureau is generally reluctant to allow S/CRS to participate in its efforts in the region. In addition, State and other agency staff said S/CRS had conflicts with DFA over which office controlled resource allocation for these operations. These disputes made it difficult for S/CRS to coordinate and plan reconstruction and stabilization operations using the framework. Second, some interagency partners stated that senior officials have provided limited support for S/CRS and its planning framework. In our October 2005 report, we stated that committed leadership from all levels of an organization is needed to overcome the barriers that exist when working across agency boundaries. Staffs from various State offices said senior officials did not communicate strong support for S/CRS or the expectation that State and interagency partners should follow its framework for planning and coordinating reconstruction and stabilization operations. In addition, S/CRS was not selected to lead planning for recent high-priority operations. When the office was created in 2004, S/CRS and other State officials agreed that it would not focus efforts in Afghanistan and Iraq because these operations had existing processes, and policymakers feared that the scope of those operations would overwhelm S/CRS. However, S/CRS has not been given key roles for operations that emerged after its creation, such as the ongoing efforts in Lebanon and Somalia, which several officials and experts stated are the types of operations S/CRS was created to address. These officials and experts stated that S/CRS has a large responsibility but little authority and no resources to achieve it. Third, interagency partners believe the planning process, as outlined in the draft planning guide, is too cumbersome and time consuming for the results it produces. 
Officials who participated in the planning for Haiti stated that the process provided more systematic planning, better identification of interagency goals and responsibilities, and better identification of sequencing and resource requirements. However, some officials involved in planning operations for Haiti and Sudan stated that using the framework was time consuming, involved long meetings and extra work hours for staff, and was cumbersome to use because it was overly focused on process details. Staff also said that, in some cases, the planning process did not improve outcomes or increase resources, particularly since S/CRS has few resources to offer. Other officials were frustrated when S/CRS processes were applied to interagency planning efforts that they believed were already functioning. As a result of these concerns, officials from some offices and agencies expressed reluctance to work with S/CRS on future reconstruction and stabilization plans. State is taking steps to strengthen the framework by revising and updating its draft planning guide based on feedback from other agencies and participants. S/CRS said it would commit to ensuring that the S/CRS-facilitated planning process is not duplicative or overly burdensome relative to its results and intends to provide assistance to State regional bureaus. S/CRS also said the revisions would provide more details about the framework's implementation at the field level and metrics to assess progress. State officials also said S/CRS's realignment under DFA would strengthen S/CRS's control over reconstruction and stabilization resources. On March 12, 2007, the Secretary of State aligned S/CRS with DFA, while still maintaining a direct reporting relationship between S/CRS and the Office of the Secretary. DFA is charged with reorganizing U.S. foreign assistance and has authority over all State and USAID foreign-assistance funding and programs.
However, it is not clear how the change will affect S/CRS's role and the use of the framework. DFA has procedures and tools to guide the development of operational plans for foreign assistance, and its staff said some of those processes would likely be applied to S/CRS planning. According to S/CRS officials, S/CRS and DFA have recently developed a more productive working relationship than they had in the past. For example, the two organizations recently settled a dispute over funds State could receive from DOD under section 1207 of the National Defense Authorization Act for Fiscal Year 2006. This act authorized the Secretary of Defense to transfer up to $100 million per year in fiscal years 2006 and 2007 to State to be applied to stabilization and reconstruction operations. According to State and DOD staff, in 2006 only $10 million was transferred to State due to a dispute between S/CRS and DFA over which office controlled the money. However, according to the March 2007 memo aligning S/CRS with DFA, S/CRS would be responsible for overseeing the transfer and use of these funds. S/CRS provided documents that indicated that State had obligated approximately $99.7 million of the $100 million available under section 1207 for fiscal year 2007. This funding was applied to ongoing stabilization and reconstruction operations in Haiti, Nepal, Colombia, Yemen, and Somalia; to the Trans-Sahara Counterterrorism Partnership; and to infrastructure, economic development, rule of law programs, and counterterrorism activities in the Philippines, Indonesia, and Malaysia. In addition, S/CRS participated in DFA's review of U.S. assistance to some countries for fiscal year 2008 and, as S/CRS acquires new staff, it plans to assume responsibility for the budget process of countries in DFA's "rebuilding" category.
Although S/CRS has not finished updating the framework guide or determined its role under DFA, it has taken other steps to strengthen the use of the framework and prepare interagency partners to coordinate effectively. For example, S/CRS offers Foreign Service Institute courses to train interagency participants in planning stabilization and reconstruction operations, leading and managing interagency coordination for such operations, and applying tools for early warning and conflict assessment. S/CRS reported that 352 federal employees participated in its training courses in 2006 and 452 employees participated in 2007. The majority of participants were from State, DOD, and USAID, although S/CRS reported that staff from seven other agencies also attended classes. Course instructors said it was difficult to attract participants from other agencies and described advertising to those agencies as ad hoc, in part because the Foreign Service Institute does not have an up-to-date list of contacts. S/CRS staff said they were exploring other strategies for recruiting course participants, such as identifying key agency leaders who agree that their staffs should attend. S/CRS also has developed tools and information to strengthen reconstruction and stabilization operations, such as information on guiding concepts and terms and tools for early warning and prevention, assessing best practices, and applying lessons learned. Although S/CRS made efforts to strengthen both coordination and the commitment of key DOD officials to the goals of S/CRS, several differences in military and civilian planning capacities and procedures pose obstacles to effective coordination. First, differences in planning capacities and resources make coordination difficult. 
In our report on DOD's stability operations approach, we found that DOD and non-DOD organizations do not fully understand each other's planning processes, and non-DOD organizations have limited capacity to participate in DOD's full range of planning activities. State officials noted that State's planning differs from DOD's; State is more focused on current operations and less focused on the wide range of potential contingency operations for which DOD must plan. State does not have a large pool of planners who can deploy to DOD's combatant commands. DOD officials noted that their efforts to include non-DOD organizations in planning and exercise efforts were stymied by the limited number of personnel those agencies can offer. State officials indicated that State does not have DOD's capacity to staff operations and planning; both DOD and State staff doubted that civilian capacity and resources would ever match the levels desired. Second, State generally does not receive DOD military plans as they are being developed, which restricts its ability to harmonize reconstruction and stabilization efforts with military plans and operations as required by NSPD-44. DOD does not have a process in place to share, when appropriate, information with non-DOD agencies early in plan development without specific approval from the Secretary of Defense. DOD's hierarchical approach limits interagency participation while plans are being developed by the combatant commands at the strategic, operational, and tactical levels. NSPD-44 working groups are developing a process for reviewing military plans, when appropriate, but are not yet ready to use it. Third, agency staff and outside experts have found that differences in organizational structure, terminology, and information systems pose obstacles to effective coordination between military and civilian agencies.
For example, S/CRS found that differences between civilian agencies' headquarters and field organization and the strategic, operational, and tactical organization of the military can make coordination more difficult. The Administration's July 2007 report to Congress stated it was developing common standards and systems, including blogs and other technologies, to address inconsistencies in U.S. information management systems and to support interagency collaboration and communication. In our stability operations report, we recommended that the Secretary of Defense, in coordination with the Secretary of State, provide implementation guidance on the mechanisms needed to facilitate and encourage interagency participation in the development of military plans; develop a process to share planning information with non-DOD agencies early in the planning process, as appropriate; and orient DOD and non-DOD personnel in each agency's planning processes and capabilities. In commenting on the report, DOD said it partially agreed with our recommendations but did not indicate the steps it would take to implement them. State has begun developing three civilian corps to deploy rapidly to international crises but has not addressed key details for establishing and maintaining these units. First, State created two units within the department—the Active Response Corps (ARC) and the Standby Response Corps (SRC)—and has collaborated with several other U.S. government agencies to create similar units. State and other agencies, however, face challenges in establishing these units, including (1) difficulties in achieving planned staffing levels for ARC and providing training opportunities for State's SRC volunteers, (2) agencies' inability to secure resources for operations not viewed as part of their core missions, and (3) the possibility that deploying volunteers could result in their home units having insufficient staff.
Second, in May 2007, State began an effort to establish the Civilian Reserve Corps (CRC), which would be made up of U.S. civilians, such as civil engineers, police officers, judges, and public administrators, whose skills and experience are useful for stabilization and reconstruction operations but not readily available within the U.S. government. If deployed, reservists would become federal employees. State, however, does not yet have congressional authority to establish the CRC or to provide the planned benefits package for CRC personnel. In addition, State has not clearly defined the types of missions for which CRC would be deployed. Further, State has estimated the costs for establishing and keeping CRC ready to deploy, including costs for recruiting, training, and equipping CRC personnel, but these estimates do not include the costs of deploying CRC personnel to other countries or sustaining them once deployed. To meet NSPD-44 requirements for developing a strong civilian response capability, State and other U.S. agencies developed internal mechanisms to reassign personnel in support of stabilization and reconstruction operations. S/CRS has taken the lead in expanding State's internal capacity to respond to conflict by creating ARC and SRC. S/CRS also collaborated with several other U.S. government agencies to initiate the development of ARC and SRC units within those agencies. In 2006, State developed ARC within S/CRS to deploy during the initial stage of stabilization and reconstruction operations. S/CRS has 15 temporary staff positions for ARC; ARC staff serve 1-year rotations. In October 2007, 10 of the 15 authorized positions were staffed. ARC staff deploy to unstable environments to assess countries' or regions' needs and help plan, coordinate, and monitor a U.S. government response.
Since 2006, ARC staff have deployed to seven locations: (1) Sudan, to help implement the Darfur Peace Agreement; (2) Eastern Chad, to monitor the displacement of civilians resulting from the conflict in Darfur; (3) Lebanon, to assist with the evacuation of American citizens and to coordinate assistance immediately following the Israeli-Hezbollah conflict; (4) Kosovo, to help plan for a follow-on to the United Nations Mission to Kosovo; (5) Liberia, to coordinate reforms of the security sector; (6) Iraq, to assist with integrating new Provincial Reconstruction Team members; and (7) Haiti, to plan the implementation and oversight of programs to improve security, local government capacity, and economic opportunity in Cité Soleil. According to S/CRS, regional bureau staff, and State's Office of the Inspector General, ARC involvement and performance in these operations has been positive. When not deployed, ARC members engage in training and other planning exercises and work with other S/CRS offices and State bureaus on related issues to gain relevant expertise. SRC would deploy during the second stage of a surge to stabilization and reconstruction operations. SRC would support the activities of ARC when additional staff or specialized skills are required. Unlike ARC, SRC does not have dedicated staff positions. Rather, when not deployed, current employees on the SRC roster serve in other capacities throughout State. Currently, SRC is composed of about 90 State employees and 210 State retirees. In July 2007, NSC approved S/CRS plans to increase SRC to a roster of 500 volunteers government-wide by fiscal year 2008, and to a roster of 2,000 volunteers government-wide by fiscal year 2009. If called upon, SRC members would be available for deployment within 60 days and could be deployed for up to 6 months. According to S/CRS staff, the office aims to have up to one-quarter of this standby corps ready for deployment at any one time.
However, to date, S/CRS has deployed SRC members to only two ongoing operations: one to Sudan in support of the Darfur Peace Agreement and one to Chad to support refugees from Eastern Darfur. Although S/CRS has started working with other U.S. agencies to establish units similar to ARC and SRC, these efforts are in very early stages. Currently, only USAID and the Department of the Treasury have established mechanisms for responding rapidly to stabilization and reconstruction missions. USAID uses the Office of Foreign Disaster Assistance and the Office of Transition Initiatives to respond to conflict situations. In addition, USAID has started developing its own internal surge capacity and has identified 15 staff available for immediate deployment to crises. USAID's Bureau of Democracy, Conflict, and Humanitarian Assistance developed a proposal to create a civilian reserve office to respond to stabilization and reconstruction operations and requested funds to hire, train, equip, and deploy more than 50 staff specifically for this purpose. The Department of the Treasury's Office of Technical Assistance has ongoing programs around the world and intends to build the capacity to lead long-term stability operations. In addition, the Office of Technical Assistance developed the First Responder Initiative in 2004, which includes approximately 30 staff who are willing to deploy rapidly to conflict areas in support of stabilization and reconstruction operations. State and other agencies face challenges in establishing their rapid response capabilities. These challenges include (1) difficulties in achieving planned staffing levels for ARC and providing training opportunities for State's SRC volunteers, (2) agencies' inability to secure resources for operations not viewed as part of their core missions, and (3) the possibility that deploying agency staff and SRC volunteers would result in staff shortages in their home units.
S/CRS has had difficulty establishing positions and recruiting for ARC and training SRC members. S/CRS plans to increase the number of authorized staff positions for ARC from 15 temporary positions to 33 permanent positions, which State included in its 2008 budget request. However, according to S/CRS staff, it is unlikely that State will receive authority to establish all 33 positions. Although S/CRS has not had difficulty recruiting SRC volunteers, it does not presently have the capacity to ensure they are properly trained for participating in stabilization and reconstruction operations. ARC staff and SRC volunteers would be required to complete five courses offered jointly by S/CRS and the Foreign Service Institute. According to S/CRS staff, the Foreign Service Institute does not currently have the capacity to train the 1,500 new volunteers S/CRS plans to recruit in 2009. S/CRS is studying ways to correct the situation. Although other agencies have begun to develop a stabilization and reconstruction response capacity, most have limited numbers of staff available for rapid responses to overseas crises. Most agencies' missions are domestic in nature. Nonetheless, domestic policy agencies, including the Departments of Homeland Security and Justice, operate overseas programs. However, officials from these agencies said international programs are viewed as extensions of their domestic missions. As a result, it is difficult for these agencies to secure funding for cadres of on-call first and second responders. Finally, State and other agencies said that deploying volunteers can leave home units without sufficient staff and, as a result, they must weigh the value of deploying volunteers against the needs of their units. For example, when not deployed to stabilization and reconstruction operations, current State SRC volunteers serve normal duty rotations at overseas posts or within State's various bureaus and offices within the United States.
According to State's Office of the Inspector General, S/CRS has had difficulty getting State's other units to release the SRC volunteers it wants to deploy in support of stabilization and reconstruction operations. The home units of the volunteers do not want to become short of staff or lose high-performing staff to other operations. Other agencies reported a reluctance to deploy staff overseas or establish on-call units because doing so would leave fewer workers available to complete the offices' work requirements. Some civilian agencies recently agreed to identify, train, and deploy employees to stabilization and reconstruction operations provided that State fund the efforts. According to S/CRS staff, however, the training and deployment of non-State ARC and SRC would not begin until fiscal year 2009. In 2004, S/CRS developed an initial concept for CRC, which would be deployed in support of stabilization and reconstruction operations. CRC would be composed of U.S. civilians, such as civil engineers, police officers, judges, and public administrators, whose skills and experience are useful for stabilization and reconstruction operations but not readily available within the U.S. government. Reservists would serve 4-year terms of voluntary service and, if called upon, would deploy for rotations of up to 1 year. Reservists would remain in their daily jobs until called upon for service and would be ready for deployment within 30 to 60 days. Deployed CRC personnel would be classified as full-time term federal employees, with the authority to speak for the U.S. government and manage U.S. government contracts and employees. Volunteers would receive training upon joining CRC and would be required to complete annual training. In addition, they would receive training specific and relevant to an operation immediately before deployment.
According to S/CRS staff, NSC has approved plans to develop a roster of 2,000 volunteers by fiscal year 2009; however, a BearingPoint study commissioned by S/CRS found that CRC would require at least 3,550 volunteers to meet CRC goals. The BearingPoint study also noted that decisions about CRC's roster size would likely evolve over time. In addition, a panel of experts convened by the Congressional Research Service concluded that the proposed roster may represent only a portion of what is likely required. The panel noted that simultaneously deploying CRC to two large and one small operation, as defined by BearingPoint, could require deploying the entire CRC roster. S/CRS staff said the office would assess whether to expand the roster in subsequent years. State cannot spend any funds for the CRC until Congress has authorized the CRC's establishment. In 2007, Congress granted State the authority to reallocate up to $50 million of Diplomatic and Consular Programs funds to support and maintain CRC. However, the legislation specified that no money may be obligated without a subsequent act of Congress. Legislation that would authorize CRC is pending in both the Senate and the House of Representatives, but as of October 2007, neither chamber had taken action on the bills. In addition, State needs congressional authority to provide key elements of the planned compensation package for deployed volunteers. Under current plans, deployed volunteers would become full-time term federal employees and would receive compensation and benefits similar to those received by Foreign Service employees.
Such compensation and benefits would include salary commensurate with experience; danger, hardship, and other mission-specific pays, benefits, and recruitment bonuses for hard-to-fill positions; overtime pay and compensatory time; leave accrual and payment for unused leave upon service completion; federal health, life, and death benefits, and medical treatment while deployed; dual compensation for retired federal workers; and the ability to count deployed time toward retirement benefits. The pending legislation would address some of the compensation authorities needed by State to offer the full proposed benefits package to CRC personnel. Specifically, it would authorize State to provide the same compensation and benefits to deployed CRC personnel as it does to members of the Foreign Service. However, the proposed legislation does not address whether deployed CRC personnel would have competitive hiring status for other positions within State or whether the time deployed would count toward government retirement benefits. In addition, deployed personnel would not have re-employment rights similar to those for military reservists. Currently, military reservists who are voluntarily or involuntarily called into service have the right to return to their previous place of employment upon completion of their military service requirements. However, the pending legislation to authorize CRC does not include similar rights for deployed CRC personnel. S/CRS staff said that the Civilian Reserve Task Force would assess whether re-employment rights are necessary based on the experience of recruiting the first 500 personnel. Further, S/CRS is moving the civilian reserve concept forward without a defined set of potential missions in which CRC would participate. 
According to S/CRS staff and pending legislation in the House and Senate that would authorize CRC, reservists would deploy to specific nonhumanitarian stabilization and reconstruction missions when called upon by the President. However, as with the planning guide and IMS, there is no agreed-upon definition for what constitutes a stabilization and reconstruction mission. S/CRS staff said they are still working through the conceptual differences between these and other types of operations, such as for counterinsurgency and counterterrorism, but that under its current approach, CRC could be deployed to almost any operation in a conflict zone. Although State has estimated some costs for establishing and sustaining CRC at home, the estimates do not include the costs of deploying CRC personnel to other countries or sustaining them once deployed. As shown in table 1, State has identified about $135 million in estimated costs for establishing and sustaining CRC at home during fiscal years 2008 and 2009. In comparison, BearingPoint's study estimated that a 3-year startup period would cost approximately $341 million. Under current State plans, these funds would come from the fiscal year 2007 reallocation authority and from State's fiscal year 2009 budget. The administration did not request any funds for CRC in fiscal year 2008. If Congress authorizes the CRC, State plans to obligate approximately $26 million of the $50 million authority in fiscal year 2007 supplemental funds to market the program and recruit, screen, and enroll the first 500 CRC personnel, including 350 with expertise in rule of law issues ($7.7 million); train the first 500 personnel ($5.1 million); purchase equipment such as armored vehicles, police weapons, electronics, cots, tents, and body armor ($2.3 million); administer CRC, such as establishing a home office and a U.S.
Deployment Center, and hiring 37 new government staff and contractor positions to manage CRC's day-to-day administrative functions ($6.4 million); and compensate CRC personnel when they are being trained ($4.2 million). State currently estimates that it will cost about $109 million to expand the CRC to 2,000 personnel in fiscal year 2009 (see table 1). In this phase, State would hire up to 26 additional administrative staff and provide training for the new CRC volunteers. As of October 2007, the Office of Management and Budget had not yet approved State's request for $109 million. The actual funding request for 2009 may differ from these estimates. S/CRS estimates that the annual costs for sustaining at home a 2,000-volunteer CRC would be up to $47 million. According to S/CRS staff, these annual costs include the activities needed to ensure that CRC personnel are ready to deploy. However, they do not include costs for deploying CRC personnel outside the United States or sustaining them once overseas. Deployment and overseas sustainment costs could include security costs, which may be high in a conflict zone; salaries and allowances; operation and infrastructure costs, including for facilities; and life support, such as food, lodging, and medical support. Government personnel and outside experts in national security issues agree that the U.S. government must improve its capacity to plan for and execute stabilization and reconstruction operations. To address these issues, S/CRS and its interagency partners have worked to develop a new interagency planning and coordination framework and rapid response corps of civilian government and nongovernment personnel. Because the framework has never been fully applied, its benefits and drawbacks remain unknown. However, concerns about roles and responsibilities and the value of the framework have slowed its acceptance by interagency partners.
Although there is no requirement that NSC approve all elements of the framework, without such approval it will be difficult to ensure that U.S. government agencies collaborate and contribute to interagency planning efforts to the fullest extent possible. S/CRS has not completed developing plans to fully establish and maintain CRC, but is seeking authorization to begin recruitment of CRC volunteers. Although State received authority to reallocate up to $50 million for CRC, a separate act of Congress is required to authorize CRC before State may obligate that or future funding. S/CRS has developed a plan for using this funding to train, equip, and keep ready to deploy up to 2,000 CRC personnel by fiscal year 2009. However, costs of deploying CRC personnel to operations outside of the United States or of sustaining them at their new posts are not included. In addition, S/CRS has not yet specified types of missions for which the CRC would be used. Moreover, failure to provide full benefits and re-employment rights could affect State’s ability to recruit and retain personnel for CRC. These are critical elements for Congress to consider when debating the long-term commitment associated with authorizing CRC and the future oversight of CRC operations and effectiveness. To strengthen interagency planning and coordination of stabilization and reconstruction operations, we recommend that the Secretary of State clarify and communicate specific roles and responsibilities within State for S/CRS and the regional bureaus, including updating the Foreign Affairs Manual. In addition, we recommend that the Secretary, with the assistance of interagency partners, finish developing the framework and test its usefulness by fully applying it to a stabilization and reconstruction operation. 
To better understand the long-term fiscal and oversight commitments that would accompany authorizing CRC, when considering whether to grant such authority, the Congress should consider requiring the Secretary of State, in consultation with other relevant agencies, to report on the activities and costs required for its development; the administrative requirements and annual operating costs once it is established, including for sustainment at home, deployment, and sustainment once deployed; the types of operations for which it would be used; and potential obstacles that could affect recruitment, retention, and deployment of personnel. We received written comments on a draft of this report from the Department of Commerce (Commerce) and State (see appendixes II and III). In addition, State, DOD, and USAID submitted a joint statement on the draft report, which is included as part of State’s comments. The Departments of Agriculture (USDA), Commerce, Defense, Justice, and State, as well as USAID, also provided technical comments, which were incorporated into the report, as appropriate. The Departments of Homeland Security and the Treasury were provided copies of the draft report but did not comment. Commerce stated that the report was a good overview of the new process for planning and coordinating stabilization and reconstruction operations, but did not comment on the report’s recommendations and matter for congressional consideration. State said it partially concurred with our recommendations. It said that while it had no objections to the recommendations, it believes the progress made toward developing a civilian reconstruction and stabilization (R&S) capability was underreported. State said that the data GAO presented preceded a tremendous period of growth and change for the interagency process. In a joint statement, State, DOD, and USAID reiterated that the draft report did not reflect the achievements made over recent months, including the IMS, ARC, SRC, and CRC.
The joint statement did not comment on the report’s recommendations or matter for congressional consideration. When providing technical comments, USDA, Justice, and USAID each expressed strong support for the new planning and coordination framework and stated that they would continue to work with S/CRS to improve civilian deployment capabilities for stabilization and reconstruction operations. USAID further stated that more work is needed to clarify roles and responsibilities, particularly in the relationships between S/CRS and DFA, and between S/CRS and USAID. We disagree with the assertion that our draft report did not reflect changes that have occurred since the completion of our fieldwork. We completed our initial audit work in August 2007 and included in our draft report discussions and assessments on the framework elements NSC approved in March 2007 and on civilian response mechanisms. Our draft report did not include NSC-approved details for ARC, SRC, and CRC because those details were not provided until October 2007. We incorporated this new information into our final report, as well as other information from written and technical comments from six agencies. Our findings, conclusions, and recommendations reflect the status of the planning framework and CRC as of October 2007. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Agriculture, Commerce, Defense, Homeland Security, Justice, State, and the Treasury and to the Administrator for USAID. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-4128 or at christoffj@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To address both of the objectives of our review, we examined U.S. government documents and research and conducted more than 50 interviews with staff from 31 offices and bureaus at eight U.S. agencies with roles in reconstruction and stabilization operations (see table 2). We also interviewed staff members and reviewed reports and documents from eight U.S. government and independent research organizations. To determine the Department of State’s (State) efforts to improve interagency planning and coordination for stabilization and reconstruction operations, we interviewed current and former staff from the Office of the Coordinator for Reconstruction and Stabilization (S/CRS) and reviewed documentation on its development, roles, and responsibilities. Documents reviewed include Presidential Decision Directive 56, National Security Presidential Directives 1 and 44, Section 408 of the Consolidated Appropriations Act of 2005, the Foreign Affairs Manual, and internal State reports and memos. We also reviewed documentation from and held discussions with S/CRS, State’s regional and program bureaus, other agencies, and public and private research institutions on the development of the new planning framework for stabilization and reconstruction operations. Topics reviewed and discussed included mechanisms for triggering the process, roles and responsibilities of various actors, the Interagency Management System, the new planning template, and monitoring and evaluation requirements. We also discussed planning efforts and, where possible, reviewed resultant plans for stabilization and reconstruction operations in Haiti, Sudan, and Kosovo with S/CRS, staff from State’s relevant regional bureaus, and the U.S. Agency for International Development. 
Finally, we participated in five training courses on stabilization and reconstruction planning developed and taught by S/CRS staff in conjunction with the Foreign Service Institute. To determine State’s efforts to improve the deployment of civilians to these operations, we reviewed documents and interviewed State and other agencies’ staffs about the existing internal capacity each has for supporting stabilization and reconstruction operations and the actions they are taking to develop rapid deployment units and capabilities. We reviewed the development of the Active Response Corps, Standby Response Corps, and Civilian Reserve Corps by interviewing State staff from S/CRS, regional bureaus, select program bureaus, and the Office of the Inspector General. We reviewed BearingPoint’s study for creating and maintaining the Civilian Reserve Corps (CRC) and S/CRS’s plans for implementing the study’s recommendations; we examined proposals and assessments prepared by the U.S. Institute of Peace, the Institute for Defense Analyses, and the Congressional Research Service; and we reviewed pending legislation in the Senate and House of Representatives that would authorize CRC. Finally, we discussed S/CRS’s civilian reserve concept with staffs from other agencies, including the Departments of Agriculture, Commerce, Defense, Justice, and the Treasury, and the U.S. Agency for International Development, as well as with private research institutions, including the Brookings Institution, the Center for Strategic and International Studies, and the RAND Corporation. We conducted our review from July 2006 to October 2007 in accordance with generally accepted government auditing standards. We disagree with the assertion that our draft report did not reflect changes that have occurred since the completion of our fieldwork.
We completed our initial audit work in August 2007 and included in our draft report discussions and assessments on the framework elements NSC approved in March 2007 and on civilian response mechanisms. Our draft report did not include NSC-approved details for ARC, SRC, and CRC because those details were not provided until October 2007. We incorporated this new information into our final report, as well as other information from written and technical comments from six agencies. Our findings, conclusions, and recommendations reflect the status of the planning framework and CRC as of October 2007. We also have specific comments on points raised by State (see below). 1. While we are aware of the efforts S/CRS and regional bureaus have made in the countries cited, we note that S/CRS involvement in most of them includes the deployment of a small number of staff or the allocation of section 1207 funds, which we recognized in the report. We also note in the report that S/CRS applied its draft planning guide to operations in Haiti and Sudan, and we note the outcomes of those plans. We also report that interagency staff involved in those efforts had different points of view on the merits of the planning process, that the planning guide is still in development, and that S/CRS is revising the planning guide based on partners’ concerns. 2. We chose to discuss CRC separately because of the potential costs associated with its development and sustainment. However, we acknowledge within the report that State views civilian response mechanisms—ARC, CRC, and SRC—as the fourth major element of the framework. 3. We have changed the text in our report to reflect State’s comment. 4. We reported on the basic structures of the IMS. We note that it is designed to ensure coordination between Washington and the field, and between the civilian and military sectors of government. However, since IMS has never been used, it is premature to state whether it is an effective tool.
We found, however, that different documents outline different roles and responsibilities for S/CRS. While State and S/CRS have taken some steps to clarify S/CRS’ role, some interagency partners stated more must be done. For example, when providing comments on a draft of our report, USAID stated it would like more definition on the relationships between S/CRS and DFA, and S/CRS and USAID. State would seem to agree with this assessment since it plans to use exercises to identify gaps and clarify roles and responsibilities. Although we are encouraged that State plans to take these actions, we believe the true test of IMS’s effectiveness will come when it is applied to an actual operation. 5. We reported on the procedures for triggering the use of IMS and, once finalized, the planning guide. As with IMS, the true test of the effectiveness of these procedures will come when they are used for an actual operation. 6. We reported on the ongoing development of the draft planning guide, including its features; its use for planning operations in Haiti, Sudan, and Kosovo; and revisions S/CRS is making based on partners’ concerns. We also note that although NSC need not approve this element, such approval would add credibility to the guide and the framework as a whole. 7. Based on these comments and technical comments from State, we updated information on State’s plans for establishing CRC, including startup costs, annual costs, and authorizing legislation. We acknowledge that NSC approved a plan to establish by 2009 a roster of 2,000 CRC reservists who would deploy to stabilization and reconstruction operations. Although we constrain our discussion to higher-level considerations, we are encouraged by the list of achievements State says it recently made; however, we note that a number of them are still in the draft or conceptual stage of development.
In addition, we removed from the final report discussion on the punitive actions State could take against volunteers who refused to deploy. 8. We did not state that lists of possible locations for deployment of CRC volunteers should be maintained. Our finding and conclusion pertain to the lack of clarity for the type of operations for which CRC would be used. As stated in the report, State has not clarified how stabilization and reconstruction operations differ from other operations, such as counterinsurgency and counterterrorism, or traditional development assistance programs. Having a clear definition of the types of operations CRC volunteers could deploy to would enable State to better define the skill mix it needs for the CRC roster. It would also provide a basis for congressional oversight and a valuable check against potential misuse. 9. We updated our discussions of ARC and SRC based on information provided in these and other technical comments. We are encouraged that S/CRS has developed these courses—five of which GAO staff attended. However, based on our findings, we are concerned that S/CRS and the Foreign Service Institute may not have the capacity to provide full training to 3,000 SRC and CRC volunteers in fiscal year 2009. 10. We adjusted the report to reflect this new information. We note that approximately $80 million of the $99.75 million was obligated in the final month of the fiscal year. 11. We do not challenge State’s interpretation that the legislation implicitly authorizes S/CRS to engage in conflict prevention activities. Our point was to show that ambiguities between the sources of S/CRS authorities can lead to confusion among partners over S/CRS’s true roles and responsibilities. 12. Although agencies may have official positions that they support S/CRS and the new framework, our fieldwork revealed that many individuals within State’s regional and program bureaus and other agencies have not yet accepted it. 13.
We disagree with the assertion that our report does not reflect changes that occurred since the completion of our fieldwork. We completed our initial audit work in August 2007, and in October 2007 we obtained and incorporated additional information from agencies’ written and technical comments on a draft of our report. Our report reflects the status of the framework and development of civilian response capabilities as of October 2007. In addition to the individual contact named above, Judith McCloskey, Assistant Director; Sam Bernet; Lynn Cothern; Marissa Jones; and Sona Kalapura made key contributions to this report. Technical assistance was provided by Joseph Brown, Debbie Chung, Martin De Alteriis, Mark Dowling, Holly Dye, Francisco Enriquez, Timothy Fairbanks, Etana Finkler, Bradley Hunt, Marisela Perez, Nina Pfeiffer, and Jeremy Sebest.
In 2004, the Department of State created the Office of the Coordinator for Reconstruction and Stabilization to coordinate U.S. planning and implementation of stabilization and reconstruction operations. In December 2005, President Bush issued National Security Presidential Directive 44 (NSPD-44), charging State with improving coordination, planning, and implementation of such operations and ensuring that the United States can respond quickly and effectively to overseas crises. GAO was asked to report on State's efforts to improve (1) interagency planning and coordination for stabilization and reconstruction operations, and (2) deployment of civilians to these operations. To address these objectives, we conducted interviews with officials and reviewed documents from U.S. agencies and government and private research centers. The Office of the Coordinator for Reconstruction and Stabilization (S/CRS) is developing a framework for planning and coordinating U.S. reconstruction and stabilization operations. The National Security Council (NSC) has adopted two of three primary elements of the framework--the Interagency Management System and procedures for initiating the framework's use. However, the third element--a guide for planning stabilization and reconstruction operations--is still in progress. We cannot determine how effective the framework will be because it has not been fully applied to any stabilization and reconstruction operation. In addition, guidance on agencies' roles and responsibilities is unclear and inconsistent, and the lack of an agreed-upon definition for stabilization and reconstruction operations poses an obstacle to interagency collaboration. Moreover, some interagency partners stated that senior officials have shown limited support for the framework and S/CRS. Some partners described the new planning process, as presented in early versions of the planning guide, as cumbersome and too time consuming for the results it has produced.
S/CRS has taken steps to strengthen the framework by addressing some interagency concerns and providing training to interagency partners. However, differences in the planning capacities and procedures of civilian agencies and the military pose obstacles to effective coordination. State has begun developing three civilian corps that can deploy rapidly to international crises, but key details for establishing and maintaining these units remain unresolved. First, State created the Active Response Corps (ARC) and the Standby Response Corps (SRC), composed of U.S. government employees, to act as first responders to international crises and has worked with several agencies to create similar units. However, these efforts are limited due to State's difficulty in achieving planned staffing levels for ARC, a lack of training available to SRC volunteers, other agencies' inability to secure resources for operations unrelated to their core domestic missions, and the possibility that deploying employees to such operations can leave units without sufficient staff. Second, in 2004, State began developing the Civilian Reserve Corps (CRC). CRC would be composed of U.S. civilians who have skills and experience useful for stabilization and reconstruction operations, such as police officers, civil engineers, public administrators, and judges, who are not readily available within the U.S. government. If deployed, volunteers would become federal workers. S/CRS developed a plan to recruit the first 500 volunteers, and NSC has approved a plan to increase the roster to 2,000 volunteers in 2009. In May 2007, State received the authority to reallocate up to $50 million to support and maintain CRC, but it does not yet have the authority to obligate these funds. In addition, issues related to volunteers' compensation and benefits that could affect CRC recruitment and management would require congressional action.
Furthermore, State has not clearly defined the types of missions for which CRC would be deployed. State has estimated the costs to establish and sustain CRC at home, but these costs do not include costs for deploying and sustaining volunteers overseas.
NNSA, a separately organized agency within DOE, is responsible for the management and security of the nation’s nuclear weapons, nonproliferation, and naval reactor programs. To conduct these activities, NNSA’s fiscal year 2005 request is about $9 billion, with about $6.6 billion targeted for nuclear weapons programs managed by NNSA’s Office of Defense Programs. For many years, various external studies have found problems with the organization of NNSA’s principal activity—the Office of Defense Programs. For example, one such study found a dysfunctional management structure with convoluted, confusing, and often contradictory reporting channels, while another study cited ambiguities and overlaps in the roles of headquarters and the Albuquerque Operations Office as a primary source of inefficiencies and conflict within the program. In December 2000, we reported organizational problems at three levels—within the Office of Defense Program’s headquarters functions, between headquarters and the field offices, and between contractor-operated sites and their federal overseers. These problems resulted in overlapping roles and responsibilities for the federal workforce overseeing the nuclear weapons program and confusion and duplication of effort for the contractors implementing the program at sites within the nuclear weapons complex. In December 2002, NNSA formally announced the beginning of an overall reorganization and workforce reduction intended to enhance its operational efficiency and programmatic effectiveness. Prior to its December 2002 reorganization, NNSA’s organization consisted of multiple layers. In particular, under the Office of Defense Programs—NNSA’s largest program—seven area offices reported to three operations offices that in turn reported to the Deputy Administrator for Defense Programs. The Deputy Administrator then reported to the Administrator. Figure 1 shows NNSA’s prior organization. 
To remove a layer of management, NNSA closed the Albuquerque, Oakland, and Nevada operations offices. The new organization consists of eight site offices located at each of NNSA’s major contractors, one service center located in Albuquerque, New Mexico, and headquarters program offices that all report directly to the Administrator. NNSA headquarters sets requirements, defines policies, and provides high-level guidance. Site office managers are the designated contracting officers responsible for delivering federal direction to the contractor at each site and for ensuring the site’s safe and secure operation. The site office managers also manage each NNSA site office. Under the realignment, a single service center has been established in Albuquerque, New Mexico, to provide business and technical support services to the eight site offices and headquarters programs. Prior to the reorganization, about 200 staff provided these services in the Oakland and Nevada operations offices and in offices in Germantown, Maryland, and Washington, D.C. These services are now being consolidated in the new service center, resulting in the reassignment of the 200 staff to the Albuquerque service center. Figure 2 shows NNSA’s new organization structure. NNSA plans to staff the service center with 475 employees, down from 678 in December 2002. As part of its reorganization, NNSA decided to reduce the size of its federal staff. Originally, NNSA set an overall staff reduction target of 20 percent. However, in August 2003, NNSA reduced the target to 17 percent. The current target includes a 26 percent reduction at headquarters and a 30 percent reduction at the service center. Three site offices—Kansas City, Nevada, and Savannah River—are experiencing reductions, although overall staff size at all eight site offices will increase by 16 employees. NNSA is relying on a combination of buyouts, directed reassignments, and attrition to achieve these targets by its September 30, 2004, deadline. 
Standards that we have developed require federal agencies to establish and maintain an effective system of internal controls over their operations. Such a system is a first line of defense in safeguarding assets and preventing and detecting errors. Under our standards, managers should, among other things, ensure that their staffs have the required skills to meet organizational objectives, that the organizational structure clearly defines key areas of authority and responsibility, that progress is effectively measured, and that operations are effectively monitored. In addition to these internal control standards, in January 2001, and again in January 2003, we identified strategic human capital management as a governmentwide, high-risk area after finding that the lack of attention to strategic human capital planning had created a risk to the federal government’s ability to perform its missions economically, efficiently, and effectively. In that context, we have stated that strategic workforce planning is needed to address two critical needs: (1) aligning an organization’s human capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. There are five key principles that strategic workforce planning should address irrespective of the context in which the planning is done.
It should involve top management, employees, and other stakeholders in developing, communicating, and implementing the strategic workforce plan; determine the critical skills and competencies that will be needed to achieve current and future programmatic results; develop strategies that are tailored to address gaps in number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies; build the capability needed to address administrative, educational, and other requirements important to support workforce planning strategies; and monitor and evaluate the agency’s progress toward its human capital goals and the contribution that human capital results have made toward achieving programmatic results. In light of shortcomings in strategic human capital management reported by us, the President’s Management Agenda identified strategic management of human capital as a governmentwide initiative. Established in August 2001, the President’s Management Agenda identified a strategy for improving the management and performance of the federal government. The agenda included five governmentwide initiatives: the strategic management of human capital, competitive sourcing, improved financial performance, expanded electronic government, and budget and performance integration. Regarding strategic management of human capital, two principles are considered central to its success. One, people are assets whose value can be enhanced through investment. As with any investment, the goal is to maximize value while managing risk. Two, an organization’s human capital approaches should be designed, implemented, and assessed by the standard of how well they help the organization achieve results and pursue its mission. Effective strategic workforce planning is considered an essential element of strategic human capital management.
Also called human capital planning, it focuses on developing long-term strategies for acquiring, developing, and retaining an organization’s total workforce (including full- and part-time federal staff and contractors) to meet the needs of the future. NNSA’s reorganization has resulted in some progress in delineating lines of authority between NNSA headquarters and its field offices, thus addressing some past problems; however, at the working level, NNSA has not formalized a program management structure that identifies its program managers or what their responsibilities and qualifications should be, particularly regarding their role in directing and overseeing contractor activity under its new organization. Furthermore, the reorganization has created gaps in the responsibility for important safety oversight that need to be addressed. Without first clarifying such key management issues, NNSA cannot, among other things, ensure the improved discipline and accountability it seeks in managing its programs. By delineating lines of authority between NNSA headquarters and its field offices, NNSA’s reorganization has addressed past problems, such as communications problems resulting from overlapping roles and responsibilities of the federal workforce overseeing the nuclear weapons program. For example, according to NNSA site office managers, the streamlined structure has improved vertical communication because communication channels between headquarters and the field are more direct and do not involve an extra layer of management in the operations offices. Site office managers also state that by now reporting directly to the NNSA Administrator’s office, the time required to make decisions has been reduced. In addition, the realignment provides NNSA site office managers with additional authority to manage contractors and assigns them responsibility for the day-to-day security and safety of contractor operations. 
As a result, it has strengthened the hand of local NNSA site office managers who now have the authority to shut down operations at the sites, if necessary, due to security or safety concerns. Despite this progress, NNSA’s reorganization still suffers from two shortcomings. First, the reorganization plan does not yet fully delineate the authority and responsibility of program managers, who are responsible for ensuring that program goals and requirements are met, or reconcile these responsibilities with the mutual responsibilities of contracting officers and their designated representatives who manage the contract. Specifically, under the new reorganization, the contracting officer is responsible for appointing contracting officer representatives to carry out specific functions, such as monitoring, inspection, and other functions of a technical nature not involving a change in the scope, cost, or terms and conditions of the contract. These contracting officer representatives then assist in directing and overseeing the contractor for the programs that they represent. NNSA is attempting to improve program management accountability and discipline by requiring program managers to direct all work to the contractors through a contracting officer or a designated contracting officer representative instead of through the now defunct operations offices or by bypassing the formal contract administrators and informally directing the contractor, as was done in the past. NNSA’s policy on program management, however, is still being developed. NNSA’s Assistant Deputy Administrator for the Office of Program Integration told us that the exact number of program managers within the Office of Defense Programs has yet to be determined because disagreement exists within the program about who currently is or is not a program manager. Furthermore, NNSA has not yet articulated its qualification standards for program managers. These standards are important to program success.
As we noted in our report on NNSA’s Stockpile Life Extension Program, problems with the W-87 refurbishment occurred, in part, because the assigned program manager was not qualified to perform all required tasks and was not enrolled in DOE’s project management qualification program. Senior NNSA officials in headquarters expect NNSA’s policy to be issued by May 2004, and implementation plans for this policy to be developed by summer 2004. NNSA officials told us that even after the policy is issued, its implementation is expected to take some time because it will likely require a change in the behavior and culture of program managers and the manner in which they operate. NNSA’s delay in issuing program management policy and appointing program managers is currently creating confusion. According to NNSA’s existing policy concerning the appointment of contracting officer representatives, headquarters-based program officials must first be designated as program managers before they can be designated as contracting officer representatives for a site. As a result, any uncertainty surrounding the number of program managers and their responsibilities has the potential to disrupt the appointment of contracting officer representatives. However, despite the present uncertainty surrounding the designation of program managers, site offices are appointing contracting officer representatives. For example, the Sandia Site Office appointed 25 of its 36 contracting officer representatives using available NNSA headquarters staff, as of June 2003. However, NNSA provided us with a list of its designated program managers as of December 2003 (the latest date for which data were available) that did not officially recognize 21 of the 25 headquarters-based contracting officer representatives that had been formally appointed by the Sandia Site Office.
Until NNSA fully implements its policies to delineate program management authority and responsibility, it remains unclear how, under the new reorganization and management structure, program management authorities and responsibilities will be exercised in the day-to-day management of contractors and site operations. As a result, NNSA cannot ensure that the full discipline and accountability it seeks through its reorganization will be achieved or that its long-standing organizational structure problems are corrected. The second outstanding problem with NNSA’s reorganization is that it has created gaps in the responsibility for safety oversight that need to be addressed. For example, managers at NNSA’s Pantex Site Office, which oversees the contractor operating the Pantex Plant—an assembly/disassembly plant for nuclear weapons in Amarillo, Texas—stated that authority and responsibility for certain safety-related oversight is unclear. Specifically, according to the Pantex Site Office manager, when the realignment abolished the Albuquerque Operations Office, it left a void regarding who would take over certain nuclear explosive safety oversight activities previously performed by that office. Among other things, nuclear explosives safety oversight includes activities such as evaluating the adequacy of controls associated with tooling, testers, and operational processes to prevent and/or minimize the consequences of an accident involving nuclear explosives. While NNSA’s Assistant Deputy Administrator for Military Application and Stockpile Operations—an NNSA program—assumed overall responsibility for nuclear explosive safety, NNSA has not resolved exactly who is to provide the day-to-day oversight previously conducted by the Albuquerque Operations Office. In this regard, the Pantex Site Office manager stated that there is no clear procedure for conducting oversight to ensure the prevention of deliberate, unauthorized use of a nuclear weapon—an important goal of NNSA.
The Pantex Site Office manager—the risk acceptance official for the site—stated that he would therefore not authorize the continuation of certain work related to one current weapon system requiring use of a particular safety process. Furthermore, in October 2003, NNSA issued its safety-oriented “Functions, Responsibilities, and Authorities Manual,” intended to clarify issues concerning delineation of authority. However, according to the Assistant Manager for Nuclear Engineering at the Pantex Site Office, the manual still does not clarify the authority and responsibility of nuclear explosives safety oversight. Senior NNSA headquarters officials stated that they are aware of problems concerning nuclear explosive safety oversight and that corrective action plans have recently been developed and are scheduled to be implemented through 2006. The Defense Nuclear Facilities Safety Board expressed broader concerns in a December 8, 2003, letter to NNSA’s Administrator that many orders, directives, standards, supplemental directives, and site office procedures, which had been issued to help ensure the safe operation of NNSA’s defense nuclear facilities, have not been modified to reflect current roles and responsibilities within NNSA. The board further stated that in some cases, particularly those involving supplemental directives that the now-defunct Albuquerque Operations Office had issued, the documents may no longer have a clear owner within the NNSA organization, and deviations from the processes that these directives prescribed are now becoming more frequent within NNSA. NNSA’s reorganization is not likely to ensure that it has sufficient staff with the right skills in the right places because NNSA chose to downsize its federal workforce without first determining what critical skills and capabilities it needed to meet its mission and program goals.
Consequently, NNSA will not know the composition of its workforce until it completes the 17 percent workforce reduction on September 30, 2004—the deadline specified in the reorganization plan—and then determines the knowledge, skills, and capabilities of its remaining employees. Without a functional long-term workforce plan, NNSA runs the risk of facing further, more serious staff shortages or skill imbalances, thereby affecting its ability to adequately oversee its contractors. In December 2001, in addressing NNSA’s use of its excepted service authority, we reported that NNSA did not have the coherent human capital and workforce planning strategies it needed to develop and maintain a well-managed workforce over the long run. As a result, we recommended that NNSA not allocate any additional excepted service positions until it developed comprehensive human capital and workforce planning strategies. Subsequently, in February 2002, we testified that NNSA’s lack of a long-term strategic approach to ensure a well-managed workforce precluded it from identifying its current and future human capital needs, including the size of the workforce, its deployment across the organization, and the knowledge, skills, and abilities needed to fulfill its mission. Despite these earlier recommendations to develop thorough human capital and workforce planning strategies, NNSA embarked on a major initiative, expected to span nearly 2 years, not only to reorganize, but also to reduce the size of its workforce. NNSA’s December 2002 reorganization plan called for a reduction in its federal workforce from 1,695 to 1,356 staff, or a reduction of about 20 percent, by September 30, 2004.
The planned 20 percent reduction involved a 29 percent reduction in headquarters staff, a 26 percent reduction in administrative support staff through the closure of the three operations offices and the consolidation of administrative support staff in a new Service Center, and a 6 percent reduction in Site Office staff. A senior NNSA official stated that “getting things done” was a primary factor in deciding to quickly implement the reorganization and workforce reduction. As such, NNSA officials stated that the staff reduction targets were based more on judgment than on rigorous workload analysis. A senior NNSA official explained that NNSA managers knew that there was work overlap and redundancy in the organization, but were concerned that a more formal, rigorous analysis of requirements or workload could hamper what they believed was an urgent need to achieve organizational realignment and workforce reduction results. The official also said that NNSA management had decided that if and when staffing changes became necessary, such adjustments would then be made. Soon after the workforce reduction target was announced, the NNSA Administrator implemented what NNSA termed a “managed staffing process” to focus on its short-term staff reduction targets and deadline. He asked NNSA headquarters, service center, and site office managers to report their organization’s existing functions and staff in 2003, their anticipated changes to functions and associated staff requirements by the end of fiscal year 2004, and any staff surplus or deficit. Based on regular updates of this information, the NNSA Administrator has adjusted the total staff reduction target twice since December 2002, once in April 2003 and a second time in August 2003, to its current 17 percent target—primarily to accommodate an increase of 38 positions.
This new target is to be accomplished by an increase of 23 positions in headquarters and 40 positions in the site offices and a decrease of 25 positions at the Albuquerque Service Center. A February 2004 status report stated that NNSA created and staffed the 38 new positions to perform functions not previously identified, or for which original staffing targets were not adequate for mission accomplishment. NNSA is progressing towards its staff reduction targets and deadline primarily through buyouts, directed reassignments, and attrition combined with a freeze on hiring and promotions, although exceptions can be allowed to fill critical positions. A total of 174 staff have thus far taken the buyout, which could be as high as $25,000 per person depending on such factors as length of federal service and grade level. NNSA human capital managers report that 99 of the 200 administrative support staff in the Oakland, Las Vegas, Germantown, and Washington, D.C., offices have formally stated that they would relocate to the Albuquerque Service Center. However, officials are not sure how many staff will actually relocate because, for example, they believe that some staff do not really want to relocate and are seeking alternative employment. As of March 6, 2004, NNSA is 13 staff short of achieving its 17 percent staffing reduction target. NNSA has also begun a number of specific workload reduction initiatives intended to accomplish its mission with fewer federal personnel. However, the outcome of these initiatives may not be known for some time, so their effect on NNSA’s workforce capabilities in both the short term and the long term cannot be predicted. For example, in the area of safety, NNSA reduced the number of Site Office Facility Representatives from 68 in December 2002 to 53 in December 2003.
Site Office Facility Representatives are typically responsible for day-to-day oversight of contractor operations to ensure that the contractor’s work practices and performance are being completed in a safe and environmentally responsible manner. NNSA is pursuing changes to the Facilities Representative Program, among other things, to allow for greater coverage in areas of higher risk to the public, such as nuclear safety, and reduced coverage of standard industrial hazard facilities. NNSA is also considering shifting federal responsibility for employee safety to the contractor. While continuing to pursue its short-term workforce reduction goals, NNSA began to develop a framework to determine its long-term human capital needs. In December 2003, NNSA issued a workforce plan designed to comprehensively meet the requirements of DOE’s Human Capital Management Improvement Program and the strategic workforce planning aspect of the President’s Management Agenda. The framework specifically identified strategic workforce planning as a means to mitigate the impact of losing a large percentage of the NNSA workforce and as the process for ensuring that the right people with the right skills are in the right place at the right time. The workforce planning model for the longer term—Workforce Plan 2004—called for the analysis of present workforce competencies, the identification of competencies needed in the future, a comparison of future needs with the present workforce in order to identify competency gaps and surpluses, the preparation of plans for building the workforce needed in the future, and an evaluation process to ensure that the workforce planning model remains valid and that mission objectives are being met. Despite this effort, NNSA’s workforce plan is of limited usefulness because it depends on workforce data that are either already obsolete or not yet available.
For example, the number, skill, position, and location of employees are a moving target, subject to continuous change until the downsizing effort is completed in September 2004. Furthermore, several NNSA site office managers acknowledged that their workforce focus has been on their short-term downsizing objective. A senior NNSA official agreed that the agency’s workforce planning needed to be more long-term, but added that under the circumstances of NNSA’s organizational downsizing, management primarily focused on meeting short-term needs. NNSA human capital officials also told us that NNSA’s decreased reliance on DOE for practically all human capital management, resulting from NNSA’s creation as a separately organized agency under DOE in 2000, required the building of a human resource structure, staff, and operation, which has taken some time to get up and running. NNSA plans to update information in its workforce plan, including its workforce composition and skills, as well as determine workforce needs for the long term. With this information, NNSA can then conduct the skill gap analysis necessary to target recruitment, hiring, and training programs over the long term. As we have found in other government agencies, by carrying out downsizing without sufficient consideration of the strategic consequences, NNSA runs the risk of not having the right skills in the right place at the right time, thereby affecting its ability to adequately oversee its contractors and ensure the safety and security of its various facilities in the future. The situation may be further exacerbated by the fact that, according to NNSA estimates, 35 percent of NNSA employees will be eligible to retire in the next 5 years. The lack of adequate strategic and workforce planning in the course of downsizing efforts can degrade an agency’s ability to provide quality service and can lead to the loss of institutional memory and an increase in work backlogs.
The impact of gaps in the numbers and skills of staff used to carry out its contractor oversight mission is already becoming apparent. For example, NNSA site offices are 39 staff short of their targets, and some site offices, namely Pantex, Y-12, and Los Alamos, are having some difficulty filling critical skills in safety and security. At the Albuquerque Service Center, significant skill gaps exist for accountants and contract specialists. For example, the service center has only 26 of 54 contract specialist positions filled. NNSA’s preoccupation with short-term downsizing objectives and staffing strategy, without the benefit of a strategic human capital plan, may have contributed to the workforce imbalances it is now experiencing. NNSA’s implementation of its proposed risk-based approach to rely more on contractors’ assurances and self-assessments and less on NNSA’s direct oversight may be premature because NNSA’s reorganization has not yet established a program management structure or long-term workforce plan for ensuring that it has sufficient staff with the right skills in the right places. We and others have reported on a number of problems over the years related to NNSA’s performance of effective federal oversight of its contractors. Against this backdrop, NNSA has begun taking steps to accommodate implementation of the new contractor oversight approach in parallel with its reorganization. Under this new approach, contractors will develop comprehensive contractor assurance systems, or systems of management controls, and NNSA will primarily rely upon these systems and controls to ensure that contractors properly execute their missions and activities. Although the overall concept of a risk-based approach to federal oversight has merit, the unresolved issues stemming from NNSA’s major ongoing reorganization may compromise its ability to effectively carry out this approach while successfully meeting its responsibility for safe and secure operations.
NNSA’s reliance on contractors to operate its facilities and carry out its missions makes effective oversight of contractor activities critical to its success. Over the years, we have reported on problems related to NNSA’s performance of effective federal oversight of its contractors. For example:

In May 2003, we reported on problems with NNSA’s oversight, particularly regarding assessing contractors’ security activities. We noted that, without a stable and effective management structure and with ongoing confusion about security roles and responsibilities, inconsistencies had emerged among NNSA sites on how they assessed contractors’ security activities. Consequently, we stated that NNSA could not be assured that all facilities are subject to the comprehensive annual assessments that DOE policy requires.

Weaknesses in NNSA oversight also occurred at the Lawrence Livermore National Laboratory. Specifically, in our May 2003 report on a new waste treatment facility at the laboratory, we concluded that a delay in initiating storage and treatment operations at the new facility occurred because NNSA managers did not carry out their oversight responsibilities to provide clear requirements and ensure contractor compliance with these requirements.

In July 2003, we reported on problems with NNSA’s oversight, particularly with regard to cost and schedule, of the Stockpile Life Extension Program. In particular, we found that Life Extension Program managers used reports that contained only limited information on cost growth and schedule changes against established baselines. We also found that program managers believed that they had not been given adequate authority to properly carry out the life extensions.

In February 2004, we reported on problems with NNSA’s oversight with regard to business operations at the Los Alamos National Laboratory.
Beginning in the summer of 2002, a series of problems with business operations surfaced at the Los Alamos National Laboratory, raising questions about the effectiveness of controls over government purchase cards and property. Among the questions raised were allegations of fraudulent use of government purchase cards and purchase orders, concerns about the adequacy of property controls over items such as computers, and disputed rationales for the laboratory’s firing of two investigators. DOE and NNSA identified multiple causes for these business operations problems, one of which was that NNSA’s oversight was too narrowly focused on specific performance measures in the contract rather than on overall effectiveness. In addition to these concerns, DOE’s Office of Inspector General has raised broader concerns about the adequacy of oversight. For example, in November 2003, DOE’s Office of Inspector General released its annual report on management challenges, identifying oversight of contracts and project management as two of the three internal control challenges facing the department. Against this backdrop and in the midst of a major reorganization and staff reduction effort, NNSA is proposing to change its contractor oversight approach. NNSA’s August 2003 draft Line Oversight and Contractors’ Assurance System policy would rely more on contractor self-assessment and reporting, among other methods, and less on NNSA’s direct oversight. The proposal would require a comprehensive contractor assurance system, or system of management controls, to be in place and would primarily rely upon these systems and controls to ensure that its missions and activities are properly executed in an effective, efficient, and safe manner. NNSA would use a risk-based, graded approach to its oversight and tailor the extent of federal oversight to the quality and completeness of the contractors’ assurance systems and to evidence of acceptable contractor performance.
NNSA’s oversight functions would include review and analysis of contractor performance data, direct observations of contractor work activities in nuclear and other facilities, annual assessments of overall performance under the contract, and certifications by the contractor or independent reviewers that the major elements of risk associated with the work performed are being adequately controlled. NNSA stated in its draft policy and in public meetings before the Defense Nuclear Facilities Safety Board that the department plans to phase in this new oversight approach over the next few years. NNSA has already begun taking steps to accommodate implementation of the new contractor oversight approach in parallel with its reorganization. For example, the new contract effective October 1, 2003, between Sandia Corporation and NNSA’s Sandia Site Office describes 10 key attributes for its assurance system, such as having rigorous, risk-based, and credible self-assessments, feedback, and improvement activities, and using nationally recognized experts and other independent reviewers to assess and improve its work process and to carry out independent risk and vulnerability studies. Sandia’s contractor plans to implement “assurance systems” beginning with its low-risk activities in fiscal year 2004, and medium- and high-risk activities in fiscal year 2005. Once satisfied that the contractor’s assurance system is effective and results in an improvement in the contractor’s performance in key functional areas, NNSA will consider conducting oversight at the assurance systems level rather than at the level of individual transactions. At the time of our review, NNSA officials at the Sandia Site Office did not know how they would assess or validate the contractor assurance system or what level of assurance they would require before they would shift from “transactional” oversight to “systems level” oversight. 
Although the overall concept of a risk-based approach seems reasonable, we are concerned about NNSA’s ability to effectively carry it out. For example, considerable effort is needed at the Los Alamos and Lawrence Livermore National Laboratories to successfully implement a risk-based approach to laboratory oversight. According to the Associate Director for Operations at the Los Alamos National Laboratory, the laboratory’s ability to manage risk is at a beginning level of maturity. Other officials at the Los Alamos laboratory, including officials from the Performance Surety Division and the Quality Improvement Office, said that the laboratory and NNSA have different perceptions of risks at the laboratory and how to manage those risks. In our February 2004 report, we expressed concerns about NNSA’s oversight approach and warned that such autonomy for the laboratories was inadvisable this soon into the process of recovery from a string of embarrassing revelations. We recommended that NNSA maintain sufficient oversight of mission support activities to fulfill its responsibilities independently until the laboratories have demonstrated the maturity and effectiveness of their contractor assurance systems and the adequacy of contractor oversight has been validated. NNSA disagreed with our view of its proposal to rely more on a contractor’s system of management controls and less on NNSA’s own independent oversight, but acknowledged that there have been problems with oversight in the past. NNSA officials remained convinced that the proposed risk-based approach will be successfully implemented, resulting in improved contractor oversight. We continue to be concerned about whether NNSA is ready to move to its proposed system.
For example, during this review, officials from NNSA’s Nevada Site Office expressed concerns about the performance of the management and operating contractor for the Nevada Test Site, citing repeated problems with the contractor’s compliance with basic procedures. Among these were repeated incidents in which the contractor did not follow lock-out/tag-out procedures, resulting in, for example, the contractor drilling holes into wires that would cause power systems to shut down. Furthermore, the Defense Nuclear Facilities Safety Board, in recent public meetings, has expressed concerns about nuclear safety under the proposed NNSA contractor assurance policy and said that NNSA should not delegate responsibility for such an inherently high-risk area of operations. Finally, because NNSA has not fully determined (1) who will give program direction to its contractors and (2), through a comprehensive workforce plan, whether it has sufficient staff with the right skills in the right places, NNSA’s proposed approach to rely more on contractors’ assurances and self-assessments and less on NNSA’s direct oversight may be premature. NNSA is concurrently making significant and fundamental changes to its organization, workforce composition, and contractor oversight approach that require careful management forethought, strategy, and analysis. Preliminary indications are that some of these changes have had a positive effect on certain aspects of NNSA, but the final impact of these changes will not be apparent for several years. Specifically, NNSA’s reorganization has resulted in some progress in delineating authority and improving communication between headquarters and the field. However, the reorganization has not resolved confusion regarding authority over program management.
In addition, because NNSA downsized its federal workforce without first determining what critical skills and capabilities it needed, its workforce reduction targets were more arbitrary than data-driven, contributing to short-term skill imbalances and making data-driven workforce planning for the longer term more difficult. Specifically, NNSA cannot begin to conduct a formal, substantive skill gap analysis to plan for the long term until it completes the current workforce reduction and collects critical workforce data on knowledge, skills, and competencies, among other things. Finally, because important program management and workforce issues still need to be resolved, NNSA’s implementation of its proposal to rely more on contractors’ assurances and self-assessments and less on NNSA’s direct oversight appears to be premature. In order to increase the likelihood that NNSA’s reorganization will achieve its goal of increased management discipline and accountability in program management and contractor oversight, we are making three recommendations to the NNSA Administrator and the Secretary of Energy:

establish a formal program management structure, policy, and implementation guidance for directing the work of its contractors, especially concerning how program managers will interact with contracting officers at site offices to help direct and oversee contractor activity;

complete and implement data-driven workforce planning for the longer term that (1) determines the critical skills and competencies that will be needed to achieve current and future programmatic results, including contractor oversight; (2) develops strategies tailored to address gaps in the number, skills and competencies, and deployment of the workforce; and (3) monitors and evaluates the agency’s progress toward its human capital goals and the contribution that human capital results have made toward achieving programmatic results; and

postpone any decrease in the level of NNSA’s direct federal oversight of contractors until NNSA has a program management structure in place and has completed its long-term workforce plan.

We provided NNSA with a draft of this report for review and comment. NNSA agreed in principle with our recommendations; however, it felt that it already had efforts underway to address them. Specifically, with respect to our recommendation about program management, NNSA stated that it has established a formal process for using appropriately designated officials to direct contractor activity and that its formal program management policy was nearly established. We recognize in our report NNSA’s effort to develop processes and formalize its program management policy; however, we believe that NNSA needs not only a policy, but also a structure and implementation guidance so that the managers providing direction to NNSA’s contractors are clearly identified and can be held accountable. With respect to our recommendation on workforce planning, NNSA agreed with our recommendation, but it disagreed that its current plan was based on short-term or arbitrary management judgments. In this respect, our conclusions were based on discussions with knowledgeable senior agency officials at NNSA headquarters and site offices, as well as a review of NNSA management council minutes. More importantly, we continue to believe in, and NNSA does not dispute, the need for a long-term, data-driven workforce plan that will ensure that NNSA meets its long-term goals. Finally, regarding our last recommendation on federal oversight of contractors, NNSA stated that it had no intention of further decreasing direct oversight of contractors, was hiring staff to fill vacant positions at site offices, and that its proposed contractor assurance systems would only be implemented after a site manager/contracting officer was convinced that the contractor’s system would be at least as effective as the current system.
While we are pleased that NNSA has stated that it will not decrease its direct oversight, our recommendation is intended to ensure that NNSA has the critical systems it needs in place to perform its function—effective, direct federal oversight. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. At that time, we will send copies to the Secretary of Energy and the Administrator of NNSA, the Director of the Office of Management and Budget, and appropriate congressional committees. We will make copies available to others on request. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841. Major contributors to this report are listed in appendix II. In addition to the individual named above, Arturo Holguin, Robert Kigerl, Jonathan McMurray, Christopher Pacheco, Anthony Padilla, Judy Pagano, and Ellen Rubin made key contributions to this report.
The National Nuclear Security Administration (NNSA), a separately organized agency within the Department of Energy (DOE), is responsible for the management and security of the nation's nuclear weapons, nonproliferation, and naval reactor programs. NNSA oversees contractors that operate its facilities to ensure that activities are effective and in line with departmental policy. In December 2002, NNSA began implementing a major reorganization aimed at solving important long-standing organizational issues. GAO reviewed NNSA's overall reorganization efforts to assess (1) the extent to which it is addressing in practice the past problems concerning the unclear delineation of authority and responsibility, (2) workforce planning, and (3) its impact on federal oversight of contractor activities. NNSA's reorganization has addressed some past problems by better delineating lines of authority and improving communication; however, NNSA has not formalized a program management structure that identifies program managers or details their responsibilities and qualifications as they relate to the direction and oversight of contractor activity under the new organization. Without first resolving such key management issues, NNSA cannot, among other things, ensure the improved discipline and accountability it seeks in managing its programs. NNSA's reorganization is not likely to ensure that the agency has sufficient staff with the right skills in the right places because NNSA downsized its federal workforce without first determining the critical skills and capabilities needed to meet its mission and program goals. Consequently, NNSA will not know the composition of its workforce until it completes the 17 percent workforce reduction on September 30, 2004--the deadline specified in the reorganization plan--and then determines the knowledge, skills, and capabilities of its remaining employees. 
Without a functional long-term workforce plan, NNSA runs the risk of facing further, more serious staff shortages or skill imbalances, thereby diminishing its ability to adequately oversee its contractors. NNSA's implementation of a proposed risk-based approach to rely more on contractors' assurances and self-assessments and less on NNSA's direct oversight may be premature because it has not yet established a program management structure or long-term workforce plan for ensuring sufficient staff with the right skills in the right places. Under this proposal, contractors will develop comprehensive assurance systems, or systems of management controls, and NNSA will primarily rely upon these contractor systems and controls to ensure that contractors properly execute their work. Although the overall concept of a risk-based approach to federal oversight has merit, NNSA's proposed transition to conduct less direct federal oversight could be compromised by outstanding reorganization issues.
DOD frequently purchases products that are not available in the commercial marketplace. For example, DOD awards contracts for the development or production of weapon systems, including fighter aircraft, submarines, and radars. In these situations, DOD typically negotiates the price it will pay based on the cost to deliver the desired product. In negotiating prices, DOD seeks insight into contractors’ costs, such as salaries, wages, and benefits, the last of which includes pensions. When a purchase occurs in the commercial marketplace, the price for a commercial product includes the seller’s costs for materials and labor (including salaries, wages, and benefits), but the buyer has little insight into these costs. Contractors’ labor costs include pension benefits, since such benefits are a normal part of compensation. These pension costs are an indirect cost spread across multiple contracts at a particular contractor business division, as opposed to direct costs, such as the labor and material costs that can be associated with a specific contract (see figure 1). They are typically allocated to contracts based on direct labor costs. Pension costs are generally considered a fringe benefit, a category that includes costs associated with health benefits, group insurance, and other forms of nonwage compensation. In previous work, we found that of the three main types of employee benefits (health insurance, paid leave, and retirement benefits), health insurance is generally the most expensive and retirement benefits the least. Oversight of DOD contracts is primarily provided by two agencies: (1) the Defense Contract Management Agency (DCMA), which includes contracting officers who, as part of their duties, negotiate and agree upon indirect costs applied to contracts awarded by DOD acquisition commands and other buying offices; and (2) the
Defense Contract Audit Agency (DCAA), which audits projected and actual costs associated with DOD contracts to ensure they are allowable, allocable, and reasonable in accordance with CAS and Federal Acquisition Regulation (FAR) rules. These indirect cost oversight processes are not limited to DOD but apply to U.S. government contracts more broadly. DCMA and DCAA provide assistance related to indirect cost oversight for non-DOD agencies, such as the National Aeronautics and Space Administration (NASA) and the Department of Homeland Security. When DCMA, as the cognizant contract administration office, reaches an agreement on indirect costs, the agreement is applicable to all U.S. government contracts performed by that business unit. DOD contractors, like other private sector companies, may sponsor traditional defined benefit plans or defined contribution plans (such as 401(k) plans), which provide individual accounts for employees and allow for employer and employee contributions. They may sponsor multiple defined benefit plans, which typically cover different business lines or employee populations, such as salaried or hourly employees. Many defined benefit and defined contribution plans sponsored by DOD contractors are “tax-qualified” under the Internal Revenue Code. Tax qualification means the plans meet certain rules in the Internal Revenue Code and have certain tax advantages. Minimum funding rules—that is, rules governing the amount required to be held in the trust fund to finance future benefit payments—are contained in the Internal Revenue Code and mirrored in ERISA, and apply to private sector “tax-qualified” defined benefit plans. Note that sponsor contributions to these plans are not the benefit payments themselves, but contributions that go into a trust fund, grow with investment returns, and eventually are paid out as benefits at a later date.
These contributions are tax-deductible to the sponsoring company, investment returns on the trust fund are tax-deferred, and plan participants pay tax only as they receive benefits in retirement. DOD contractors also sponsor “nonqualified plans,” which do not meet the applicable requirements for tax-qualification under the Internal Revenue Code. Sponsors of these plans typically do not have to satisfy laws and regulations capping maximum benefits or requiring a minimum level of contributions to the plan. They also do not have to meet certain reporting, disclosure, bookkeeping, and core fiduciary requirements (see 26 U.S.C. § 409A). Nonqualified plans are typically designed for highly compensated employees or selected company executives. Nonqualified plans may be operated on a pay-as-you-go basis or may be prefunded. Nonqualified plans do not have a minimum ERISA contribution, and, for pay-as-you-go plans, the CAS pension cost will be the cost of the benefit payments to the participants. Defined contribution plans shift investment risk away from the employer and onto employees, meaning that these plans have much more predictable yearly costs for the employer. While defined contribution plans often have employer matches, which generally require annual contributions, we reported that from 1995 to 2002 very few sponsors of large defined benefit plans were required to make cash contributions to their plans. In terms of sponsor contributions, neither type of plan—defined benefit or defined contribution—is inherently more or less expensive to a plan sponsor, nor more or less generous to plan participants, than the other. Expense and generosity depend on the particular provisions of the plan, among other factors. Costs associated with defined contribution plans are typically more straightforward for sponsors to calculate and project than defined benefit plan costs.
Defined benefit plans require actuarial support and management of sponsor contributions and invested assets in order to fund liabilities. These considerations do not apply to sponsors of defined contribution plans. DOD contractors make two sets of calculations for each of their defined benefit pension plans, following two sets of standards. They calculate a CAS pension cost, which serves as the basis for determining what pension costs can be charged to contracts. This cost is allocated to contracts based on CAS rules. Like all plan sponsors, they also calculate the contribution they are required to make, or otherwise face penalties, under ERISA. Because the rules are different, the CAS pension cost is likely to be different from the required ERISA contribution. A contractor’s total pension cost may also include costs that are not allocated to DOD or other U.S. government contracts, but instead allocated to commercial activities. Several large DOD contractors have significant commercial operations. For example, less than 20 percent of United Technologies Corporation’s sales are to the U.S. government, and approximately half of Boeing’s sales come from its commercial aircraft business. The FAR requires that costs be allowable, allocable, and reasonable. When contract costs are established through negotiation, the CAS provides the framework contractors use to determine allocable costs. In particular, pension costs for DOD contracts are measured, assigned, and allocated to contracts according to rules in CAS 412 and 413. CAS rules are set by the CAS Board, part of the Office of Federal Procurement Policy within the Office of Management and Budget (OMB), which includes members from government and industry. CAS is designed to ensure uniformity across contractors in how they allocate costs on government contracts, linking the costs incurred on contracts to the benefits the government receives as a result of those costs.
CAS also provides a framework for assigning costs to discrete cost periods and aims to minimize the volatility of pension costs in the pricing of government contracts. In addition to using CAS rules to measure pension costs incurred in a given year, contractors also use CAS rules to determine expected future pension costs, called “forward pricing projections.” Contractors use these projections when they negotiate contracts covering multiple years. These contracts may be firm fixed price—with no adjustment to reflect actual costs under normal circumstances—or flexibly priced. Flexibly priced contracts provide for price adjustment. When a regulatory change occurs—such as a change in the CAS—both fixed and flexibly priced contracts may be eligible for adjustments (also known as equitable price adjustments) to reflect the impact of the change. CAS 412 provides guidance to contractors and the government on how to determine and measure the components of pension cost for defined benefit plans in a given year. For most defined benefit plans, the components include: (1) normal cost, the pension cost attributable to the employee’s work in the given year; and (2) other pension costs, which include an installment payment toward any shortfall in the assets required to pay for pension benefits attributable to past service (the shortfall is known as the unfunded liability). Both of these components reflect actuarial present values, today, of benefits projected to be paid in the future, and not the actual benefits being paid today to plan participants. Sources of any shortfall may include: differences between actuarial assumptions and actual experience, such as worse-than-expected asset performance in a given year (the difference is known as an actuarial loss); changes in actuarial assumptions that increase liabilities, such as projections of inflation, mortality, and retirement age; and changes in the rules used for benefit computation or other plan amendments that increase liabilities.
Plans with unfunded liabilities make installment payments to reduce these unfunded liabilities over a period of time that depends on the sources of the unfunded liabilities. If a plan has more assets than liabilities in a given year, then the normal cost is offset by the extra plan assets, and so the overall CAS cost to the government is reduced by the excess assets, and can even be reduced to zero. Plans might have more assets than liabilities if, for example, assets perform more strongly than expected (the difference is known as an actuarial gain), if changes in actuarial assumptions reduce liabilities, or if the plan sponsor reduces liabilities through plan amendments. Both actuarial gains and losses are incorporated into CAS pension cost in installments over a number of years. Supported by in-house or external actuaries, DOD contractors calculate their CAS pension costs at least annually, and produce CAS valuation reports for plans. The calculations provide the basis for projections of future CAS pension costs for use in forward pricing. These costs are then allocated to the various divisions of the contractor. Contractors use a range of methods, such as payroll dollars or number of active participants, to allocate CAS pension costs across divisions for application to contracts. Allocation methods are explained in required CAS disclosure statements, prepared at the corporate and division levels by contractors, and provided to DOD for review. At the division level, the combined cost of pension benefits and other employee benefit costs, including health benefits and group insurance, is frequently referred to as the fringe benefit cost. The fringe benefit cost is projected over one or more future years based on factors such as estimated labor costs and the expected amount of future business. Projected fringe benefit costs are then submitted to DCMA officials at the division level for review.
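The cost components and division allocation described above can be reduced to simple arithmetic. The sketch below is illustrative only: the function names, dollar figures, straight-line amortization, and payroll-based allocation are assumptions for illustration, not any contractor's actual CAS valuation method.

```python
# Illustrative sketch (figures in $ millions; all values hypothetical).

def cas_pension_cost(normal_cost, unfunded_liability, amortization_years):
    """Annual CAS cost = normal cost plus one installment of any unfunded
    liability; a surplus (negative unfunded liability) offsets normal cost,
    and the cost can be reduced to zero but not below."""
    installment = unfunded_liability / amortization_years
    return max(0.0, normal_cost + installment)

def allocate_by_payroll(total_cost, division_payrolls):
    """Spread the corporate-level CAS cost across divisions in proportion to
    each division's payroll (one of the allocation bases mentioned above)."""
    total_payroll = sum(division_payrolls.values())
    return {name: total_cost * payroll / total_payroll
            for name, payroll in division_payrolls.items()}

cost = cas_pension_cost(normal_cost=40.0, unfunded_liability=100.0,
                        amortization_years=10)
print(cost)                                         # normal cost plus installment
print(allocate_by_payroll(cost, {"Division A": 300.0, "Division B": 200.0}))
```

A plan with a surplus instead of a shortfall would pass a negative unfunded liability, which offsets the normal cost in the same calculation.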
While forward pricing projects future costs for use in contract pricing, contractors also develop annual proposals for incurred costs including CAS pension costs. These are actual costs incurred by the contractor, which may differ from the projected costs used in the forward pricing process. These proposals are submitted to DOD and used as the basis for negotiating settlement of any cost differences when closing out flexibly priced contracts. Congress enacted ERISA in 1974 to set certain protections for plan participants and minimum funding standards for pension plans sponsored by private employers. ERISA is designed to protect the interests of participants (and their beneficiaries). The administration of ERISA is divided among the Department of Labor, the Internal Revenue Service of the Department of the Treasury, and the Pension Benefit Guaranty Corporation (PBGC). According to PBGC, if sponsors are no longer able to fund or administer their plans, PBGC makes sure participants will get some or all of their promised benefits. The discount rate is a key part of determining both CAS pension costs and ERISA-required contributions. Pensions are promises to make a future stream of payments, and the discount rate determines the estimate of the present value of promises to pay a future benefit. As shown in figure 2, the higher the discount rate, the lower the liability today. Basic approaches to setting a plan’s discount rate include: (1) basing the discount rate on the expected long-term return on plan assets (which includes expected long-term stock market returns to the extent plan assets are so invested, and which, in recent years, often would produce discount rates between 7.0 and 8.0 percent), or (2) basing the discount rate on relevant interest rates in the bond market (which, in turn, could be based on either current market interest rates, or an historical average over some period, and which, in recent years, often would produce discount rates around 4.0 percent). 
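The effect of the discount rate can be illustrated with a simple present-value calculation. The benefit stream, horizon, and rates below are hypothetical; they only demonstrate why an expected-return rate near 7.5 percent produces a smaller liability today than a bond-based rate near 4.0 percent.

```python
# Hypothetical illustration of discounting a promised benefit stream.

def present_value(annual_payment, years, rate):
    """Present value of a level stream of annual payments discounted at `rate`."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# The same $1,000-per-year, 20-year promise, valued two ways:
liability_at_asset_return = present_value(1_000, 20, 0.075)  # expected-return basis
liability_at_bond_rate = present_value(1_000, 20, 0.040)     # corporate-bond basis

# The bond-rate liability is larger: a lower discount rate means a higher
# liability today, as figure 2 in the report illustrates.
print(round(liability_at_asset_return), round(liability_at_bond_rate))
```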
The first approach will generally produce a higher discount rate than the second. The Pension Protection Act of 2006 (PPA), effective 2008, changed ERISA, strengthening the minimum funding requirements for defined benefit plans (although its provisions were altered by subsequent legislation). These changes required sponsors to calculate their defined benefit plan liabilities using a measure of corporate bond interest rates instead of a discount rate based on the expected long-term rate of return on plan assets, a change that generally increased required contributions. In contrast, CAS rules continued to use the expected long-term rate of return assumption as the discount rate, which would typically be higher than corporate bond interest rates. PPA mandated certain changes to CAS pension rules while delaying implementation of the new ERISA funding rules for several large DOD contractors until 2011. The recent changes in the ERISA discount rate basis meant major differences in the methodology for CAS cost and ERISA contribution calculations, but CAS and ERISA rules were not fully aligned even before these changes. CAS pension cost rules were once similar to the rules for determining minimum ERISA contributions. However, as Congress amended ERISA over the years, the CAS Board did not make changes as frequently. For example, prior to PPA taking effect in 2008, ERISA rules imposed additional funding charges for underfunded plans, which were not accounted for by the CAS rules. Table 1 summarizes recent changes to discount rates used for CAS and ERISA calculations, including the most recent changes enacted in the 2012 Moving Ahead for Progress in the 21st Century Act (MAP-21). For a qualified pension cost to be recognized under CAS, a contractor must make a contribution in a given year to a plan’s trust fund.
However, the past divergence of CAS and ERISA approaches is a driver of contractors contributing more to their pension plans than has been recognized under CAS and reflected in contract prices. This has generated CAS prepayment credits. In the future, the contractor can apply its CAS prepayment credits in lieu of a cash contribution to the plan in a given year. The largest DOD contractors had at least $26.5 billion in CAS prepayment credits as of the beginning of 2011. Figure 3 provides a hypothetical example of how a prepayment credit is generated and discharged. Prepayment credits affect how contractors calculate their unfunded liabilities. When comparing assets to liabilities, prepayment credits are subtracted from CAS assets. This creates a higher unfunded liability and thus a higher CAS cost. PPA required the CAS Board to harmonize CAS to ERISA by January 1, 2010. The changes made by the CAS Board became effective in February 2012. However, the CAS Board did not make CAS rules exactly match ERISA, stating that this was not congressional intent, and recognizing that the two different systems have different goals. The CAS Board’s final rule phased in the liability calculated with the ERISA-based discount rate—from 25 percent in 2014 to 100 percent in 2017. This means that closer alignment between CAS pension costs and ERISA contributions will take several years. In addition to changes to the discount rate, the CAS Board also reduced the schedule of time to pay for actuarial losses (or get credit for actuarial gains) from 15 to 10 years, starting in 2013. This change aligns the CAS amortization schedule more closely with the 7-year amortization schedule mandated in the PPA. The CAS Board also designed harmonization so that if the cost calculation is lower under the new rules than the traditional rules, then the traditional rules would continue to apply. The most recent change to ERISA minimum contribution requirements can also affect CAS pension cost.
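The prepayment-credit subtraction and the phase-in of the ERISA-based liability can be sketched numerically. All figures are hypothetical, and the 50 and 75 percent weights assumed for 2015 and 2016 are an illustrative linear ramp between the 25 percent (2014) and 100 percent (2017) endpoints described above.

```python
# Hypothetical sketch of two mechanics described above (figures in $ millions).

def cas_unfunded_liability(market_assets, prepayment_credits, cas_liability):
    """Prepayment credits are subtracted from plan assets when measuring the
    CAS unfunded liability, so a credit increases the measured shortfall."""
    cas_assets = market_assets - prepayment_credits
    return cas_liability - cas_assets

# Phase-in weights for the ERISA-based liability; the 2015 and 2016 values are
# an assumed linear ramp between the stated 2014 and 2017 endpoints.
PHASE_IN = {2014: 0.25, 2015: 0.50, 2016: 0.75, 2017: 1.00}

def blended_liability(year, traditional, erisa_based):
    """Weighted blend of the traditional and ERISA-based liability measures."""
    weight = PHASE_IN.get(year, 1.0 if year > 2017 else 0.0)
    return (1 - weight) * traditional + weight * erisa_based

# A fully funded plan on a market-value basis shows a shortfall once a
# prepayment credit is subtracted from assets:
print(cas_unfunded_liability(100.0, 5.0, 100.0))
# In 2014, only a quarter of the (typically larger) ERISA-based figure counts:
print(blended_liability(2014, 80.0, 120.0))
```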
Congress effectively increased, on a temporary basis, the ERISA-mandated discount rate by applying a new methodology for calculating it via the MAP-21 legislation. Because the CAS harmonization rules say the ERISA discount rate is an automatically acceptable (“safe harbor”) rate, contractors that use the ERISA discount rate will see a matching change in their CAS discount rate. CAS rules (CAS 413) specify that the contractor and the government must “settle up” under certain circumstances. For example, a settlement would be triggered if a contractor curtails a plan, meaning that no new benefits can accrue for participants. This means that DOD and the contractors calculate whether the government has over- or underpaid for CAS pension costs over the years, with the balance being settled via payment by the government or the contractor. These CAS settlement rules use the traditional long-term rate of return discount rate, instead of the ERISA-based corporate bond interest rates. This means that the CAS liability for settling up would be similar to the old CAS liability and would not reflect changes from the harmonization rule. Like contractors, DOD centralizes its technical expertise for management and oversight of defined benefit pension plans. DOD negotiates CAS pension costs with contractors at the corporate level. Figure 4 illustrates the range of interactions and information flows between large DOD contractors and those parts of DOD involved in pension cost oversight. DOD oversight of CAS pension costs parallels the central management of these costs by the largest DOD contractors at the corporate level. The corporate-level DCMA contracting officer receives contractor submissions such as pension forward pricing and incurred pension cost proposals. 
The corporate-level contracting officer negotiates CAS pension costs and either comes to agreement with the contractor or recommends an amount of CAS pension cost that DCMA contracting officers at the division level can use in negotiations. To monitor possible cost changes during contract performance, DCMA requires establishment of cost monitoring programs at contractor locations that meet certain government contracting cost and sales criteria. As contractors bill the government after contracts are awarded, DCMA cost monitors at the corporate level compare proposed costs with actual costs incurred. If actual costs diverge from the proposed costs, the cost monitor may recommend that an agreement be modified or even cancelled. This can affect the cost of fixed-price contracts awarded in the future and existing flexibly priced contracts. The corporate-level contracting officer has two primary sources of technical expertise available to assist in determining that the contractor’s CAS pension costs meet CAS and FAR requirements that they be allowable, allocable, reasonable, and compliant: the DCMA CIPR Center and auditors from DCAA. The corporate-level contracting officer can use information from these two sources in negotiations with the contractor that result in either pension forward pricing agreements or recommendations. The CIPR Center represents a key element in DOD’s oversight process, giving recognition to the complexity and highly technical nature of defined benefit pension plans. As DOD’s centralized source of actuarial expertise, it advises DCMA contracting officers on pensions as well as insurance, including review of forward pricing proposals. The CIPR Center assesses the reasonableness of actuarial assumptions, including the discount rates used to calculate liabilities. It also provides an independent measurement for projected pension costs. 
To test a contractor’s estimate of pension costs for future years, the CIPR Center has developed a model that generates an independent projection of the contractor’s CAS pension costs, which, according to a CIPR Center official, is based on data and actuarial assumptions in CAS valuation reports. The CIPR Center compares the model’s output with the contractor’s proposal to evaluate whether the contractor’s projections are reasonable, and then issues a report that includes the CIPR Center’s methodology, calculations, and evaluation of reasonableness. For example, the CIPR Center recently issued a report noting that a contractor’s assumed rates of return used to project CAS pension costs were unreasonable. The CIPR Center is more likely to review proposed CAS pension costs annually for large contractors than for smaller contractors that also have significant defined benefit plans. From 2007 to 2011, the CIPR Center reviewed proposed pension costs for six of the nine largest contractors with defined benefit plans at least annually. Two other large contractors were reviewed in 4 of the 5 years from 2007 to 2011. The ninth large contractor, with relatively low CAS pension costs, had not been reviewed within the last 5 years. Four of the seven smaller contractors included in our review received CIPR Center reviews of proposed pension costs at least once between 2007 and 2011. Corporate-level contracting officers have the discretion to determine if the potential risk associated with CAS pension costs merits specialized review by the CIPR Center. One contracting officer at a smaller contractor noted that over recent years he had requested and received regular CIPR Center reviews of the contractor’s projected pension costs.
Another contracting officer at a contractor whose pension plans have not received a recent CIPR Center review noted that he relied primarily on DCAA audits for insight into CAS pension cost issues, unless there had been significant pension plan changes such as a curtailment of benefits. DCAA auditors at the contractor’s corporate office are responsible for reviewing other aspects of proposed pension forward pricing, such as previous CAS pension cost estimates to assess how close they were to actual CAS pension costs for those periods. DCAA employs technical specialists who provide auditors with additional support on pension issues. DCAA audits may question costs that they identify as not allowable, allocable or reasonable, which the contracting officer may incorporate into negotiations with the contractor. For example, DCAA audits have questioned costs in forward pricing proposals because estimated CAS pension costs were higher than the contractor’s historical cost trends or the calculation methods were not compliant with CAS. Corporate-level contracting officers rely primarily on DCAA, and to a lesser extent the CIPR Center, to review contractors’ annual proposals representing actual corporate-managed costs incurred in the previous year, including CAS pension costs. DCAA audits incurred CAS pension costs reported by the contractor to determine whether they are allowable, allocable, and reasonable, as well as compliant with CAS. According to a CIPR Center official, contracting officers may also request additional support from the CIPR Center to ensure information in the incurred cost proposal reflects what is in the corresponding CAS valuation reports. They usually respond to these requests for support in a less formal manner than is the case with proposed forward-pricing requests, generally not issuing detailed reports. 
As with the forward-pricing process, the contracting officer may use the information from DCAA audits and CIPR Center reviews, including any questioned costs, when negotiating final indirect costs with the contractor. Once established, these costs are allocated to the divisions to form the basis of adjustments to flexibly priced contracts that can then be closed out. Paralleling the contractor’s process, DCMA officials at the division level monitor the incorporation of allocated pension costs into fringe benefit costs. Fringe benefit costs can also include defined contribution plan costs. Contractor and DOD officials we spoke with noted that it could be challenging to fully determine CAS pension costs applied to or incurred on a specific contract. For example, some CAS pension costs are captured among other indirect costs (such as shared service or corporate office costs). DCAA is also responsible for reviewing the adequacy of contractors’ CAS disclosure statements at the corporate and division levels and determining their compliance with CAS and FAR. These statements contain information regarding how costs are allocated, and the corporate-level disclosure statement in particular contains many details about allocation of the contractor’s pension plans. In addition to overseeing CAS pension costs through the forward pricing and incurred cost processes, corporate-level contracting officers manage the process required by CAS for pension cost settlement when a contractor curtails a defined benefit pension plan. Curtailment under CAS means any situation where no new benefits can accrue for plan participants. When such a curtailment occurs, corporate-level contracting officers can receive assistance from the CIPR Center and DCAA to ensure that the related proposals submitted by the contractor are compliant with CAS.
When a contractor initiates a curtailment, it calculates the affected plan’s CAS pension costs to determine whether the plan is under- or overfunded and whether the government has over- or underpaid for CAS pension costs over the years. Based on the result of the calculation, one party may owe the other the balance of the difference in order to “settle up” the plan. The contractor submits a proposed settlement to the contracting officer, and the CIPR Center and DCAA provide support by reviewing the proposal in order to evaluate whether the calculations are correct and compliant with CAS. Both can issue reports which will help the contracting officer to negotiate a final settlement with the contractor. The resulting payment, whether from the contractor or the government, may either be immediately charged or, when the contractor has other government contracts, amortized as pension costs over future years. Settlements that resulted in potential payments to the contractor have resulted in litigation and long delays. According to DOD officials, three of the largest DOD contractors have pending settlements. Two of the smaller DOD contractors included in our review have settled cases within the last 4 years that resulted in payments to the government. DOD officials we met with noted that part of the reason for delayed settlements is the complicated nature of determining the appropriate government share of CAS pension costs, given that CAS rules on allocation of pension costs to contracts have changed over time. In response to court cases on the matter, DCAA and DCMA have issued joint guidance to address related issues. The FAR requires that total employee compensation, which includes many components such as salaries and bonuses, fringe benefits like retirement benefits and health insurance, and other nonwage compensation, must be reasonable in order to be claimed by the contractor as a contract cost. 
However, as part of assessing the reasonableness of total compensation, DOD’s oversight processes do not clearly assign responsibility for assessing the reasonableness of the value of pension benefits to plan participants, focusing instead on the reasonableness of actuarial assumptions or fringe benefits as a whole. Fringe benefits are examined as part of compensation reviews that DCAA auditors perform to determine reasonableness, often as part of incurred cost audits or reviews of compensation system internal controls. DCAA guidance for compensation reviews states that all cost components of employee compensation—including the value of fringe benefits, bonuses, and stock options as well as salary—are considered to be reasonable if they do not exceed the comparative value of those costs from market survey data by more than 10 percent. Defined benefit pensions are generally part of that fringe benefit cost component, along with other benefits such as health and life insurance. Only if these collectively exceed the reasonableness threshold is an auditor instructed to review the individual cost components, such as pensions. In instances where questions arise about the reasonableness of pension costs, the auditor is instructed to turn to the CIPR Center as a resource for pension-related matters. Several auditors and DCMA contracting officers we spoke with also noted that if they had questions regarding the reasonableness of defined benefit plans, they would seek assistance from groups such as the CIPR Center or a centralized DCAA team that specializes in compensation issues, particularly those related to executive compensation. Auditors are instructed to review fringe benefit costs as a whole when determining their reasonableness, but CAS costs for defined benefit pensions are an imperfect measure of the value of pension benefits participants earned in a year as part of their total compensation. Multiple factors drive CAS pension costs.
For example, the pension cost could be zero in a given year due to strong asset returns, and this pension cost would not capture any of the value of the benefits earned that year by employees. Conversely, the pension cost could be higher in a given year than the value of the benefits earned that year by employees as a result of actuarial losses. While they may be aware of the CAS costs of defined benefit pensions, auditors do not know the value of these benefits to an employee in a given year. They lack guidance on how to measure this value (containing, for example, acceptable methodologies, assumptions, or data sources), and therefore are unable to get a complete picture of the reasonableness of total compensation for contractor employees. Neither the CIPR Center nor DCAA’s compensation team currently assesses the reasonableness of benefits offered through defined benefit plans. While officials stated that the CIPR Center did perform reviews of employee benefit offerings more than a decade ago, to the extent that the CIPR Center does evaluate reasonableness today, it does so only in terms of the measurements and actuarial assumptions used by contractors to calculate their CAS pension costs. It does not consider the relative value of benefits offered. For non-executive employees, the DCAA compensation team only reviews the reasonableness of salaries for direct labor. In essence, DOD assesses whether the CAS cost is appropriate from a regulatory and actuarial standpoint. Whether the liability reflected in the CAS cost stems from a generous pension plan is not considered. GAO reviewed the most prevalent final average pay formulas among the contractors that have these plans and found that contractors offer a wide range of benefit formulas and plan designs. This means that employees’ defined benefits can differ greatly from contractor to contractor.
Plans offered by contractors include final average pay plans, which use a formula that considers a participant’s final average pay and years of service, as well as cash balance plans that use a hypothetical individual account to calculate benefits based on a percentage of a participant’s pay and a plan-specified rate of interest to be applied to a participant’s hypothetical account. The final average pay plans generally had a “base” accrual rate that granted between 1 percent and 2 percent of final average pay for each year of service with the company. For example, two employees may have the same final average pay of $50,000 and the same 30 years of service. However, the employee with the “base” accrual rate of 2 percent would have an annual base benefit of $30,000 in retirement, whereas the employee with the “base” accrual rate of 1 percent would have an annual base benefit of $15,000. In addition, these plans had a variety of features which affect a participant’s retirement benefit. For example, some plan formulas have the effect of reducing the base benefit by taking into account Social Security benefits to be received in the future. We noted other plan features, such as the presence or absence of a cost of living adjustment, which annually increases the benefit in retirement by a measure of inflation. Thus there was wide variation of plan designs across contractors and in the potential value of benefits to participants in different plans. However, neither DCAA corporate-level officials, the CIPR Center, nor the DCAA compensation team assessed the reasonableness of individual plans. “Base” accrual rate refers to the fact that the accrual rate may be different for certain years of service; we use the term base to refer to the earliest years of service. Actual benefits could be reduced by taking into account Social Security benefits as well as for early retirement.
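The accrual-rate arithmetic above, together with a cash balance variant, can be written out as short functions. The final average pay figures come from the example in the text; the cash balance parameters (a 5 percent pay credit and a 4 percent interest credit) are hypothetical values we chose for illustration, not terms of any particular contractor’s plan.

```python
def final_average_pay_benefit(final_avg_pay, years_of_service, base_accrual_rate):
    """Annual benefit under a simple final-average-pay formula:
    accrual rate x years of service x final average pay."""
    return base_accrual_rate * years_of_service * final_avg_pay

# Same $50,000 final average pay and 30 years of service; only the
# base accrual rate differs.
print(round(final_average_pay_benefit(50_000, 30, 0.02)))  # 30000
print(round(final_average_pay_benefit(50_000, 30, 0.01)))  # 15000

def cash_balance_account(annual_pay, years, pay_credit_rate, interest_rate):
    """Hypothetical-account balance under a simple cash balance design:
    each year the account receives a percentage of pay plus a
    plan-specified interest credit on the prior balance."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + interest_rate) + annual_pay * pay_credit_rate
    return balance

# Illustrative parameters only: 5% pay credit, 4% interest credit.
print(round(cash_balance_account(50_000, 30, 0.05, 0.04)))
```

Offsets noted in the text, such as Social Security integration or early retirement reductions, would reduce the final-average-pay benefit below these base amounts.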
DCAA is responsible for reviewing executive compensation packages separately from compensation offered to other employees in order to evaluate whether these packages meet the FAR standard for reasonableness and do not exceed the dollar limitation specified in the FAR. However, defined benefit pension plans for contractor executives are not required to be included in these assessments. Executive compensation reviews are usually done as part of incurred cost audits, although they can also be performed on audits of forward pricing proposals. DCAA auditors at contractor corporate offices have access to the DCAA compensation team for assistance with such reviews. While this team has developed a methodology for determining executive compensation reasonableness, it does not require examination of defined benefit pensions in the determinations, similar to its approach to pension plans in general. Compensation team officials told us they analyze the total cost of fringe benefits, and only look at individual benefits such as pensions if they deem the total fringe benefit cost to exceed that indicated by market survey data by more than 10 percent. In addition, the defined benefit components of the market surveys used by the team do not specify the use of CAS for their calculations, and thus may not be directly comparable to CAS-based pension cost. Compensation team officials noted that the most recent survey they use for this purpose was issued in 2008, and only included self-reported pension cost. Executive compensation reviews we analyzed that addressed the reasonableness of total compensation and fringe benefits did not discuss the details of defined benefit pension plans. To the extent that the compensation team does look specifically at defined benefit pensions, team officials told us that they evaluate the relative CAS cost of the pension. 
They do not examine the source of this cost, and therefore cannot identify whether, for example, a high relative CAS pension cost was largely driven by the generosity of pension plans or weak asset performance. The FAR also contains a dollar limitation on the allowable annual compensation for certain contractor personnel, currently set at $763,029. The FAR describes the elements of executive compensation that should be considered against this limit. These include salary, bonuses, deferred compensation other than pensions, and employer contributions to defined contribution pension plans. However, the FAR does not include defined benefit pension plans as an element of compensation that should be considered against the limit. Accurately applying the cost of a defined benefit pension to an individual employee’s total compensation package is challenging due to the complexity and annual volatility of costs, even if the value of the ultimate benefit does not change. DCAA compensation team officials noted that it is not clear how costs of a defined benefit plan should be evaluated. In addition, they lack current market survey data for defined benefit plans, and team officials noted that companies participating in these surveys do not consistently calculate and report their compensation costs. Nearly all of the largest DOD contractors—as well as the peer group of companies we examined—maintain some sort of tax-qualified, defined benefit pension plan for their employees. As noted previously, the benefit designs of these plans can differ greatly, and we found variations among certain contractors’ final average pay plans. However, we were unable to compare the full range of plan designs across both contractors and their peer group. More generally, all of the largest contractors with defined benefit plans—and the majority of their peer group—have frozen at least one of their plans in some way.
A plan freeze is a plan amendment that closes the plan to new entrants and may limit future benefit accruals for some or all employees that are active in the plan. Under a freeze, the plan continues to be maintained by the sponsor. Specifically, a majority of the largest contractors and their peer group have “soft frozen” plans, that is, they have closed at least one of their plans to new entrants while allowing existing participants to continue to accrue benefits. Many of the contractors’ largest defined benefit plans were frozen in some way. Some DOD contractors reported that when they froze their defined benefit plans they had either established a new defined contribution plan or changed the terms of an existing defined contribution plan for those employees who were no longer eligible to accrue benefits in a defined benefit plan. For example, one DOD contractor noted that employees not eligible for a defined benefit plan may receive a matching contribution under a defined contribution plan, whereas employees who are eligible for a defined benefit plan would not be eligible for such a match. In the short term, transitioning new employees to defined contribution plans may raise total costs, since defined benefit plans generally are least expensive for young and new participants. Defined benefit plans that remained open to new participants often included collectively bargained participants, and all but one of the largest DOD contractors had at least one plan that remained open to new participants. Open plans with collectively bargained participants were generally among the contractors’ smaller plans. Further, some DOD contractors said that they intended to close all of their defined benefit plans to new entrants and, subject to negotiation, they also expected plans with collectively bargained participants to be closed to new entrants in the future.
For example, one DOD contractor noted that a number of its open plans were already “partially frozen,” or open only for certain bargaining units, while some, but not all, bargaining units had agreed to close the plan to new entrants going forward. Generally, the number of private-sector companies sponsoring defined benefit plans has declined substantially over the last 20 years or so. A prior GAO survey of 94 of the largest firms sponsoring defined benefit plans showed that many firms made revisions to their plan offerings over the last 10 years. For example, large sponsors have changed benefit formulas, converted to hybrid plans, or frozen some defined benefit plans. Moreover, in another GAO survey among a broader population of sponsors that included all plan sponsors with 100 or more total participants, 51 percent of those sponsors had one or more frozen defined benefit plans. A 2011 Aon Hewitt study of Fortune 500 companies found largely similar results over time. For example, the study noted that 80 percent of Fortune 500 companies sponsored an open, defined benefit plan for salaried employees in 1995. However, as of 2011, only 31 percent sponsored an open, defined benefit plan. Other large DOD contractors had hard frozen smaller plans but not plans that were among the contractors’ largest plans (i.e., those that together covered at least 90 percent of each contractor’s pension liabilities). DOD contractors reported that these plans were “legacy” plans which had been replaced by another plan, or plans that were hard frozen prior to the contractor’s acquisition of the business divisions with those plans. For example, according to a representative of one large contractor, one of the contractor’s plans that was settled in the late 1990s was determined to be overfunded on a CAS basis, but underfunded on an ERISA basis. This meant that the contractor owed the government money for settlement despite the fact that the plan was underfunded on an ongoing ERISA basis. However, the settlement-related challenges may not be the sole reason that a DOD contractor would avoid instituting a hard freeze. Indeed, one DOD contractor noted that instituting a hard freeze could damage employee relations and that, in general, it is easier to justify to employees the closure of plans to new entrants. Other DOD contractors told us they continually evaluate their pension offerings against those of peers, and the competitiveness of their plans compared to those of peers is a driver of pension management decisions. A few DOD contractors noted that they want to provide pension plans that allow them to attract skilled employees, while remaining cost-competitive. Nearly all of the largest DOD contractors and their peer group offer nonqualified defined benefit plans in addition to their tax-qualified defined benefit plans. In fact, all but one DOD contractor and one peer we reviewed maintained at least one nonqualified defined benefit plan. While the provisions of each nonqualified plan vary, in general, the most prevalent type that we found were “restoration” (or “excess benefit”) plans. These are plans that typically extend the benefits provided by a tax-qualified defined benefit plan by supplementing the portion of benefits that are in excess of limits prescribed by the Internal Revenue Code. For example, one contractor noted that its restoration plans could include certain highly-paid engineers. Some types of nonqualified plans we reviewed appeared to be offered only to certain senior executives. Our review of the financial reports of the largest DOD contractors and their peer group shows that the DOD contractors invest in similar types of assets relative to their peer group. However, DOD contractors and their peer group employed a wide range of pension investment allocations between equities and fixed-income assets.
For example, DOD contractors allocated as much as 64 percent or as little as 26 percent of pension investments to equity assets (i.e., stocks), while their peer group allocated as much as 74 percent or as little as 26 percent of pension investments to such assets. Similarly, DOD contractors allocated as much as 46 percent or as little as 32 percent of pension investments to fixed-income assets (i.e., bonds), while their peer group companies allocated as much as 51 percent or as little as 25 percent of pension investments to such assets. The DOD contractors’ pension investment allocations appear to be somewhat more conservative than those of their peer group when analyzed in the aggregate. Aggregating the year-end 2011 pension investment allocations of the DOD contractors and their peer group shows that contractors have allocated about 7 percentage points more of their investments to generally conservative assets, namely cash and fixed-income assets, than is the case with their peer group, as illustrated in figure 5. This means that, in the aggregate, the DOD contractors have a lower percentage of pension investments allocated to equities and “other” assets compared to their peer group. Equities and “other” assets, such as private equity, hedge funds, real estate, and commodities, are generally considered to be riskier than cash and fixed-income assets. CAS pension costs for the largest DOD contractors grew considerably over the last decade. Costs went from less than $420 million in 2002 (when most contractors reported at least one plan with zero costs, after a period when some plans were fully funded) to almost $5 billion in 2011. While growth in total CAS pension costs was relatively small and gradual until 2008, as shown in figure 6, costs jumped by almost $1.5 billion from 2008 to 2009. They increased almost 90 percent in nominal dollars from 2008 to 2011, a substantial share of which was allocable to DOD contracts.
CAS pension costs are likely spread over thousands of contracts. All five weapon systems we analyzed showed an increase in defined benefit pension cost relative to labor cost from 2005 to 2011, as illustrated in figure 7. For the five weapon systems programs, CAS pension costs as a percentage of direct labor showed the most growth from 2008 to 2009, corresponding to trends seen in aggregate costs across the largest DOD contractors. As these costs increased, contractors took several actions to control them. For example, as previously discussed, contractors were closing a number of defined benefit plans to new entrants and several adjusted benefit formulas. CAS pension costs have also grown relative to total contract cost for the selected weapon systems programs. As shown in figure 8, average pension costs never exceeded 3 percent in any year—although this is still a significant dollar amount on large weapon systems contracts. Until 2009, average pension costs never exceeded 1 percent. However, note that this figure understates the impact of pension costs on programs since material costs—including the complex subsystems and components bought from subcontractors—may also include pension costs. Material costs for the systems we reviewed were as much as 81 percent of total program costs. Across this period, the trend for defined contribution plans differed. Defined contribution costs as a percentage of direct labor on the selected programs grew only slightly, and remained much steadier than the CAS pension costs for defined benefit plans. In 2005, defined contribution costs ranged between 0 and 6.9 percent for the five programs we examined. In 2011, the range was 0.6 percent to 7.0 percent. Defined contribution plan costs will generally be higher than defined benefit plan costs when defined benefit plan assets perform well, and gains offset a plan’s normal cost. 
Defined benefit plans will likely cost more than defined contribution plans when assets perform poorly, as the employer bears the investment risk. As demonstrated, defined contribution plan costs are generally more stable than defined benefit plan costs. On a CAS basis (excluding prepayment credits), contractors’ plan assets at the beginning of 2011 were approximately $15.1 billion less than would be needed to pay their pension liabilities. This gap, known as the unfunded liability, is largely a result of losses incurred during the market downturn in 2008 and 2009; much of this unfunded liability is attributed to losses from just those 2 years. The remainder of the unfunded liability came from other sources, such as changes in the contractors’ actuarial assumptions, other investment losses, and plan amendments (e.g., changes in rules for benefits computation). Both contractors and DOD officials expect CAS pension costs to increase as discount rates used for CAS calculations fall to match the rates used for ERISA funding calculations. Indeed, in their 5-year pension cost forward pricing projections issued immediately following harmonization, large DOD contractors had estimated that CAS discount rates would fall by between 2.2 and 4.1 percentage points in 2014, depending on the demographics of the plan. This drop would, in turn, increase costs because decreases in the discount rate raise pension liabilities and the normal cost. Increases in unfunded liabilities also increase CAS pension costs because of the need to pay down those unfunded liabilities in installments. Harmonization ties the CAS discount rate to ERISA rules, making it harder to project future CAS pension costs. On July 6, 2012, a few months after harmonization went into effect, Congress enacted MAP-21, which changed the methodology for calculating ERISA discount rates. Before MAP-21, ERISA discount rates were based on a 2-year average of corporate bond interest rates.
Now, this 2-year average is bounded by a 25-year average of corporate bond interest rates, and as a result, contractors now project their CAS discount rates will drop only 1.5 to 3.1 percentage points, starting in 2014, to harmonize with ERISA. However, the effects of MAP-21’s ERISA funding relief are expected to have the greatest impact in the near term and to diminish after 2015. Therefore, contractors still expect their CAS discount rates to be 2.0 to 4.0 percentage points lower in 2016 than their pre-harmonization 2012 CAS discount rates. Costs under the new, harmonized CAS pension rules can vary dramatically based on small changes in the corporate bond interest rates used to discount liabilities. We modeled an illustrative pension plan’s CAS pension costs from 2014 to 2017, the period over which the new CAS discount rate rules will be phased in. In our model, a 1.0 percentage point decrease in the discount rate (as determined by a measure of corporate bond interest rates) could increase CAS pension costs by 35 percent once the rule is fully implemented, and a 2.0 percentage point decrease could almost double CAS pension costs, as shown in figure 11. Furthermore, changes in this rate can have a greater effect on CAS pension costs than similar changes in plan asset returns. Under certain scenarios, CAS pension costs could begin to decline back to previous levels over the next decade, but the outcome is sensitive to what actually happens in the economy. For example, as shown in figure 12, projected CAS pension costs would begin to decline by the end of the decade and approach what they would have been under pre-harmonization CAS rules if discount rates rise to 6.5 percent by 2017 and stabilize at that level. However, if corporate bond interest rates do not stabilize and instead start to fall again after 2019, the discount rate would fall as well and CAS pension costs would then continue to rise.
This example does not account for any asset gains or losses, which could further raise or lower CAS pension costs. As noted earlier, after harmonization went into effect in February 2012, the largest DOD contractors submitted new pension forward pricing proposals to DOD and projected significant rises in CAS pension costs by 2016. Overall, these updated projections showed large increases in CAS pension costs when compared to the pre-harmonization projections for the 2012 to 2016 period. Most contractors’ projections for 2012 and 2013 showed little or no change, but all contractors projected increases from 2014 through 2016 as harmonization takes effect. After excluding the impact of changes such as changes to plan benefits to isolate the effects of harmonization, these CAS pension cost increases for individual contractors ranged from 10 percent to 55 percent for 2014, relative to their proposals that do not reflect the impact of harmonization. All of the large DOD contractors that submitted an updated pension forward pricing proposal after the enactment of MAP-21 still showed an increase in projected CAS pension costs, despite the temporary relief from ERISA funding requirements provided by the law. While MAP-21 dampened the initial projected effect of harmonization, a few large DOD contractors noted that the impact of MAP-21 is likely to be temporary and that its long-term effect on discount rates and future CAS pension costs remains unknown. After taking into account MAP-21, projected CAS pension cost increases for individual contractors ranged from 7 percent to 37 percent for 2014, due solely to harmonization, relative to their proposals that do not reflect the impact of harmonization. In aggregate, that represents a projected increase for 2014 of nearly $1.2 billion across the six contractors that submitted forward pricing proposals reflecting MAP-21. By contrast, the increase projected by those contractors in proposals prior to MAP-21 was almost $2 billion.
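The discount rate sensitivity discussed above follows from basic present-value arithmetic, which can be illustrated with a simple sketch. The benefit stream ($100 per year for 25 years) and the rates (6.5 percent, echoing the figure 12 scenario, versus a rate 2 percentage points lower) are hypothetical inputs of our own; this is not the model used to produce the figures cited in the text.

```python
# Illustrative only: hypothetical benefit stream and discount rates,
# not the model behind the report's figures.

def present_value(annual_payment, years, discount_rate):
    """Present value of a level payment stream, first payment in one year."""
    return sum(annual_payment / (1 + discount_rate) ** t
               for t in range(1, years + 1))

payment, horizon = 100.0, 25  # $100 of benefits per year for 25 years

pv_high = present_value(payment, horizon, 0.065)  # higher, return-style rate
pv_low = present_value(payment, horizon, 0.045)   # lower, bond-based rate

# A 2-percentage-point drop in the discount rate raises the measured
# liability by roughly a fifth in this example; because plan assets are
# unchanged, the unfunded liability (and the installments that pay it
# down) grows far more than proportionally, which is how modest rate
# moves can translate into large swings in CAS pension cost.
print(round(pv_high), round(pv_low), round(pv_low / pv_high - 1, 3))
```

The same mechanics explain why forward pricing becomes harder under harmonization: small differences in the projected bond-based rate compound into large differences in projected cost.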
CAS pension costs for defined benefit plans at the divisions we reviewed are expected to rise as a percentage of direct labor costs. At all five divisions, post-harmonization projections that were the basis of negotiations for most of 2012 showed a rise in CAS defined benefit pension costs as a percentage of projected direct labor costs of between 8 and 21 percentage points from 2012 to 2016. For those divisions, defined contribution costs stayed largely stable across the same period. CAS rules are intended to provide consistent cost data for forward pricing of government contracts that are performed over multiple years. However, harmonization tied CAS discount rates to the more volatile ERISA-based discount rate, which can make CAS less consistent as a standard for generating pricing projections. DOD issued limited guidance to its acquisition organizations in March 2012 on projecting ERISA-based discount rates for CAS calculations, which indicates that contractors should increase their current ERISA-based rates for forward pricing to approach a 4- to 6-year historical average rate. The guidance is not clear on the source for these rates or how quickly they should rise to historical averages. This lack of clarity can lead to great variation among the forward pricing rates of contractors, even if they have similar participant demographics, because small changes in the projected discount rate can create large changes in projected CAS pension cost. Additionally, DOD indicated that in its final guidance, yet to be issued, forward discount rates would approach average rates drawn from 15 to 20 years of historical data. Rates based on long-term averages would ensure more consistency in pricing because these rates would change less year-to-year than rates based on short-term historical averages.
In the near term, rates for forward pricing based on a long-term historical average would also very likely increase the contractors’ discount rates, reducing CAS pension costs. This final guidance may provide greater clarity about discount rates contractors could use to calculate pension costs for forward pricing purposes. However, in the absence of this guidance, there is likely to be a broad range of discount rates in use and thus large variation in forward pricing rates, even if contractors have similar participant demographics. Since harmonization was a mandatory regulatory change, contractors can ask for a contract adjustment to reflect the cost impact of the change. Although a general procedure exists that contractors can follow to seek any kind of adjustment, a March 2012 DOD memorandum stated that contracts would be eligible for adjustment if they were signed prior to February 27, 2012, and if their period of performance continues into 2014 or later, when use of the ERISA-based discount rate begins to phase in. The memorandum indicated that DOD would eventually release more guidance on the matter but did not specify a timeline for completing the negotiation of contract adjustments. As of November 2012, DOD had not yet issued additional guidance. The amount of additional CAS pension cost for DOD due to harmonization adjustments will depend, in part, on the number of contracts submitted by contractors for consideration, and this is yet to be determined. Some contractors said that a number of their contracts may be complete or no longer incurring costs by the time harmonization noticeably increases CAS pension costs. We reviewed four programs that have production or construction contracts that were both awarded before February 2012 and for which deliveries are scheduled in 2014 or beyond.
These include large platforms with small quantities, such as Wideband Global SATCOM and Virginia Class Submarine; because satellites and submarines take several years to build, all the units on these contracts will be delivered in 2014 or later (as late as 2018 for the submarine). Therefore, substantial costs could be incurred in 2014 and beyond. In contrast, the bulk of costs on relevant contracts for F-35 Joint Strike Fighter aircraft and Tactical Tomahawk missiles will be incurred before 2014, when the main provisions of harmonization take effect. Over 80 percent of F-35 deliveries, and almost 60 percent of Tactical Tomahawk deliveries, are scheduled to take place before 2014. Several contractors stated that they were waiting for DOD to issue additional guidance before submitting their requests for adjustment, and one contractor commented that it could be beneficial for DOD to wait for interest rates to rise, as that could negate the effect of harmonization and the need for adjustment on some contracts. The CAS Board did not harmonize the discount rates used for settling up if a contractor curtails a pension plan, meaning that liabilities could be calculated differently under ERISA and CAS rules if a contractor terminates a plan or freezes new benefit accruals for all participants. In such an event, the liability would be calculated using the old (likely higher) assumed long-term rate of return, instead of the new (likely lower) corporate bond interest rates. In the current environment, that would make the measurement of liabilities lower for a plan being curtailed than would be the case if the plan continued with new benefits accruing. According to CAS Board officials, the Board intends to begin a case on CAS 413 in the near future, although a schedule for such rule-making has not been created. The process of changing CAS rules can be time-consuming.
For example, while PPA, enacted in 2006, established a deadline for harmonization of January 1, 2010, the final ruling was not issued until December 2011, and not effective until February 2012. DOD faces new challenges as a result of changes to rules governing contractor pension costs and the growth in these costs, especially since the market downturn that started in 2008. The regulatory structure for government contracting generally allows contractors to receive payment for normal business costs incurred while working on government contracts, including employees’ salaries and benefits such as pensions. DOD recognizes that understanding and overseeing pension costs requires highly specialized expertise, and has therefore centralized its pension oversight functions. However, while DOD processes ensure that contractors’ CAS pension costs have been calculated correctly and that actuarial assumptions are reasonable, these processes do not assign responsibility for reviewing and valuing the benefits that participants will receive. Additionally, CAS pension cost is an imperfect measure of the value of pension benefits participants earned in a given year. As a result, DOD has an incomplete picture of the reasonableness of the total compensation offered by contractors. Further, DOD’s assessment of executive compensation does not require inclusion of defined benefit pensions, and the assessment that does take place does not consider the value of benefits earned by participants. This could hamper DOD’s efforts to ensure the reasonableness of the total compensation offered to contractor executives. CAS pension costs associated with defined benefit plans have grown substantially over the past decade, and can be expected to grow larger and more volatile with the harmonization of CAS to ERISA. We found that in this environment, DOD contractors, like their peer group, have limited employee entry to defined benefit plans. 
Defined benefit pension costs are highly sensitive to economic assumptions, and even a small change in conditions can have significant consequences. Increased volatility due to harmonization challenges the consistency of contract forward pricing. Under the previous rules, CAS discount rates were more stable and predictable, and therefore effective for consistent forward pricing. DOD has recognized the desirability of using long-term average rates in CAS calculations in order to smooth the impact of pension cost swings over time, and the need to provide more guidance to its acquisition organizations on the discount rates contractors should use. While DOD has stated that this guidance would be forthcoming, details are yet to emerge, and the longer it takes to issue the guidance, the longer DOD is likely to see a broad range of discount rates and large variation in forward pricing rates. Further, while harmonization changed how contractors will calculate their CAS pension costs, it did not update CAS 413 to harmonize the discount rates used for settling up in the event of a plan curtailment. The current interest rate environment means that a plan being curtailed would have significantly lower liabilities than if it had continued accruing new benefits, complicating settlements between contractors and the government. 
We recommend that the Secretary of Defense take the following four actions:

- Assign responsibility for oversight of the reasonableness of pension plans offered by contractors, specifically the value of benefits earned by participants;
- Provide guidance on how to measure the value of pension benefits that participants earn in a given year to get a complete picture of total compensation for contractor employees;
- Provide guidance on the extent to which defined benefit plans should be included in assessments of the reasonableness of executive compensation packages; and
- Provide specific guidance to acquisition organizations, including DCMA and DCAA, on the discount rate or rates that would be acceptable for contractors to use in calculating pension costs for forward pricing purposes.

In order to better align with the harmonized CAS 412, we recommend that the CAS Board set a schedule for revising the parts of CAS 413 dealing with settlement of pension plan curtailments. We provided a draft of this report to DOD, OMB, PBGC, the Department of the Treasury, and the 10 large DOD contractors covered by our review. We received formal written comments from DOD. DOD agreed with all four recommendations made to the Secretary of Defense. DOD also provided technical comments which were incorporated as appropriate. DOD comments are reproduced in appendix II. OMB provided comments stating that the CAS Board, when it meets, will consider a schedule for a case to revise the parts of CAS 412 and CAS 413 relating to defined benefit plan segment closings and curtailments. OMB also offered technical comments which were incorporated as appropriate. We received comments from six contractors, who said that the report captures the complexities involved in determining pension costs. Four contractors indicated that they had no comments. Contractors also offered technical comments which were incorporated as appropriate.
Both the Department of the Treasury and PBGC provided technical comments which were incorporated as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Acting Director, Office of Management and Budget; and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact Cristina Chaplain at (202) 512-4841 or chaplainc@gao.gov, or Charles Jeszeck at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are found in appendix III. Our objectives were to assess how (1) contractor pension costs are determined; (2) the Department of Defense (DOD) ensures the contractor pension costs it pays are appropriate; (3) DOD contractors’ defined benefit pension plans compare with plans sponsored by similar companies that are not among the largest DOD contractors; (4) pension costs have affected DOD contract costs and the factors that contributed to these pension costs; and (5) the December 2011 harmonization of Cost Accounting Standards (CAS) with the Employee Retirement Income Security Act of 1974 (ERISA) will affect the amounts DOD will pay in pension costs in coming years. To conduct analysis across all objectives, we analyzed defined benefit pension plans for the 10 largest contractors based on DOD contract obligations for fiscal year 2011. Those contractors were:

- BAE Systems plc
- The Boeing Company
- General Dynamics Corporation
- L-3 Communications Holdings, Inc.
- Lockheed Martin Corporation
- Northrop Grumman Corporation
- Oshkosh Corporation
- Raytheon Company
- SAIC, Inc.
and United Technologies Corporation. For these contractors (with the exception of one that does not offer defined benefit plans), we selected for deeper analysis defined benefit plans that together covered at least 90 percent of each contractor’s pension liabilities (35 plans in total). At the 10 largest contractors, we interviewed officials whose responsibilities included benefits management and government accounting, as well as a number of actuaries supporting those contractors. We also interviewed Defense Contract Management Agency (DCMA) and Defense Contract Audit Agency (DCAA) officials with responsibilities covering contractor costs at headquarters, and at a number of specialized centers such as the DCMA Contractor Insurance/Pension Review (CIPR) Center and Contract Disputes Resolution Center, and the DCAA Compensation Team. We also interviewed DOD officials with cognizance for negotiation and oversight of pension costs at the corporate level for each of the 10 selected large contractors, including the DCMA Corporate Administrative Contracting Officer (CACO), and DCAA officials including regional audit managers, resident auditors, and pension technical specialists. We interviewed a representative of the American Academy of Actuaries, and also met with representatives of the Pension Benefit Guaranty Corporation (PBGC) and the Department of the Treasury. We reviewed various federal laws (e.g., the Pension Protection Act of 2006). We also reviewed key rules and regulations, such as relevant sections of the Federal Acquisition Regulation (FAR) (e.g., FAR section 31.205-6, Compensation for Personal Services), the Defense Federal Acquisition Regulation Supplement (DFARS) (e.g., DFARS Subpart 242.73, Contractor Insurance/Pension Review), and CAS (e.g., CAS 412, Cost Accounting Standard for Composition and Measurement of Pension Cost, and CAS 413, Adjustment and Allocation of Pension Cost).
We reviewed DCMA documentation including guidance on forward pricing rates and final overhead rates, and reports written by the DCMA CIPR Center. We reviewed DCAA documentation such as relevant sections of the DCAA Contract Audit Manual (e.g., Chapter 8, Cost Accounting Standards), and audit reports that address contractor pension costs. We also reviewed prior GAO work concerning pensions. Further, to understand how DOD oversees pension costs at smaller contractors, we selected publicly traded contractors that: fell between the 11th and 50th places in terms of DOD contract obligations for fiscal year 2011; had total defined benefit pension plan assets of at least $1 billion; and had fiscal year 2011 DOD contract obligations representing at least 4 percent of total 2011 net sales. The following seven contractors met these criteria: Alliant Techsystems Inc.; Computer Sciences Corporation; Honeywell International Inc.; ITT Exelis; Navistar International Corporation; Rockwell Collins, Inc.; and Textron Inc. At the seven smaller contractors we interviewed officials with pension management responsibilities. We also interviewed corporate-level DCMA officials with cognizance for the seven smaller contractors, and where available collected recent DCAA audit reports and CIPR Center reports related to pensions at those contractors. To compare the defined benefit pension plans of large DOD contractors to those sponsored by similar companies, we selected a peer group of 15 companies not among the largest DOD contractors based on analysis of contractor audited financial statements. Many of the contractors list a peer group they use to benchmark executive compensation in their financial statements. These peer companies may be selected for general comparability in terms of company size, industry, or operations as well as their overall competitiveness with respect to similar employee skill sets and talent.
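The three selection criteria above amount to a simple multi-condition screen. The sketch below illustrates it with hypothetical company records; the names and figures are invented for illustration and are not actual contractor data.

```python
# Hypothetical screen for the smaller-contractor selection criteria:
# ranked 11th-50th by DOD obligations, at least $1 billion in defined
# benefit plan assets, and DOD obligations of at least 4 percent of
# net sales. All company records here are illustrative.

def meets_criteria(company):
    return (
        11 <= company["dod_rank"] <= 50
        and company["db_plan_assets"] >= 1_000_000_000
        and company["dod_obligations"] >= 0.04 * company["net_sales"]
    )

candidates = [
    {"name": "Contractor A", "dod_rank": 15,
     "db_plan_assets": 2_500_000_000,
     "dod_obligations": 900_000_000, "net_sales": 12_000_000_000},
    {"name": "Contractor B", "dod_rank": 40,
     "db_plan_assets": 800_000_000,  # fails the $1 billion asset floor
     "dod_obligations": 600_000_000, "net_sales": 5_000_000_000},
]

selected = [c["name"] for c in candidates if meets_criteria(c)]
print(selected)  # only Contractor A passes all three screens
```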
Eight of the largest DOD contractors publish lists of their peers and we selected the 15 most prevalent companies (not including the DOD contractors themselves) that appeared across all eight lists. For both the contractors and the peer group, we analyzed annual reports and proxy statements for fiscal year 2011 to identify the status of pension plans and understand how pension plan assets are allocated. To identify trends in CAS pension costs, for the nine largest contractors with defined benefit plans we reviewed pension plan documents such as CAS valuation reports (generally certified by qualified and credentialed actuaries), summary plan descriptions, and CAS disclosure statements. We collected contractor data on incurred CAS pension costs from 2002 to 2011. Our analysis of CAS valuation reports identified sources of current unfunded liabilities and CAS pension cost, as well as the difference between ERISA-required contributions and what the contractors have calculated as CAS pension cost. Note that for one large contractor, we excluded most pension data associated with a business that was recently spun off in a transaction that included parts of several defined benefit pension plans. This was done to make the contractor’s past and projected pension cost data more comparable. CAS pension costs provided by contractors may or may not reflect their PBGC premiums. Where we were able to identify the premiums separately from other pension costs, their relative size was insignificant. To understand how pension costs make their way onto DOD contracts, we selected divisions at the five largest contractors based on DOD contract obligations for fiscal year 2011, and at each division selected a weapon system program, which together represent a mix of military services and platform types. This selection was a nonprobability sample, and the findings from these programs are not generalizable to all programs.
Those divisions and programs were: Boeing Space and Intelligence Systems—Wideband Global SATCOM; General Dynamics Electric Boat—SSN 774 Virginia Class Submarine; Lockheed Martin Aeronautics—F-35 Joint Strike Fighter; Northrop Grumman Electronic Systems—AN/PED-1 Lightweight; and Raytheon Missile Systems—Tactical Tomahawk R/UGM-109E. At the divisions, we interviewed contractor officials whose responsibilities included contracting and development of forward pricing rates. We were also briefed on how pension costs are incorporated into rates at each division. We interviewed DOD officials with cognizance at the division level for the five selected divisions, including the DCMA Divisional Administrative Contracting Officer (DACO) and local auditors. For the five divisions, where available we collected contractor data on each division’s incurred pension costs from 2005 to 2011, and within each division, the individual programs’ incurred costs from 2005 to 2011. This period represents years for which data was generally available across selected programs. To demonstrate the potential impact on CAS pension costs of CAS/ERISA harmonization and changing economic assumptions, we developed a model of an illustrative contractor defined benefit plan, based on a review of the model DOD uses, and reviewed by GAO’s Chief Actuary for actuarial soundness. For additional insight into the potential impact of harmonization, we gathered from the nine largest contractors projections (prior to and following harmonization and the Moving Ahead for Progress in the 21st Century Act (MAP-21)) of CAS pension costs for 2012 to 2016, where available. For the five selected divisions, we also gathered projections of pension costs for 2012 to 2016. We also interviewed the Project Director detailed to the CAS Board to lead the team that harmonized CAS with ERISA. We reviewed changes made to the CAS in December 2011 to harmonize it with ERISA.
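Much of the analysis described above turns on how a discount rate translates expected benefit payments into a measured liability. The sketch below is a minimal present-value illustration of that sensitivity; it is not GAO’s or DOD’s actuarial model, and the benefit stream and rates are assumed purely for illustration.

```python
# Minimal illustration of discount-rate sensitivity for a pension
# liability: present value of a fixed stream of expected benefit
# payments under two discount rates. Figures are hypothetical.

def present_value(payments, rate):
    """Discount a list of year-end payments back to today."""
    return sum(p / (1 + rate) ** (t + 1) for t, p in enumerate(payments))

# Assume $10 million in expected benefit payments each year for 30 years.
benefits = [10_000_000] * 30

pv_high = present_value(benefits, 0.07)  # higher long-term assumed rate
pv_low = present_value(benefits, 0.04)   # lower, market-based rate

# A lower discount rate produces a larger measured liability, which
# (all else equal) raises the unfunded amount and the pension cost.
print(f"Liability at 7 percent: ${pv_high:,.0f}")
print(f"Liability at 4 percent: ${pv_low:,.0f}")
```

This is why tying CAS discount rates to more volatile ERISA-based rates makes future CAS pension costs harder to forecast: small rate movements change the measured liability substantially.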
We also reviewed DOD policies related to CAS/ERISA harmonization, such as the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics memorandum providing guidance on harmonization. We conducted this performance audit from December 2011 to January 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Cristina T. Chaplain, (202) 512-4841 or chaplainc@gao.gov; Charles A. Jeszeck, (202) 512-7215 or jeszeckc@gao.gov. In addition to the contacts named above, Karen Zuckerstein, Assistant Director; Kimberley Granger, Assistant Director; Robert Bullock; Robert Dacey; Charles Ford; Laura Greifner; John Krump; Gene Kuehneman; Kenneth Patton; David Reed; Matthew Shaffer; Roxanna Sun; Daren Sweeney; Aron Szapiro; Roger Thomas; Frank Todisco; and Jocelyn Yin made key contributions to this report.
DOD contractors are among the largest sponsors of defined benefit pension plans in the United States and factor pension costs into the price of DOD contracts. Since the 2008 market downturn, these pension costs have grown--thereby increasing DOD contract costs--and recent changes in rules for calculating pension costs have raised the prospect of further cost increases. Given this possibility, GAO assessed how (1) contractor pension costs are determined; (2) DOD ensures the contractor pension costs it pays are appropriate; (3) DOD contractors' defined benefit pension plans compare with plans sponsored by similar companies; (4) pension costs have affected DOD contract costs and the factors that contributed to these pension costs; and (5) the harmonization of CAS with ERISA will affect the amounts DOD will pay in pension costs in coming years. To do this, GAO analyzed defined benefit pension plans for the largest contractors; interviewed contractor and DOD officials; and reviewed relevant laws and regulations, including changes made to harmonize CAS with ERISA. Labor costs are included in the prices contractors negotiate with the Department of Defense (DOD) and include pension costs, as pensions are a normal element of employee compensation. Contractors make two sets of calculations for their defined benefit pension plans, following two sets of standards: (1) Cost Accounting Standards (CAS), which determine how pension costs are allocated to government contracts; and (2) the Employee Retirement Income Security Act of 1974 (ERISA), which establishes the minimum contribution required to fund plans. In 2008, revised ERISA rules altered the minimum funding requirements, causing CAS costs and ERISA contributions to diverge further. ERISA contributions have therefore greatly exceeded CAS pension costs reflected in contract prices.
In December 2011, almost 4 years after ERISA changes took effect, the CAS Board, which is part of the Office of Management and Budget (OMB), made changes to CAS that harmonized them with ERISA in order to gradually reduce the difference between the two calculation methods. DOD centralizes its technical expertise for management and oversight of defined benefit pension plans. DOD contracting officers at the corporate level negotiate pension costs with contractors and receive technical support from a team of DOD actuaries. DOD audits projected and actual costs for contracts, including pension costs, to ensure they are allowable, allocable, and reasonable. The Federal Acquisition Regulation requires that employee compensation, including pensions, be reasonable. However, the pension costs used for compensation reviews can be affected not only by the value of benefits earned by employees, but also by factors such as asset returns and interest rates. Also, oversight processes do not clearly assign responsibility for assessing the reasonableness of pension benefits, including those for executives. GAO analyzed the defined benefit plans of the 10 largest DOD contractors and found that nearly all of the contractors--as well as a peer group of companies--maintain some form of tax-qualified defined benefit plan for their employees. The largest contractors invest in similar types of pension plan assets as their peer group, and do so somewhat more conservatively. GAO also found that CAS pension costs reported by the contractors grew considerably over the last decade, from less than $500 million in 2002 to almost $5 billion in 2011, although not all of these costs were allocated to DOD contracts. Contractor CAS pension costs grew as the market downturn increased unfunded liabilities. Although pension cost projections are highly sensitive to economic assumptions, both contractors and DOD officials expect CAS pension costs to increase starting in 2014 due to harmonization.
The CAS discount rates used to value liabilities will now be tied to the more volatile ERISA-based rates, making it harder to forecast future CAS pension costs and reducing the consistency of cost projections used in contract pricing. DOD issued limited guidance on projecting ERISA-based discount rates for CAS calculations, but lack of specificity in the guidance can lead to great variation among the rates contractors use. Moreover, when a contractor curtails a plan, DOD and the contractor must settle pension costs; however, the discount rates used for settlements were not updated as part of harmonization, meaning liabilities will be calculated differently under CAS and ERISA rules. A schedule has not been set for addressing this issue. GAO recommends that the Secretary of Defense clarify responsibility for and guidance on assessing pension reasonableness and determining discount rates for pension cost projections. GAO recommends that the CAS Board set a schedule for revising the parts of CAS that address the settlement of plan curtailments. DOD agreed with the recommendations to the Secretary of Defense, and OMB said that when the CAS Board meets it will consider a schedule for revision.
FMCSA’s mission is to reduce injuries, fatalities, and the severity of crashes involving large commercial trucks and buses conducting interstate commerce. With more than 1,000 staff members at headquarters, 4 regional service centers, and 52 division offices (one in each state, Washington, D.C., and Puerto Rico), FMCSA carries out this mission by administering and enforcing federal motor carrier safety and hazardous materials regulations and by gathering and analyzing data on motor carriers, drivers, and vehicles, among other things. Division offices partner with state agencies to conduct a variety of motor carrier oversight activities carried out by certified auditors, inspectors, and investigators. These oversight activities are funded by Motor Carrier Safety Assistance Program grants, which totaled about $165 million in fiscal year 2010. FMCSA’s total budget for fiscal year 2011 was approximately $550 million. The interstate commercial motor carrier industry is large and dynamic. According to Department of Transportation data, there were more than 500,000 active interstate carriers and intrastate hazardous materials carriers in 2010, including about 66,000 new carriers that applied to enter the industry. The vast majority of these carriers apply as freight carriers. While the largest motor carriers operate upwards of 50,000 vehicles, 80 percent of carriers are small—operating between 1 and 6 vehicles. Fatalities due to accidents involving large trucks (including vehicles operated by both freight and household goods carriers) and buses (operated by passenger carriers) generally declined from 2000 through 2009. FMCSA officials attributed the declines to actions taken by the federal government, the motor carrier industry, and safety groups. Fatalities and the estimated fatality rate for large trucks and buses are shown in figure 1. In 2009, more than 3,600 people were killed in crashes involving large trucks and buses. 
FMCSA oversees two main groups of interstate motor carriers: (1) private carriers, who run an internal trucking operation to support a primary business in another industry, such as a retail store chain, and (2) for-hire carriers that sell their trucking services on the open market. Private and for-hire motor carriers seeking to operate in interstate commerce must register once with FMCSA, and thereby obtain a U.S. Department of Transportation (USDOT) number—a unique identifier used for collecting and monitoring safety information acquired during audits, compliance reviews, inspections, and crash investigations. USDOT numbers are issued after carriers submit information about their business, such as the name of the business and the company’s officers, a mailing address, business and cell phone numbers, the tax number (employer identification number or social security number) used to identify the business entity, and other information. For private carriers, this submission completes the registration process, and they can begin operating. In contrast, for-hire carriers must also obtain operating authority, which dictates the type of operation the carrier may run and the cargo it may carry. In 2010, 36,209 private carriers registered and 29,421 for-hire carriers applied for operating authority with FMCSA. Before the August 2008 bus crash in Sherman, Texas, FMCSA had no dedicated process to identify and prevent chameleon carriers from applying for and receiving operating authority. At that time, a carrier could take on a new identity by applying online for operating authority using the same information (business name, address, phone number(s), and company officer name(s), or other information) on file for the old carrier. FMCSA did not have a process to identify these applications and thus would have granted operating authority to an apparent new entrant after the carrier submitted the appropriate data.
Immediately after the Sherman crash, FMCSA established the vetting program to review each new application for operating authority submitted by for-hire passenger carriers. Subsequently, in April 2009, FMCSA began to apply the vetting program to household goods carriers. Under this program, FMCSA conducts a two-step process: First, FMCSA uses a new applicant screening algorithm to electronically compare and match information contained in the carrier’s application to data for poorly performing carriers dating back to 2003. This match information is used by a dedicated team (called the vetting team) as indicators for further investigation. Second, the vetting team reviews each new for-hire passenger and household goods carrier’s application for completeness and accuracy and takes additional steps to determine whether the applicant is a chameleon carrier. For example, the team compares information in the application to information available on the Internet, including a carrier’s address; phone number; public filings with the state (e.g., articles of incorporation); and, if available, the company website. The vetting team also works with FMCSA division offices to take advantage of local officials’ knowledge of individual carriers. FMCSA’s ability to vet for-hire motor carriers that apply for operating authority stems from the Secretary’s statutory authority to withhold registration for operating authority from a carrier that does not meet federal safety fitness standards or is unwilling and unable to comply with all applicable statutes and regulations. It does not have this authority to vet and, therefore, potentially reject the registrations of private carriers, which may begin to operate as soon as they receive a USDOT number. If the computer-matching process or FMCSA division office review identifies a suspected chameleon carrier, FMCSA requests clarification from the applicant.
If the carrier does not respond or the response indicates the applicant is attempting to become a chameleon carrier, FMCSA rejects the application. The entire vetting process, including the electronic matching and the application review, can take anywhere from a few weeks to more than 2 months depending on several factors, including how long it takes the applicant to respond to any FMCSA requests. After a carrier registers for a USDOT number, FMCSA uses the new entrant safety assurance program to examine all new entrants registered to operate in interstate commerce—including all for-hire and private passenger, household goods, and freight carriers—and intrastate hazardous materials carriers. Under this program, which began in 2003, carriers are required to undergo a safety audit within 18 months of obtaining a USDOT number and beginning interstate operations. The purpose of this audit is to determine whether carriers are knowledgeable about and compliant with applicable safety regulations. In 2009, FMCSA added a set of six yes/no questions to the safety audit designed to elicit information indicative of any connections with other carriers to help the certified auditors and investigators that conduct these audits identify potential chameleon carriers. At the end of the audit, a carrier may pass or fail. If the carrier fails the audit, the carrier may continue to operate, but must submit a plan for corrective action. Upon receiving written confirmation that it has failed the audit, a carrier has between 45 and 60 days to provide an acceptable response or request an administrative review of the safety audit findings before the new entrant registration is revoked and the carrier is no longer permitted to operate in interstate commerce. FMCSA operates other programs that identify suspected chameleon carriers.
For example, officials may identify suspected chameleon carriers during compliance reviews, which are in-depth examinations of carriers identified as high crash risks, or during roadside inspections of vehicles that include checks for compliance with driver and maintenance requirements. FMCSA has also implemented a new safety oversight initiative—the Compliance, Safety, Accountability program—under which it plans to introduce several new investigative programs, including targeted roadside inspections, off-site investigations, and on-site focused investigations. Like compliance reviews and roadside inspections, these new oversight programs may identify a suspected chameleon carrier during either a review or a follow-up review or inspection initiated to gather additional evidence on the carrier. Identifying a suspected chameleon carrier is the first step in determining whether the carrier is attempting to conceal its identity. FMCSA and state officials then conduct an investigation. When federal or state investigators or auditors first suspect that a carrier may be a chameleon, they work with officials in one of FMCSA’s 52 division offices and attorneys in four regional service centers to gather evidence and assemble the documentation needed to demonstrate that a new carrier is the same entity as a prior carrier and is attempting to evade a prior FMCSA enforcement action or a poor safety record. After gathering as much information as possible, a division office provides the evidence to a regional service center, where FMCSA attorneys decide whether to initiate a legal process in order to prove that the new carrier is responsible for the actions of the prior carrier (referred to as “corporate successor liability”). As part of their evaluation, the attorneys assess the strength of the evidence and give highest priority to those cases involving carriers with serious safety violations.
If the attorneys determine that the evidence for a chameleon carrier case is insufficient, FMCSA does not pursue the case and the carrier continues its operations. The carrier is only recognized as a chameleon once FMCSA proves that the carrier is a chameleon based on the applicable legal standard or a carrier admits it created a new identity to evade detection. Once FMCSA gathers the necessary evidence against a chameleon, FMCSA issues a notice of claim to tie the history of the chameleon carrier to that of its predecessor. The notice of claim may include several enforcement actions, including ordering a carrier to cease operations—called out-of-service orders—for safety violations and failure to pay civil penalties. For example, one of the fines FMCSA assesses on chameleon carriers is for evading regulations, which ranges from $200 to $500 for the first violation and $250 to $2,000 for any subsequent violation, as established by regulation. FMCSA may assess higher civil penalties for carriers that are proven chameleons and can assess any unpaid penalties of the predecessor carrier to the successor carrier. FMCSA does not determine the prevalence of chameleon carriers because doing so would require extensive investigation of the tens of thousands of new applicants that register with FMCSA each year and, in some cases, the completion of a legal process. However, FMCSA, state enforcement officials, and industry and safety association representatives we interviewed offered general, varying impressions of the number of chameleon carriers in the motor carrier industry. For example, a number of FMCSA and state officials with whom we spoke believed that while the number of chameleon carriers is a relatively small proportion of new entrant carriers, it is also a serious or growing problem.
In addition, groups of officials from Florida, Georgia, Illinois, and North Carolina stated that chameleon carriers are either a serious or a growing problem that they encounter regularly. Given the volume of new applicants and the necessary resources to investigate them, FMCSA uses the vetting program to focus its review of new applicants on two groups of carriers—for-hire passenger and household goods carriers. FMCSA has chosen to vet all applicants in these groups for two reasons: (1) according to officials, these two groups pose higher safety and consumer protection concerns than other carrier groups and (2) it does not have the resources to vet all new carriers and these two groups present a manageable number. As part of the vetting program, FMCSA uses registration data to compare information for every applicant in these two groups to information from previously registered carriers to identify any matches. Officials use these results to inform decisions about whether to grant operating authority to the applicants. According to FMCSA, however, data analysis by itself cannot positively identify chameleon carriers that are purposefully trying to evade oversight; matches do not always signify an issue. For example, vehicle data can match when new carriers legitimately have purchased and are using vehicles that were once owned by other carriers. Company names also can match when carriers independently selected the same name. Therefore, while data analysis is a helpful tool, FMCSA must conduct further investigation to determine the reasons for an apparent relationship between carriers and, unless the carrier admits to being a chameleon, undertake a legal process to determine whether the carrier is a chameleon. (Our assessment of the processes used to demonstrate a carrier is a chameleon is discussed later in this report.) 
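The electronic matching step described above can be thought of as comparing a new application's fields against records of previously registered carriers and reporting any overlaps as indicators. The sketch below is a simplified illustration of that comparison; it is not FMCSA's actual screening algorithm, and the field names and carrier records are hypothetical. As the text notes, a match is only an indicator for further review, not proof of a chameleon.

```python
# Simplified illustration of the screening step: compare a new
# application's identifying fields against prior-carrier records and
# report any matching fields as indicators. Records are hypothetical.

FIELDS = ("company_name", "address", "phone", "officer", "tax_id")

def match_indicators(application, prior_carriers):
    """Return {usdot_number: [matched fields]} for every prior carrier
    that shares at least one identifying field with the application."""
    indicators = {}
    for carrier in prior_carriers:
        matched = [f for f in FIELDS
                   if application.get(f) and application.get(f) == carrier.get(f)]
        if matched:
            indicators[carrier["usdot"]] = matched
    return indicators

new_app = {"company_name": "Acme Lines", "address": "1 Main St",
           "phone": "555-0100", "officer": "J. Doe", "tax_id": "12-345"}
prior = [
    {"usdot": 111111, "company_name": "Acme Lines",
     "address": "1 Main St", "phone": "555-0199",
     "officer": "J. Doe", "tax_id": "98-765"},
    {"usdot": 222222, "company_name": "Beta Freight",
     "address": "9 Oak Ave", "phone": "555-0777",
     "officer": "A. Smith", "tax_id": "55-555"},
]

print(match_indicators(new_app, prior))
# carrier 111111 matches on company name, address, and officer
```

A vehicle purchased from a defunct carrier or a coincidentally shared name would surface here too, which is why the matches feed a human review rather than an automatic rejection.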
While FMCSA’s exclusive focus on passenger and household goods carriers limits the vetting program to a manageable number, it does not account for the risk presented by chameleon carriers in the other groups that made up 98 percent of new applicants in 2010. In our view, data analysis can be used to target other types of new applicants—including freight carriers—that are more likely to be chameleons for further investigation as they register or apply for operating authority. While FMCSA only has statutory authority to accept or reject applications of for-hire motor carriers, examining all new applicant carriers, including private carriers, as they register for a USDOT number with FMCSA is important to provide officials with information about all carriers subject to their oversight activities. FMCSA and other federal agencies use data analysis to target entities or items with certain risk factors. Specifically, FMCSA uses state inspection and other data to identify carriers with a poor safety record for follow-up reviews. In addition, the Department of Homeland Security uses a targeting strategy, which includes a computerized model, to help select imported containers for additional review, inspection, or both. Regularly using data analysis for targeting new applicants would allow FMCSA to expand its examinations of newly registered carriers to include new applicants of all types using few or no additional staff resources, as discussed in the next section of this report. As we have previously reported, federal agencies need to assess the risks they face to determine the most effective allocation of federal resources, including how best to distribute resources for investigative and enforcement-related activities. To demonstrate that it is possible to use data analysis to target new applicants for further investigation, we developed a method and applied it to FMCSA data to identify carriers with chameleon attributes.
We defined such carriers as those that met two criteria:

1. They submitted registration information that matched information for a previously registered carrier.
2. The previously registered carrier had a motive for evading detection.

We use the term “motive” to describe a history of safety violations or a bankruptcy filing that might motivate a carrier to become a chameleon carrier. These criteria are similar to those FMCSA uses during the electronic matching step in the current vetting process for for-hire passenger and household goods carriers. However, we applied our method to all carriers and established a threshold for selecting new applicants for further investigation, whereas FMCSA limits its electronic matching to for-hire passenger and household goods carriers and does not have a mechanism or threshold for determining which new applicants to investigate further because it vets all the carriers in these two groups. An example of a carrier that met our criteria was a 2009 new applicant that had submitted registration information with the same company name, company officer, and phone number as a previously registered carrier that had been in a crash and ordered out-of-service by FMCSA. An example of a carrier that did not meet our criteria was a 2008 new applicant that matched a previously registered carrier on six different pieces of information—address, company name, company officer, Dun & Bradstreet number (a unique nine-digit number used to identify a business location), employer identification number, and phone number—but the previously registered carrier did not have a motive for evading detection, as defined by our criteria for this analysis.
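The two criteria above can be sketched as a simple flagging rule: a new applicant is flagged only when it both matches a prior carrier and that prior carrier had a motive to evade detection. The records and field choices below are hypothetical illustrations; GAO's actual analysis used FMCSA registration, enforcement, and bankruptcy data.

```python
# Sketch of the two-criteria rule: flag a new applicant only if it
# (1) matches a previously registered carrier on identifying fields and
# (2) that prior carrier had a motive to evade detection (a history of
# safety violations or a bankruptcy filing). Records are hypothetical.

FIELDS = ("company_name", "officer", "phone", "tax_id")

def has_chameleon_attributes(applicant, prior_carriers):
    for prior in prior_carriers:
        matches = [f for f in FIELDS
                   if applicant.get(f) and applicant.get(f) == prior.get(f)]
        motive = prior["safety_violations"] or prior["bankrupt"]
        if matches and motive:
            return True
    return False

prior_carriers = [
    # Matched fields but no motive: not flagged (like the 2008
    # six-field match described above).
    {"company_name": "Delta Haul", "officer": "B. Lee",
     "phone": "555-0111", "tax_id": "11-111",
     "safety_violations": False, "bankrupt": False},
    # Matched fields plus a violation history: flagged (like the
    # 2009 applicant described above).
    {"company_name": "Echo Trans", "officer": "C. Ray",
     "phone": "555-0222", "tax_id": "22-222",
     "safety_violations": True, "bankrupt": False},
]

applicant = {"company_name": "Echo Trans", "officer": "C. Ray",
             "phone": "555-0999", "tax_id": "33-333"}
print(has_chameleon_attributes(applicant, prior_carriers))  # True
```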
Because we were interested in demonstrating a method of targeting new applicants as they registered or applied for operating authority, and not specifically in counting the number of chameleons that might currently be operating, we did not attempt to exclude carriers that never operated or ceased to operate after they registered with FMCSA. This approach is consistent with the purpose of our analysis, which was to provide an objective, efficient means of identifying carriers that may warrant additional investigation as they enter the motor carrier industry, not specifically to identify chameleon carriers. For a detailed discussion of our data analysis method, see appendix II. Through our data analysis, we identified 1,136 new applicant carriers with chameleon attributes in 2010—an increase from 759 in 2005. During this 6-year period, carriers with chameleon attributes accounted for about 1.7 percent of the approximately 326,000 new applicants that registered and were subject to FMCSA oversight activities. Of the carriers with chameleon attributes, freight carriers made up about 94 percent, passenger carriers about 3 percent, household goods carriers about 2 percent, and carriers with authority to operate multiple carrier types (any combination of freight, passenger, and household goods) less than 1 percent. These percentages remained fairly stable over the 6-year period. Because freight carriers represented the majority of carriers, they showed the largest numerical increase of carriers with chameleon attributes, from 724 carriers with chameleon attributes in 2005 to 1,082 such carriers in 2010. (See table 1.) Although freight carriers accounted for 94 percent of the carriers with chameleon attributes that we identified, freight carriers also made up about the same percentage of all new applicants (about 93 percent).
When we looked at the rates at which carriers of different types had chameleon attributes, we found that passenger carriers were more likely to have chameleon attributes than were carriers of other types. Specifically, over the 6-year period from 2005 through 2010, the percentage of new applicant passenger carriers with chameleon attributes was higher in every year (ranging from 1.9 to 3.3 percent) than the percentages for freight carriers (ranging from 1.6 to 1.9 percent) and household goods carriers (ranging from 0.6 to 1.2 percent). (See fig. 2.) One concern with our approach, which FMCSA raised in connection with our data-matching efforts as well as its own, is that the matching may not give an accurate picture of the total number of chameleon carriers for two reasons. First, data matching could identify carriers that have legitimate business reasons for registering a new company that appears to be related to an older one; second, similar or even identical registration information may inadvertently or coincidentally be submitted by unrelated companies. We were able to address this concern in part by analyzing data about whether an older carrier had a motive to evade detection—information that we and FMCSA believe indicates that a new carrier is more likely to be a chameleon. In particular, we looked at the relative likelihood that old carriers with and without a motive would match a new applicant. If data matches were only the result of carriers having legitimate business reasons for assuming a new identity or of coincidental similarities in registration information, then we would expect old carriers with a motive to be no more likely to match new applicants than old carriers without a motive. In fact, however, we found that old carriers with a motive were roughly twice as likely to match a new applicant in 2009 or 2010 as were carriers without a motive. 
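The reasoning behind this likelihood comparison can be illustrated with invented counts; the figures below are hypothetical, chosen only to show the calculation, and do not reproduce GAO's underlying data:

```python
# Hypothetical counts, invented solely to illustrate the comparison.
old_with_motive = 10_000       # previously registered carriers with a motive
matched_with_motive = 200      # of those, the number matching a new applicant

old_without_motive = 10_000    # previously registered carriers without a motive
matched_without_motive = 100   # of those, the number matching a new applicant

rate_with = matched_with_motive / old_with_motive
rate_without = matched_without_motive / old_without_motive

# If matches reflected only coincidence or legitimate re-registration,
# this ratio would be near 1; a ratio near 2 mirrors the finding above.
likelihood_ratio = rate_with / rate_without
```

A ratio well above 1 is what suggests the matching is picking up evasion rather than only coincidence.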
This suggests that the data-matching component of our analysis was effective in detecting carriers with chameleon attributes and not just carriers with legitimate reasons to assume new identities or coincidental similarities to previously registered carriers. While this test demonstrates that our method identified carriers with a motive to evade detection, further investigation would be needed to confirm whether any of the carriers on our list of carriers with chameleon attributes actually are chameleons. We believe using the two criteria of matching registration information and a motive to evade detection provides a sound basis for targeting. Using a risk-based, data-driven approach such as the one we outline would allow FMCSA to use available resources to target all types of carriers, including freight, and then periodically evaluate the effectiveness of the methodology and adjust its method based on the outcomes of follow-up investigations. Without such a method, FMCSA cannot target a manageable group of new applicant carriers of all types for investigation and possible enforcement action, an important consideration given FMCSA’s staffing levels. The carriers we identified as having chameleon attributes presented high safety risks relative to new applicants without these attributes. Through our analysis, we found that crashes involving carriers with chameleon attributes resulted in 217 fatalities and 3,561 injuries from 2005 through 2010. Moreover, new applicants with chameleon attributes from 2005 through 2010 were three times more likely than all other new applicant carriers to later be involved in a severe crash—one in which there was a fatality or injury. As table 2 shows, 18 percent of carriers with chameleon attributes were involved in a severe crash at some point between their time of registration and the end of 2010, compared with 6 percent of new applicant carriers without these attributes. 
In addition, carriers with chameleon attributes were three times more likely than all other new applicant carriers to be assessed a fine by FMCSA for violating safety regulations. Specifically, 6 percent of carriers with chameleon attributes were assessed a fine at some point between their time of registration and the end of 2010, compared with 2 percent of the rest of the new applicant population. However, carriers with chameleon attributes were less likely than all other new applicants to be placed out-of-service for safety violations by FMCSA during this same period. FMCSA’s vetting program, established in August 2008 immediately following the Sherman, Texas, bus crash, is designed to assess the ability of an applicant for new operating authority to comply with FMCSA motor carrier safety regulations and, in part, to determine whether the new applicant may be a chameleon carrier. The program—which is FMCSA’s primary effort to identify chameleon carriers—is labor-intensive, according to officials, requiring detailed reviews of each application, national consumer complaint database queries, and outreach to division offices to obtain additional information about new applicants. Carriers that make it through the vetting process having met FMCSA’s standards for fitness, willingness, and ability to comply with all applicable federal statutes and regulations are granted operating authority. Reasons for denying operating authority include an assessment that a new applicant may be a chameleon carrier. Although the vetting program is labor-intensive, it is effective because it allows FMCSA to evaluate a carrier’s potential for compliance, including any indicators that the carrier may be a chameleon, before the carrier obtains operating authority. 
At this stage, the burden is on the carrier to provide FMCSA with any information it needs to evaluate the carrier’s application, and FMCSA can withhold operating authority from a carrier that it suspects of being a chameleon. After a carrier obtains operating authority, however, FMCSA is required to gather evidence and prove that the carrier is a chameleon—a process that calls for significantly more resources, as discussed later in this report. Therefore, as FMCSA officials and safety advocates have observed, it is more effective for FMCSA to identify chameleon carriers up front through vetting than it is to pursue them after they have obtained operating authority. FMCSA recognizes the benefits of identifying chameleon carriers early, before they obtain operating authority. However, FMCSA officials stated they do not have the resources to vet all for-hire carriers that apply for new operating authority. Therefore, as noted, FMCSA focuses the vetting program on for-hire passenger and household goods carriers, which together accounted for about 2 percent of the approximately 66,000 new applicant carriers in 2010. FMCSA has selected these two types of carriers because it sees the chameleons among them as presenting risks to consumers. Specifically, crashes involving unsafe passenger carriers, such as the Sherman bus crash, may have multiple fatalities. In addition, passenger carriers with safety violations have a motive to become chameleon carriers to conceal their history of violations from consumers, as well as from FMCSA. Similarly, unscrupulous household goods carriers that have defrauded consumers, such as by holding customers’ property hostage until they pay more than the agreed price for delivery, have a reason to become chameleon carriers to avoid association with complaints from defrauded consumers. 
Having a statutory consumer protection responsibility, FMCSA vets every for-hire passenger and household goods carrier so that consumers will have greater assurance when they buy bus tickets or contract with movers that the carriers they are dealing with are safe, honest, and in compliance with FMCSA regulations. From August 2008 through May 2011, FMCSA vetted 5,777 for-hire passenger and household goods carriers. Table 3 shows the results of FMCSA’s vetting program, including the number of carriers that were approved or rejected, withdrew, or switched their application to operate as a freight carrier rather than a household goods carrier. FMCSA officials believe, but cannot be certain, that some of these carriers withdrew or switched their application to avoid the vetting program. FMCSA officials credit the vetting program with helping to prevent and deter unsafe for-hire passenger and household goods carriers, which can include potential chameleon carriers, from obtaining operating authority. However, our analysis found that the vast majority of passenger and household goods carriers do not have chameleon attributes; FMCSA is therefore using the majority of its program resources to vet carriers that may not represent a higher risk of being chameleons. At the same time, the current vetting program excludes 98 percent of all new applicants, including all freight carriers as well as private passenger carriers. Moreover, according to our analysis, freight carriers present safety risks that are as great as or greater than those presented by passenger carriers. As discussed, freight carriers made up 94 percent of the carriers we identified with chameleon attributes from 2005 through 2010, and carriers with chameleon attributes were about three times more likely than all other new applicants to be involved in a severe crash or to be assessed a fine by FMCSA for a safety violation. 
In addition, according to 2009 Department of Transportation crash data, the number of fatalities per fatal crash is nearly the same for large trucks (1.13) as for buses (1.15), even though buses have more occupants. Furthermore, the number of people who died in truck crashes in 2009 (3,380) is more than 13 times greater than the number who died in bus crashes (254). (See fig. 3.) As previously noted, federal agencies must assess the risks they face to determine the most effective allocation of federal resources, including the best distribution of resources for enforcement-related activities. Other federal organizations have reviewed the vetting program and recommended that FMCSA (1) show the program is effective and (2) use a risk-based approach to target its limited resources before expanding the program to all new freight carrier applicants. First, NTSB recommended that FMCSA add a performance evaluation component to the vetting program to show whether the new applicant screening algorithm is effectively preventing carriers with a history of evading safety requirements from continuing to operate. FMCSA agreed with this recommendation and is working to implement it. The results of the vetting program appear to indicate that it has value in preventing many carriers from obtaining operating authority, but its effectiveness remains to be determined. As our presentation of FMCSA’s data in table 3 shows, 1,408 of the 5,777 applicants for new operating authority were rejected and another 594 withdrew their applications. Second, the Department of Transportation’s Inspector General reported that expanding the vetting program to include freight carriers would require a risk-based approach, since FMCSA has limited resources to examine all new applicants. Our analysis suggests that a risk-based approach would allow such an expansion with few or no additional staff resources. 
Specifically, with six dedicated specialists, FMCSA vetted, on average, 175 for-hire passenger and household goods carriers per month from August 13, 2008, through May 18, 2011 (5,777 carriers divided by 33 months). Expanding the program to include all the freight carriers with chameleon attributes that we identified using our data-driven, risk-based approach would require FMCSA to vet, on average, an additional 74 freight carriers per month (5,329 freight carriers divided by 72 months), or a total of 249 carriers per month. If, for example, six specialists can vet an average of 175 carriers per month, or about 29 carriers per specialist, then eight to nine specialists (or two to three more specialists) should reasonably be expected to vet 249 carriers per month, on average, including all the passenger and household goods carriers that FMCSA currently vets, plus the freight carriers we identified with chameleon attributes. Alternatively, if FMCSA were to modify its current program and vet only carriers with chameleon attributes identified through data analysis, it could vet all passenger, household goods, and freight carriers with chameleon attributes using fewer specialists than it now uses. FMCSA officials stated that, given the safety risks associated with passenger carriers, they would be unwilling to exclude any of them from the vetting program. Yet no matter which approach FMCSA takes to vetting passenger carriers, the use of data analysis would allow it to expand the vetting program to include freight carriers with chameleon attributes and give FMCSA an early opportunity to detect and deny operating authority to freight carriers that pose safety risks. Newly registered motor carriers, including those that were vetted, are required to enter the new entrant safety assurance program and undergo a safety audit. 
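The staffing arithmetic above, using only figures stated in the report, can be checked as follows (the variable names are ours; rounding follows the text):

```python
# Workload arithmetic from the report's own figures.
vetted_carriers = 5_777        # for-hire passenger and household goods,
vetting_months = 33            # vetted Aug. 13, 2008-May 18, 2011
current_per_month = vetted_carriers / vetting_months          # ~175

freight_chameleon = 5_329      # freight carriers with chameleon attributes,
freight_months = 72            # identified over 2005-2010 (6 years)
added_per_month = freight_chameleon / freight_months          # ~74

total_per_month = round(current_per_month) + round(added_per_month)   # 249

specialists = 6
per_specialist = round(current_per_month) / specialists       # ~29 carriers
specialists_needed = total_per_month / per_specialist         # between 8 and 9
```

The result of roughly 8.5 specialists is consistent with the report's estimate that eight to nine specialists could handle the expanded workload.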
This audit is mainly designed to educate new entrant carriers about federal motor carrier safety regulations, ensure that they are able to comply with these regulations, and require them to correct any deficiencies before continuing to operate. The audit now includes a set of six yes/no questions that FMCSA added in 2009 to help auditors elicit information from new entrants about connections they may have with other carriers—a characteristic of chameleon carriers. These questions provide a cursory review of whether new entrants may be chameleon carriers. The new entrant safety assurance program provides the first opportunity for FMCSA to assess freight and private passenger carriers, which are not currently vetted. The program does not, however, allow FMCSA to deny the new entrant registration of a carrier simply because it suspects that the carrier may be a chameleon. Instead, freight and private passenger carriers acquire provisional registration when they submit new entrant applications to FMCSA, often months before they undergo a safety audit, and it is not as easy for FMCSA to prevent them from operating as it is to deny operating authority to for-hire passenger and household goods carriers through the vetting program. FMCSA can place new entrant carriers out-of-service for at least 1 of 16 safety violations, but not because it suspects the carrier of being a chameleon. According to representatives responsible for safety audits in the states we contacted, the set of six yes/no questions added to the safety audit helps raise new staff awareness of chameleon carriers and reminds more experienced staff to watch for them. Yet, they said, the questions may not help them identify chameleon carriers because there is little guidance on how to use the questions. 
Specifically, FMCSA’s electronic Field Operations Training Manual—a guide that helps standardize audits across all states and includes law enforcement best practices—provides instructions for staff to follow when conducting the safety audit but contains no guidance for these questions, even though it includes guidance for all other questions asked during the audit. According to FMCSA, the computer application used during the safety audit—called SENTRI—provides some guidance on what constitutes an affiliation with another carrier and how to document responses to these questions. However, this guidance does little to help staff distinguish legitimate carriers from chameleons, does not provide follow-up questions that could help them make this distinction, and does not require them to collect any evidence that could be used during the enforcement process at a later date. As a result, staff lack direction on how to use the yes/no questions to distinguish a chameleon from a legitimate carrier, what follow-up questions to ask when carriers provide information, what documents to request from a suspected chameleon carrier, and how to document in the safety audit report suspicions that a carrier may be a chameleon. The representatives told us the lack of guidance on how to use the questions made it difficult to distinguish chameleon carriers from legitimate ones. For example, according to representatives of Pennsylvania’s Bureau of Transportation and Safety, an auditor could mistakenly flag one carrier as a suspected chameleon for leasing vehicles from another carrier when the leasing can be a legitimate business transaction between the two companies. Florida Highway Patrol officers commented that a question about whether a carrier was affiliated with another was not useful because corporate officers may have legitimate professional associations with corporate officers of other carriers. 
According to federal internal control standards, federal agencies such as FMCSA are to develop and clearly communicate guidance that flows from agency priorities. Without guidance for staff on how to use the six yes/no questions related to identifying chameleon carriers, FMCSA cannot ensure that the new entrant program will effectively identify such carriers. In commenting on our findings, FMCSA stated that as part of a larger effort to improve the new entrant program, it is reviewing the questions used to detect chameleon carriers during the safety audit process, which is where FMCSA believes it can have the greatest impact. In addition, FMCSA plans to ensure that all the questions are clear, including those used to identify chameleon carriers, and that auditors understand how to answer them properly in order to obtain the best information. According to FMCSA, these efforts are to be completed by summer 2012 and will include associated guidance and training for all new entrant auditors. Once a motor carrier passes FMCSA’s new entrant safety audit, no other federal investigative program is specifically designed to identify chameleon carriers, including compliance reviews and roadside inspections, which are typically used to examine high-risk carriers. Compliance reviews examine carriers that have been identified as high crash risks through an assessment of accident reports or safety performance records. Roadside inspections check carriers for compliance with driver and vehicle maintenance requirements. Neither of these investigations is designed to identify chameleon carriers, but either can incidentally lead to identifying such carriers. 
For example, safety investigators conducting compliance reviews or roadside inspectors have identified chameleon carriers because they happened to see documentation (e.g., a driver’s hours-of-service logbook or vehicle maintenance records) labeled with another carrier’s name, noticed the vehicle marked with another carrier’s name or USDOT number under a coat of fresh paint, or recognized a suspected chameleon carrier in the local area. During one roadside inspection in Florida, an inspector noticed a freight truck displaying a makeshift cardboard sign with the carrier’s name written in magic marker. The crude sign, along with the driver’s suspicious behavior, led the inspector to notify FMCSA, which determined the carrier was a suspected chameleon carrier. While such evidence may alert investigators to possible chameleon carriers, New York officers said that it is difficult to identify potential chameleon carriers during roadside inspections because drivers may not carry the documentation inspectors need to evaluate a carrier’s legitimacy. FMCSA faces several constraints in pursuing enforcement actions against suspected chameleon carriers. As a result of a 2010 decision by an FMCSA Assistant Administrator, it is not clear whether a state or a federal legal standard should be used by FMCSA to demonstrate that a carrier is a chameleon. This uncertainty can lead to differing enforcement actions across states and has increased the time necessary to pursue chameleon carrier cases. Other constraints include a resource-intensive legal process and limitations in FMCSA’s enforcement authorities. FMCSA is pursuing options to address these constraints. The lack of a single standard for demonstrating that a carrier is a chameleon—or, in legal terminology, the corporate successor of a previous carrier that assumed a new identity to evade detection— constrains FMCSA’s ability to take enforcement actions. 
The legal standard for determining corporate successor liability varies among states, and until 2006, FMCSA used the applicable state standard to determine liability. In a 2006 decision, an Administrative Law Judge applied a federal legal standard rather than a state standard to demonstrate corporate successor liability. However, a 2010 decision by an FMCSA Assistant Administrator left an open question as to which standard—federal or state—FMCSA should use to determine motor carrier successor liability. For a more detailed discussion of state corporate successor liability within the motor carrier industry, see appendix III. Absent a single federal legal standard, FMCSA attempts to gather evidence to meet both the federal standard and the state standards that could be applicable in a case. Applying multiple standards may lead to enforcement actions that differ from state to state, and according to FMCSA officials, gathering evidence to meet both the federal and the applicable state standard has increased the amount of time necessary to pursue enforcement actions against chameleon carriers. For example, FMCSA officials in the Southern Service Center told us that before the 2010 decision they spent 3 to 6 weeks pursuing several enforcement actions against chameleon carriers, but now spend 6 to 12 months pursuing similar actions. The following examples illustrate how corporate successor liability laws vary among the states, resulting in enforcement actions that differ from state to state, as some carriers may choose to incorporate in states where demonstrating corporate successorship is relatively difficult. Under Texas law, an acquiring entity may not be held responsible or liable for any liabilities of the transferring entity unless the acquirer clearly assumes responsibility for the liabilities. FMCSA officials recognize that it is difficult to pursue enforcement cases in Texas unless the carrier admits to being a chameleon. 
It is also difficult to demonstrate corporate successorship in New York, according to FMCSA and state officials. For FMCSA to pursue a chameleon carrier case in New York, the prior carrier must have stopped operating before the new carrier started operating; if the two carriers operated concurrently at any point, FMCSA could have difficulty pursuing the case under the New York standard. (See Mitchell v. Suburban Propane Gas Corp., 581 N.Y.S.2d 927 (1992); Morales v. City of New York, 849 N.Y.S.2d 406 (2007).) In Florida, the same people (officers, directors, and stockholders) must be involved in both the former and the current business for the carrier to be considered a chameleon. Suspected chameleon carriers may identify another person, such as a spouse or other relative, as the officer of the new company, making it difficult for FMCSA to pursue the case. However, FMCSA officials in the Midwestern and Eastern Service Centers stated that the 2010 decision by the Assistant Administrator did not greatly affect their pursuit of chameleon carrier cases because some of the state standards within their regions (e.g., Pennsylvania, Illinois, and Michigan) generally mirror the federal standard. Therefore, collecting evidence to meet both the federal and applicable state standard only slightly increased the amount of evidence needed and had a minimal effect on the amount of work required to pursue chameleon carrier cases. Other suspected chameleon carriers may also pose risks and continue to operate because FMCSA does not have the resources to pursue enforcement actions against them. Specifically, FMCSA issues a NOC charging a suspected chameleon carrier with violating a federal regulation in effect against the carrier’s presumed predecessor, as shown in figure 4. The carrier can decide to pay the fine, contest the NOC, or fail to respond to the NOC. If the carrier fails to respond to the NOC, FMCSA orders the carrier out-of-service after 90 days. 
If the carrier contests the NOC, the process provides four alternative routes, each with a number of steps. If FMCSA is able to demonstrate that the suspected chameleon carrier and its presumed predecessor are the same entity, the process concludes with a final agency order, which allows FMCSA to take the enforcement actions identified in the order. For example, a final agency order may require the successor carrier to pay the fines owed by the predecessor carrier, adjust the successor carrier’s rating to reflect the entire history of the company, or order the successor carrier to cease operations. However, if at any point during the investigation or the NOC process the carrier admits to being a chameleon carrier, pays any penalties associated with violations, and comes into compliance, FMCSA can merge the carrier’s histories and records without going through the entire NOC process. Merging the carriers’ safety records helps ensure that FMCSA has an accurate account of the carrier’s safety record under one USDOT number for monitoring the carrier in the future. As figure 4 shows, several steps in the NOC process have time frames set for completion while others do not. The required time frames alone add up to several weeks or months, and the additional time that may be needed for the remaining steps, such as a formal hearing, can further prolong the process. The time taken to complete the NOC process varies widely: FMCSA officials said cases usually take weeks—from the NOC to the final agency order—but can take anywhere from months to years. According to state officials, as well as industry association and safety advocate groups, FMCSA has limitations on its authority that have hampered the effectiveness of its enforcement actions. Specifically, FMCSA cannot preclude carriers, including suspected chameleon carriers, from acquiring a new USDOT number. 
A new number allows a carrier to operate under a new identity and thus avoid any association with its history of operating under another USDOT number, including any fines or out-of-service orders incurred under its former identity. FMCSA officials have stated that it is not illegal for a carrier to apply for multiple USDOT numbers because carriers may have legitimate business reasons for needing more than one number. For example, carriers that operate in different locations may want to separate their business practices across multiple routes or businesses. However, carriers that apply for multiple USDOT numbers may also do so to prevent or avoid subsequent detection as chameleon carriers. To strengthen its enforcement efforts against chameleon carriers, FMCSA stated that it is drafting a rule in response to a congressional mandate that would enable it to deny an application for operating authority of a for-hire motor carrier if any of the company’s officers has engaged in a pattern or practice of avoiding compliance, or concealing noncompliance, with safety regulations. It also stated that a recently issued Notice of Proposed Rulemaking would adopt new procedures for issuing orders to cease operations and for consolidating safety records against chameleon carriers. FMCSA expects to issue both rules later this year. In addition, the maximum fines that FMCSA is legally permitted to impose on motor carriers, including chameleon carriers, are low, which constrains the agency’s ability to take enforcement actions. According to a recent NTSB report, the fines imposed on carriers for violations are low and do not serve as an effective deterrent. NTSB further concluded that the fines for serious violations are so low that some carriers, especially passenger carriers, may treat them as a cost of doing business. FMCSA and state officials, as well as industry association representatives, have also expressed concerns about the deterrent value of FMCSA’s fines. 
For example, a civil penalty that can be assessed against chameleon carriers, such as for evasion of regulations, ranges from $200 to $500 for the first violation and from $250 to $2,000 for any subsequent violation. This penalty is potentially less than the cost to apply for operating authority, which is set at $300. FMCSA officials acknowledged that setting fines at the appropriate levels is a delicate balancing act: the fines must be high enough for carriers to view them as a deterrent and not simply as a cost of doing business, but not so high that carriers choose to become chameleons to avoid payment. Nonetheless, FMCSA is seeking legislation to increase the statutory fines, as discussed in the following section. To address constraints on its enforcement efforts and make it easier to identify chameleon carriers, FMCSA provided input to congressional committees on a legislative proposal. This proposal included language establishing a federal legal standard for determining corporate successorship that would set a single standard nationwide. This standard would expressly preempt state corporate successorship laws solely for purposes of federal motor carrier safety. According to FMCSA officials, the federal standard would be consistent with FMCSA’s mission to ensure motor carrier safety and would establish FMCSA’s authority over chameleon carrier corporate successorship issues. The federal standard would include specific criteria for determining what constitutes a successor carrier and would eliminate the need for FMCSA to apply various state laws in its chameleon carrier cases. Furthermore, a single nationwide standard would provide uniformity in FMCSA’s enforcement actions against chameleon carriers. In addition, such a standard could discourage carriers from incorporating their businesses in states where corporate successorship is difficult to demonstrate—a phenomenon that FMCSA officials suspect takes place now. 
For example, corporate successor liability is generally more difficult to prove in New York than it is in New Jersey and Pennsylvania, which may encourage carriers that understand the legalities of corporate successorship to consider reincorporating in New York. In addition, FMCSA is pursuing two other means to achieve a single federal legal standard. First, officials are monitoring chameleon carrier cases to identify one that could be used to clarify the 2010 Assistant Administrator’s decision. An administrative decision indicating that FMCSA should use a single federal standard would have an effect similar to that of the congressional action included in FMCSA’s legislative proposal. Second, FMCSA is pursuing a separate rulemaking effort to modify its enforcement regulations by codifying a single standard into regulation and by adopting expedited procedures for administrative adjudication of chameleon carrier cases. This rulemaking would articulate a standard that would be refined based on subsequent FMCSA decisions. The legislative proposal also includes changes that would increase the fines and penalties FMCSA is legally permitted to impose on carriers for noncompliance so that the penalties are not so low as to be viewed simply as a cost of doing business. For example, the fine for a first violation of the evasion provisions, which now ranges from $200 to $500, would be increased to $2,000 to $5,000, and the fine for subsequent violations, which now ranges from $250 to $2,000, would be increased to $2,500 to $7,500. Other penalties associated with serious safety violations would also be increased. Preventing chameleon motor carriers from operating under a new identity is important because they present significant safety risks to the motoring public and, in the case of for-hire carriers, FMCSA faces constraints in removing them from the road after they have obtained operating authority. 
FMCSA has made strides toward protecting consumers from some of these unscrupulous carriers by vetting for-hire passenger and household goods carriers to identify and deny operating authority to those that may be chameleon carriers. However, these two types of carriers together accounted for only about 2 percent of the new motor carrier population in 2010, leaving the remaining 98 percent unvetted and free to operate before they undergo a new entrant safety audit—a program that provides some opportunity for auditors to identify potential chameleon carriers, but is not primarily designed to do so. Our analysis of FMCSA data found that of the more than 1,100 new motor carrier applicants in 2010 that had chameleon attributes, the vast majority were freight carriers. Given that the number of fatalities is far greater for freight carriers than for passenger carriers, we believe that FMCSA should not exclude freight carriers from its vetting program. Even with the large number of new applicant carriers and constraints on its resources, FMCSA could target the carriers that present the highest risk of becoming chameleons by using a data-driven, risk-based approach. Targeting could reduce the population of carriers to be vetted to a manageable number. FMCSA could choose to apply a data-driven, risk-based approach to all types of carriers, or could limit its use to freight carriers while continuing its current practice of vetting all for-hire passenger and household goods carriers. We believe that our targeting method, which considers both matching on registration information and having a motive to evade detection, provides a sound basis for FMCSA to select new applicant carriers for further investigation. Yet we also recognize that FMCSA will need to periodically evaluate the effectiveness of this approach as officials investigate carriers and learn more about the attributes of chameleon carriers. 
By applying a risk-based approach and expanding the vetting program to include freight carriers, FMCSA would help keep unsafe carriers off the road and reduce the amount of time, effort, and money necessary to investigate and prosecute chameleon carriers at a later date. In addition, FMCSA is not taking full advantage of the new entrant safety assurance program audit to identify potential chameleon carriers, including those that slipped through the vetting program and those that are freight carriers undergoing scrutiny for the first time. While the audit includes a set of questions designed to help auditors identify chameleon carriers, FMCSA’s electronic Field Operations Training Manual lacks guidance on how to use the questions during the audit to distinguish chameleons from legitimate carriers. For example, the guidance should prompt auditors on what types of follow-up questions to ask and what further evidence should be collected based on a carrier’s responses. FMCSA is reviewing the new entrant audit questions, but unless the guidance addresses these aspects, FMCSA lacks assurance that the new entrant auditors can effectively identify chameleon carriers. Absent a single standard for determining corporate successor liability, FMCSA can take months to develop a case to meet both a federal and the applicable state standard in order to prove that the carrier is a chameleon, and subsequently carry out enforcement actions. A federal standard would make the enforcement process consistent across all states, especially in states where FMCSA currently faces difficulties demonstrating corporate successor liability. A federal standard would also discourage carriers from incorporating across state lines to evade detection. 
FMCSA is currently exploring three different avenues for establishing a federal standard: (1) congressional action, (2) monitoring a case that could lead to the establishment of a single federal legal standard for chameleon carrier cases in all states, and (3) rulemaking. We support these efforts and believe establishing a federal standard is important to ensure a more efficient, consistent, and uniform enforcement process. To help FMCSA better identify chameleon carriers through its vetting program, the Secretary of Transportation should direct the FMCSA Administrator to take the following three actions: Develop a data-driven, risk-based vetting methodology that incorporates matching and motive components for targeting carriers with chameleon attributes. Using this new methodology, expand the vetting program as soon as possible to examine all motor carriers with chameleon attributes, including freight carriers. Periodically evaluate the effectiveness of this methodology using the results of investigations and refine as necessary. In addition, to help FMCSA identify chameleon carriers that present safety risks, FMCSA should strengthen the new entrant safety assurance program audit by developing guidance for the questions contained in the electronic Field Operations Training Manual designed to help the new entrant auditor identify chameleon carriers, including (1) how to use the questions to distinguish chameleon carriers from legitimate carriers, (2) what types of follow-up questions to ask, and (3) what evidence to collect. We provided a draft of this report to the Department of Transportation for its review and comment. FMCSA generally concurred with our recommendations. In commenting on a draft of this report, officials provided additional information on how they plan to implement these recommendations, including developing plans to expand the vetting program to include for-hire freight carriers, but did not indicate when they would do so. 
FMCSA had several comments on our methodology for identifying carriers with chameleon attributes. Specifically, officials questioned the inclusion of currently inactive carriers—carriers that never operated or eventually ceased to operate in the motor carrier industry. The purpose of our analysis was to identify carriers that may warrant additional investigation as they apply to enter the motor carrier industry, not to identify the number of chameleon carriers that currently exist. Therefore, it would have been inappropriate to remove inactive carriers from our analysis. Officials also had methodological concerns about (1) using motive to select carriers with chameleon attributes, which could allow some chameleon carriers to go undetected, including those carriers that have consistently evaded FMCSA enforcement actions (i.e., carriers that take on new identities before FMCSA has an opportunity to document safety violations), and (2) including bankruptcy, which is not a safety violation, as one of our six motive criteria. However, as our report indicates, we believe that a risk-based targeting method that includes motives, such as bankruptcy, provides a sound basis for FMCSA to examine those carriers that are more likely than others to be chameleons. Yet we also recognize that FMCSA will need to evaluate the effectiveness of its approach and alter it, as necessary. In its comments, FMCSA agreed with us that using a risk-based approach to expand vetting to freight carriers, such as the one we recommended, would require additional staffing resources. However, they indicated that such an approach would require more resources than the 2-3 staff we mentioned in the report. We believe that developing a risk-based approach to vetting is the first step FMCSA must take before determining the level of resources that may be needed for the vetting team. FMCSA also provided technical corrections, which we have incorporated throughout the report. 
We are sending copies of this report to congressional committees interested in motor carrier safety issues; the Secretary of Transportation; the Administrator of FMCSA; and the Director of the Office of Management and Budget. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix IV. Our objectives were to determine the prevalence of chameleon carriers, how well the Federal Motor Carrier Safety Administration’s (FMCSA) investigative programs are designed to identify suspected chameleon carriers, and what constraints, if any, FMCSA faces in pursuing enforcement actions against suspected chameleon carriers. To identify new applicant carriers with chameleon attributes, we conducted a data analysis that involved two basic steps: (1) comparing registration information submitted by new applicants against that provided by all existing motor carriers and (2) determining whether carriers had a motive for concealing their histories. We obtained this information from several U.S. Department of Transportation databases: the Motor Carrier Management Information System (MCMIS), the Licensing & Insurance system, and the Enforcement Management Information System, as of May 2011. To assess the reliability of these databases, we reviewed documentation on data collection efforts and quality assurance processes, talked with knowledgeable FMCSA officials about these data, and checked the data for completeness and reasonableness. We determined that the data were sufficiently reliable for the purpose of our data analysis. 
We analyzed data for new applicants from January 1, 2005, through December 31, 2010, against data for all carriers that had registered with FMCSA since June 1, 1974. For a detailed technical discussion of the scope and methodology for our data analysis, see appendix II. To determine how FMCSA’s investigative programs are designed to identify chameleon carriers, we reviewed federal motor carrier laws and safety regulations; federal internal control standards; related reports and statements published by GAO, the National Transportation Safety Board (NTSB), and the Department of Transportation’s Office of Inspector General; documentation about FMCSA’s vetting processes and procedures, which FMCSA refers to as the vetting program; FMCSA policy memorandums on the new entrant safety assurance program and the monitoring of potential chameleon new entrant motor carriers; and the Field Operations Training Manual. We also conducted a content analysis of all our interviews to obtain views from federal and state officials on the effectiveness of the vetting and new entrant safety assurance programs. In June 2011, we observed two new entrant safety audits—one in Los Angeles, California, of a new passenger carrier, and the other in Triangle, Virginia, of a new freight carrier. 
To identify the constraints FMCSA faces in pursuing enforcement action against suspected chameleon carriers and how it is addressing them, we reviewed federal motor carrier safety laws and regulations related to FMCSA enforcement actions (Notice of Claims and Notice of Violations); an FMCSA summary of State Successor Liability Case Law (July 2010), which describes corporate successor liability law for all 50 states; two key decisions related to corporate successor liability—the Williamson Transport decisions of January 2009 and July 2010; a multipage corporate successor liability worksheet used to gather evidence against a suspected chameleon carrier; and a legislative proposal provided to congressional reauthorization committees in 2011 that is intended to help address FMCSA constraints. We performed a legal analysis of select case law to determine current FMCSA enforcement constraints. We also interviewed FMCSA counsel to determine how the legislative proposal would help alleviate those constraints. In addition, we reviewed other documentation, including publications and testimonies, to assess how FMCSA is addressing the constraints. To address these objectives, we interviewed FMCSA officials (data analysts, program managers, and counsel) in Washington, D.C.; Field Administrators, attorneys, managers and enforcement staff in all four regional service centers (Eastern, Southern, Midwestern, and Western); and Division Administrators in 10 of FMCSA’s division offices. In the same 10 states where we interviewed FMCSA division officials, we also interviewed law enforcement officials who were directly involved in attempting to identify or in taking enforcement actions against chameleon carriers. We selected these 10 states primarily because they had the largest total number of interstate and hazardous materials intrastate carriers identified in FMCSA’s Analysis and Information Resources database as of May 2011. 
In addition, we considered other factors in selecting these states, including the number of new entrant audits and roadside inspections conducted in fiscal year 2010, the estimated fatality rates per 100 million miles traveled in 2008, the level of participation in the Performance and Registration Information Systems Management and the new entrant safety assurance programs, suggestions made by FMCSA and by industry and safety organizations, and the legal requirements for determining corporate successor liability. Table 4 lists the 10 state agencies we interviewed. To address all three of our reporting objectives, we also interviewed representatives of the following organizations: Advocates for Highway and Auto Safety; Commercial Vehicle Safety Alliance; Motor Carriers Safety Advisory Council; National Private Truck Council; National Transportation Safety Board; Owner-Operator Independent Drivers Association; and United Motorcoach Association. We conducted this performance audit from March 2011 to March 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix contains additional information on our analysis of data on carriers with chameleon attributes. The method presented here is used to demonstrate the feasibility of using data to target carriers with chameleon attributes. We did not conduct additional work to investigate the carriers we identified, or to determine whether our approach is the most effective means to target chameleon carriers. 
FMCSA may wish to consider adjusting several elements of this approach, including our standardization techniques, our match score formula, and the way we assessed motive to become a chameleon carrier. We defined a carrier with chameleon attributes as one that met the following two criteria: 1. Match criterion. The new applicant carrier submitted registration information that matched information for a previously registered carrier. 2. Motive criterion. The old carrier had a motive to become a chameleon, which we defined as a history of safety violations or a bankruptcy filing. To identify new applicants with chameleon carrier attributes, we took two basic steps: (1) compared registration information submitted by new applicants against that provided by all previously registered motor carriers (match criterion) and (2) determined whether the previously registered carriers had a motive for concealing their histories (motive criterion). We used information from the following Department of Transportation databases: MCMIS, the Licensing and Insurance system, and the Enforcement Management Information System, as of May 2011. To create our population of motor carriers that had submitted registration information to the department, we used data from MCMIS to generate a list of all unique U.S. Department of Transportation (USDOT) numbers (i.e., motor carriers) that had ever registered with the Department of Transportation, including the date that these USDOT numbers were added to the database (add date) and the most recent date that the carrier entered the new entrant program (new entry date). 
Because we were interested in demonstrating a method of targeting new applicant carriers as they registered or applied for operating authority, and not specifically in counting the number of chameleon carriers that might currently be operating, we did not attempt to exclude carriers that might be inactive or might have ceased to operate. Therefore, our list of carriers with chameleon attributes likely includes carriers that are no longer operating. We selected a number of data fields on which to compare new carriers to all previously registered carriers. Initially we considered the following fields: carrier name, company officer name, employer identification number (EIN), social security number (SSN), Dun & Bradstreet (D&B) number, phone number (includes all possible comparisons among cell, fax, and main numbers), address (includes physical and mailing), vehicle identification number, vehicle license plate, driver license number, and driver name. Based on conversations with FMCSA officials and an initial analysis of the frequency of matches across these different fields, we selected seven fields that we believe can be used to identify carriers with chameleon attributes: carrier name, company officer name, EIN, SSN, D&B number, phone number, and address. We took several steps to improve the validity of our matches. We standardized values in some fields, including addresses and names. We also excluded records with missing or unusable values on key variables. For example, we excluded records with missing values on any of our match variables (listed earlier). For a number of the variables, we also excluded records consisting of a single character or digit, records with values consisting entirely of zeros or nines, and records with values that would result in matches unrelated to chameleon attributes (e.g., used terms like “unknown,” “none,” and “n/a”). Table 5 provides more details on the standardization and cleaning we conducted. 
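The standardization and exclusion rules described above can be sketched in code. This is an illustrative simplification, not GAO's or FMCSA's actual implementation: the junk-value list and patterns stand in for the detailed rules in table 5.

```python
import re

# Illustrative placeholder terms that would produce matches unrelated to
# chameleon attributes (see table 5 for the rules actually applied).
JUNK_VALUES = {"unknown", "none", "n/a"}

def standardize(value):
    """Uppercase, strip punctuation, and collapse repeated whitespace."""
    value = re.sub(r"[^\w\s]", "", str(value)).upper()
    return re.sub(r"\s+", " ", value).strip()

def usable(raw):
    """Reject missing or unusable values: placeholder terms, single
    characters, and numbers consisting entirely of zeros or nines."""
    if str(raw).strip().lower() in JUNK_VALUES:
        return False
    v = standardize(raw)
    if len(v) <= 1:
        return False
    digits = re.sub(r"\D", "", v)
    if digits and set(digits) in ({"0"}, {"9"}):
        return False
    return True
```

Under these rules, for example, a Social Security number entered as all zeros would be excluded, while two spellings such as "Acme, Trucking Inc." and "ACME TRUCKING INC" would standardize to the same value and therefore match.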
If two carriers had an exact match on at least one of these data fields, we then added them to our list of “carrier match pairs.” Within the pairs, we coded each USDOT number as either a new carrier or an old carrier based on the date that the USDOT number was added to the database. In a number of instances, a new carrier matched more than one old carrier. Because we were interested in identifying new carriers with chameleon attributes, and not in counting the number of older carriers to which they matched, we took just the strongest match for each new carrier and discarded the others. We calculated the strength of each match using a weighting formula through which we assigned different weights to different fields. Our weighting formula was based on (1) conversations with FMCSA and state officials who indicated that certain data field matches were more likely to indicate that a carrier was potentially a chameleon and (2) an evaluation of data fields that carriers matched on. Based on these sources of information, we derived a formula in which the seven data fields were weighted and combined in the following way: Match score = (carrier name × company officer name) + 2(SSN) + 2(EIN) + 2(D&B number) + phone + 0.5(address) In this formula, each of the variables is coded 1 if the two carriers match on the corresponding data field and 0 otherwise. Thus, for example, if a new carrier matched an old carrier on company officer, company name, SSN, and phone, the new carrier would receive a match score of (1 × 1) + 2 + 1 = 4. Alternatively, if a new carrier matched an old carrier on carrier name and address, but not on company officer name (or any other fields), the new carrier would receive a score of (1 × 0) + 0.5 = 0.5. Note that because of how carrier name and company officer name are combined in the formula, neither of these fields counts toward a match unless matches on both fields are present. 
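The weighting formula can be transcribed directly into a small function. The dictionary keys here are hypothetical shorthand for the seven data fields; each flag is 1 if the carrier pair matches on that field and 0 otherwise, as in the formula.

```python
# Transcription of the report's weighting formula. Because carrier name and
# officer name are multiplied, neither counts unless both fields match.
def match_score(m):
    return ((m.get("carrier_name", 0) * m.get("officer_name", 0))
            + 2 * m.get("ssn", 0)
            + 2 * m.get("ein", 0)
            + 2 * m.get("dnb", 0)
            + m.get("phone", 0)
            + 0.5 * m.get("address", 0))

# The report's two worked examples:
# officer + name + SSN + phone -> (1 x 1) + 2 + 1 = 4
# name + address only          -> (1 x 0) + 0.5 = 0.5
```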
After completing our match of registration information, we coded each carrier in the MCMIS universe according to whether it might have a motive to evade detection, which meant having at least one of the following attributes: filed for bankruptcy; involved in a severe crash; fined by FMCSA; or issued an out-of-service order, an imminent hazard order, or an unsatisfactory or unfit rating by FMCSA. We selected these attributes based on discussions with FMCSA officials indicating that they are possible reasons that a carrier might attempt to become a chameleon and are attributes that FMCSA used for creating a list of poorly performing carriers within its new applicant screening algorithm. Because we did not have evidence indicating that any one motive was more likely to result in a carrier becoming a chameleon, we weighted all motives equally. That is, the motive criterion was binary—a carrier either had a motive or did not have a motive. In addition, we counted a carrier as having a motive only if the first appearance of the motive predated the new carrier’s registration with FMCSA. For example, a filing for bankruptcy was counted as a motive only if the old carrier filed for bankruptcy before the new carrier registered. However, we were unable to determine whether a motive, having initially appeared, was still present at the time the new carrier registered. For example, FMCSA may have rescinded an out-of-service order on an old carrier before the new carrier attempted to register, and our data analysis did not specifically exclude these types of cases. We incorporated motive to evade detection into our analysis in three distinct ways. First, we used motive to assess whether the matching component of our analysis was identifying carriers with a reason to be a chameleon, as opposed to carriers with legitimate reasons to reincarnate and carriers with registration information accidentally resembling an older carrier’s. 
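The motive criterion can be sketched as follows, under the assumption that each motive event is stored with the date of its first appearance. The event field names are illustrative, not MCMIS's actual schema.

```python
from datetime import date

# Illustrative names for the six motive attributes described above.
MOTIVE_EVENTS = ["bankruptcy", "severe_crash", "fine",
                 "out_of_service", "imminent_hazard", "unfit_rating"]

def has_motive(old_carrier, new_registration_date):
    """Binary criterion, all motives weighted equally: the old carrier has
    a motive only if at least one motive event first appeared before the
    new carrier registered."""
    first_dates = [old_carrier[event] for event in MOTIVE_EVENTS
                   if old_carrier.get(event) is not None]
    return any(d < new_registration_date for d in first_dates)
```

For example, a 2008 bankruptcy filing counts as a motive for a carrier matching a new 2009 applicant, but not for one matching a 2007 applicant; as the report notes, this sketch likewise would not detect whether the motive (such as an out-of-service order later rescinded) was still present at registration.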
Second, we used motive to select a particular match score threshold to be used in our definition of a carrier with chameleon attributes—that is, a match score (calculated according to the above formula) beyond which we classify a carrier as meeting the match criterion. Finally, as noted earlier, motive was a component, separate from matching, of our definition of a carrier with chameleon attributes. In the following sections, we discuss these three uses of motive. One concern with our approach is that data matching may not give an accurate picture of the total number of chameleon carriers for two reasons. First, data matching could identify carriers that have legitimate business reasons for registering a new company that appears to be related to an older one. Second, similar or even identical registration information may inadvertently be submitted by unrelated companies. In order to address this issue, we used information about whether an older carrier had a motive to evade detection—a feature that we and FMCSA believe indicates that a new carrier is more likely to be a chameleon than a carrier without such a feature. In particular, we looked at the likelihood that an older carrier with a motive would match a new applicant, as compared to the likelihood that an older carrier without a motive would match a new applicant. If the only causes of data matches were carriers that had legitimate business reasons for assuming a new identity and accidental similarities in registration information, then we would expect older carriers with a motive to be no more likely to match new applicants than older carriers without a motive. However, if matches do occur because of chameleons registering, then we would expect older carriers with a motive to be more likely to match new applicants than older carriers without a motive. 
We formalize this reasoning with a ratio, R: for a given match score threshold, R is the proportion of older carriers with a motive that matched a new applicant, divided by the proportion of older carriers without a motive that matched a new applicant. Using this ratio in conjunction with several different match score thresholds, we found that the difference in the likelihood of a match for carriers with a motive and those without depended on the particular match score threshold that was used (see tables 6 and 7). In table 6, the number in the final column, R, can be interpreted as follows: when we used a match score threshold of 1.0 (see the first row of the table), pre-2009 carriers with a motive were 2.1 times more likely to match a new applicant in 2009 than were pre-2009 carriers without a motive. Similarly, when we used a threshold of 1.5, pre-2009 carriers with a motive were 2.6 times more likely to match a new applicant in 2009 than were pre-2009 carriers without a motive. As shown in table 7, we conducted a similar analysis for 2010: As the tables show, the difference in likelihood between carriers with a motive and those without depended on the particular match score threshold that we used. For both 2009 and 2010, we tested a range of match score thresholds (from 1.0 to 2.5), and in all cases carriers with motive were statistically significantly more likely to match a new applicant than were carriers without motive. These results suggest that the matching component of our analysis did not merely detect accidental or benign matches, such as carriers that registered a new company for legitimate business reasons, but rather identified carriers seeking to evade detection. Specifically, if matches occurred only for benign or accidental reasons, then we would expect matching to be no more likely among carriers with a motive than among carriers without. That is, we would expect R to be near 1.0. In fact, we found that older carriers with a motive were roughly twice as likely to match a new applicant in 2009 or 2010 as were older carriers without a motive. 
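The ratio R, and the selection of the threshold with the highest R, can be sketched as follows. The data format is an assumption for illustration (one pair per older carrier: a motive flag and its best match score against any new applicant); the figures behind tables 6 and 7 come from MCMIS, not from this sketch.

```python
# R at a given threshold: the share of older carriers with a motive whose
# best match score reaches the threshold, divided by the same share among
# older carriers without a motive. R near 1.0 suggests only benign or
# accidental matches; R well above 1.0 suggests evasion.
def likelihood_ratio(carriers, threshold):
    """carriers: list of (has_motive, best_match_score) pairs."""
    def match_rate(group):
        matched = sum(1 for _, score in group if score >= threshold)
        return matched / len(group) if group else 0.0
    rate_motive = match_rate([c for c in carriers if c[0]])
    rate_no_motive = match_rate([c for c in carriers if not c[0]])
    return rate_motive / rate_no_motive if rate_no_motive else float("inf")

# The report tested thresholds from 1.0 to 2.5 and selected the one with
# the strongest relationship (1.5 for both 2009 and 2010).
def best_threshold(carriers, thresholds=(1.0, 1.5, 2.0, 2.5)):
    return max(thresholds, key=lambda t: likelihood_ratio(carriers, t))
```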
This suggests that the data-matching component of our analysis was effective in detecting carriers with chameleon attributes and not just carriers with legitimate reasons to assume new identities or accidental similarities to previously registered carriers. While this test demonstrates that our method identified carriers with a motive to evade detection, further investigation would be needed to confirm whether any of the carriers on our list of carriers with chameleon attributes actually are chameleons. Having verified that data matching, as defined in our analysis, was related to motive, we then used motive to select a match score threshold. Our goal was to identify a match score threshold that was high enough to avoid capturing many “false alarms”—that is, matches that occur for accidental or benign reasons—and yet low enough so that our matching criterion was not overly restrictive. To identify such a match score threshold, we tested several different thresholds to identify the one with the strongest relationship between whether an older carrier had a motive and whether it matched a new applicant in 2009 or 2010. As the tables above show, the highest value of R occurred at a threshold of 1.5 for both 2009 and 2010. Based on this analysis, we selected a 1.5 match score as the optimal threshold. That is, the degree of match between the two carriers’ registration information had to exceed the defined threshold of 1.5 for the new carrier to be classified as having chameleon attributes. Having used motive to refine the matching component of our definition, we also used motive as a second component, in its own right, of a carrier with chameleon attributes. Only if a carrier met both the match criterion and the motive criterion was it classified as a carrier with chameleon attributes. To determine carrier type—freight, passenger, household goods, or multiple—we requested guidance from FMCSA officials. 
Following this guidance, we took the following steps: (1) identified carrier types (freight, passenger, household goods, and multiple operating authorities) for for- hire carriers using the demo carrier summary table in the Licensing and Insurance database; (2) identified private carriers with passenger and household goods operating authorities using the operation classification, cargo classification, and carrier equipment tables in the MCMIS database; and (3) classified the remaining carriers as private freight carriers. For some parts of our analysis, we combined for-hire and private carriers to yield four categories: passenger, household goods, freight, and multiple (where “multiple” included any combination of passenger, household goods, and freight). FMCSA takes a series of steps to investigate whether a new carrier is a chameleon—or in legal terminology, the corporate successor of a previous carrier that assumed a new identity to evade detection by the agency. Once FMCSA identifies a carrier as a potential chameleon carrier that was either ordered out-of-service or had enforcement action taken against it, FMCSA must demonstrate, by law, that the new carrier is the “corporate successor” of the old carrier in order for the liability of the old entity to attach to the new carrier. This linkage allows FMCSA to deny or revoke operating authority or take enforcement action against the new carrier. The traditional common law rule of corporate successor liability states that a corporation that acquires all or part of the assets of another corporation does not acquire the liabilities and debts of the predecessor. However, there are four traditional and widely accepted exceptions to this rule. The majority of states follow the traditional rule for successor liability, subject to the four traditional exceptions. There is also a federal rule used to determine corporate successorship. See, e.g., Bud Antle v. Eastern Foods, 758 F.2d 1451, 1456 (11th Cir. 1985); Mozingo v. 
Correct Mfg. Co., 752 F.2d 168, 174 (5th Cir. 1985). In Williamson Transport Co., Inc., Docket No. FMCSA-2004-17247 (March 2006), the Assistant Administrator initially found that the federal “substantial continuity” standard was not the proper test for determining motor carrier successor liability, and that state law should have been used instead. However, in response to a petition for reconsideration, the Assistant Administrator found that “it is not necessary in this case to determine whether the standard…should be the traditional common law, the particular state law, or the federal doctrine of ‘substantial continuity,’” because the claimant (FMCSA) did not succeed under any standard. This decision left an open question as to which standard FMCSA should use to determine motor carrier successor liability. The Federal Standard. The federal doctrine of “substantial continuity” is an eight-pronged, judicially created test that attaches liability to a successor company if it (1) retains the same employees, (2) retains the same supervisory personnel, (3) retains the same production facilities in the same location, (4) continues producing the same products, (5) retains the same name, (6) maintains continuity of assets, (7) maintains continuity of general business operations, or (8) holds itself out to the public as a continuation of the previous corporation. Not all of these prongs need to apply in a given case; rather, these are the different factors that are weighed equally in determining whether “substantial continuity” is established. State Standards. State corporate successor liability laws vary from state to state, based either on case law within the state or, in some instances, state legislation. Most jurisdictions recognize the traditional rule for successor liability, also referred to as the common law rule, as their state standard. 
This rule states that a corporation that acquires all or part of the assets of another corporation does not acquire the liabilities and debts of the predecessor, subject to several exceptions. These jurisdictions also recognize the four traditional exceptions: 1. The purchasing company explicitly or implicitly agrees to assume the debts or liabilities of the seller. 2. The transaction amounts to a consolidation or merger (or “de facto merger”). 3. The successor entity is a mere continuation of the predecessor entity (“mere continuation”). In most states, the key elements of mere continuation are a common identity of the officers, directors, and stockholders between the predecessor and successor. This exception is aimed at owners or directors who may dissolve one company and begin another to avoid debts and liabilities. 4. The transaction was entered into fraudulently in order to escape liability. See, e.g., Bud Antle, Inc., 758 F.2d at 1456; Travis v. Harris Corp., 565 F.2d 443, 447 (7th Cir. 1977); Leannais v. Cincinnati, Inc., 565 F.2d 437, 439 (7th Cir. 1977); Ray v. Alad Corporation, 560 P.2d 3, 7 (Cal. 1977). These exceptions delineate elements that must be met in order for the exception to apply and for liability to attach to the new corporation. FMCSA officials have told us that the agency typically uses the “mere continuation” theory to attach liability to the successor carrier, but other theories (such as “de facto merger” or “fraud”) may be used. As noted previously, most jurisdictions follow the traditional principle of successor liability along with the four traditional exceptions. For example, Florida, Georgia, Illinois, New York, and North Carolina have adopted the traditional rule of successor liability and the four traditional exceptions. In addition, a limited number of states have adopted a nontraditional exception, the “continuity of enterprise” exception. 
See, e.g., Amjad Munim, 648 So.2d at 151 (recognizing that Florida follows the vast majority of jurisdictions in honoring the traditional rule of corporate successor liability); Vernon v. Schuster, 179 Ill.2d 338, 345 (Ill. 1997) (stating that the traditional rule, along with the four exceptions, is recognized in the majority of American jurisdictions). The “continuity of enterprise” exception uses factors similar to those used in the federal “substantial continuity” standard. Factors other than the traditional ones that are typically taken into account under this exception are (1) retention of the same employees, (2) retention of the same supervisory personnel, (3) retention of the same production facilities in the same physical location, (4) production of the same product, (5) retention of the same name, (6) continuity of assets, (7) continuity of general business operations, and (8) whether the successor holds itself out as the continuation of the previous enterprise. In addition, there are states that have enacted legislation in place of traditional common law rules and exceptions. For example, Texas has enacted a statutory provision overriding the traditional rules and exceptions. Under Texas law, an acquiring entity may not be held responsible or liable for any obligations or liabilities of the transferring domestic entity unless they are expressly assumed by the person. Table 8 provides a list of the successor liability laws in the 10 states we examined. Tex. Bus. Orgs. Code Ann. § 10.254(b). See Ford Bacon & Davis, L.L.C. v. Travelers Insurance Co., 635 F.3d 734 (5th Cir. 2011) (applying revised Texas statute in refusing to adopt product line exception). See also C.M. Asfahl Agency v. Tensor, Inc., 135 S.W.3d 768 (Tex. App. 2004) (applying statute and finding no liability because it was not expressly assumed by the successor). 
As stated previously, Texas has adopted a statute that holds an acquiring entity liable for the obligations or liabilities of the transferring domestic entity only when they are expressly assumed. Tex. Bus. Orgs. Code Ann. § 10.254(b).

In addition to the individual named above, H. Brandon Haller (Assistant Director), Russ Burnett, Lauren Calhoun, Matt Cook, Bess Eisenstadt, Colin Fallon, David Hooper, Cathy Hurley, Steve Martinez, Anh Nguyen, and Josh Ormond made key contributions to this report.
The Federal Motor Carrier Safety Administration’s (FMCSA) mission is to ensure motor carriers operate safely in interstate commerce. FMCSA partners with state agencies to conduct a variety of motor carrier oversight activities, which are carried out by certified auditors, inspectors, and investigators. Some motor carriers have registered under a new identity and begun to operate in interstate commerce, violating federal law in an effort to disguise their former identity and evade detection by FMCSA. Such carriers are known as chameleon carriers. GAO’s objectives were to examine (1) the prevalence of chameleon carriers; (2) how well FMCSA’s investigative programs are designed to identify suspected chameleon carriers; and (3) what constraints, if any, FMCSA faces in pursuing enforcement actions against suspected chameleon carriers. To address these objectives, GAO analyzed data on new applicants; reviewed investigative program guidance, federal motor carrier laws and regulations, GAO and other reports, and selected state corporate successor liability laws; observed two new entrant safety audits; and interviewed FMCSA headquarters and field officials, state officials—including law enforcement agencies—and motor carrier stakeholders. FMCSA does not determine the total number of chameleon carriers within the motor carrier industry. Such a determination would require FMCSA to investigate each of the tens of thousands of new applicants that register annually and then complete a legal process for some of these suspected chameleon carriers, an effort for which FMCSA does not have sufficient resources. Rather, FMCSA’s attempt to identify chameleon carriers among new applicants, referred to as the vetting program, is limited to bus companies (passenger carriers) and movers (household goods carriers). 
These two relatively small groups, representing only 2 percent of all new applicants in 2010, were selected because they present consumer protection and relatively high safety risks. Through the vetting program, FMCSA conducts electronic matching of applicant registration data against data on existing carriers and investigates each application from these two small groups, but does not determine whether all other new applicants, including freight carriers, may be attempting to assume a new identity. Federal internal control standards direct agencies to assess the risks they face to determine the most effective allocation of federal resources, including how best to distribute resources for activities such as investigations and enforcement. GAO demonstrated how analysis of registration data can be used to assess risk by targeting all new applicant carriers that have attributes similar to those of chameleon carriers—for example, company registration data that match data for another carrier with a history of safety violations. Using FMCSA data, GAO found an increasing number of carriers with chameleon attributes, from 759 in 2005 to 1,136 in 2010. GAO also found that 18 percent of the applicants with chameleon attributes were involved in severe crashes compared with 6 percent of new applicants without chameleon attributes. FMCSA’s investigative programs—the vetting and new entrant safety assurance programs—are not well designed to identify suspected chameleon carriers. The vetting program assesses all passenger and household goods carriers applying for operating authority, but it does not cover other groups of carriers, including freight truck carriers, which represented 98 percent of all new motor carrier applicants in 2010 and were more likely to be involved in fatal crashes than passenger carriers. 
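The attribute-matching analysis described above can be illustrated with a minimal sketch. The code below is a hypothetical example, not FMCSA's or GAO's actual system: it flags new applicants whose registration data (for example, address, phone, or company officer) match those of an existing carrier with a history of safety violations. All field names and records are illustrative assumptions.

```python
# Hypothetical sketch of chameleon-attribute matching (not FMCSA's actual
# vetting system). Field names ("usdot", "address", "phone", "officer",
# "violation_history") are illustrative assumptions.

def flag_chameleon_attributes(new_applicants, existing_carriers,
                              match_fields=("address", "phone", "officer")):
    """Return (applicant ID, matching carrier IDs) pairs for applicants
    whose registration data match a carrier with safety violations."""
    # Index carriers that have a violation history by each matchable value.
    index = {}
    for carrier in existing_carriers:
        if not carrier.get("violation_history"):
            continue
        for field in match_fields:
            value = carrier.get(field)
            if value:
                index.setdefault((field, value), []).append(carrier["usdot"])

    flagged = []
    for applicant in new_applicants:
        matches = set()
        for field in match_fields:
            value = applicant.get(field)
            if value:
                matches.update(index.get((field, value), []))
        if matches:
            flagged.append((applicant["usdot"], sorted(matches)))
    return flagged
```

Under this sketch, an applicant registered at the same address as a carrier that accumulated safety violations would be flagged for further review; a production system would need fuzzier matching (name normalization, address standardization) than this exact-match illustration.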
The new entrant safety assurance program—which involves a safety audit for all new entrants, including freight carriers—entails a brief assessment of whether a carrier may be a chameleon carrier, but is primarily designed to educate new entrants about federal motor carrier safety regulations. The safety audit includes questions to elicit information on connections between new and previous carriers, but auditors lack necessary guidance on how to interpret the responses to distinguish chameleon carriers from legitimate carriers. FMCSA faces several constraints in pursuing enforcement actions against suspected chameleon carriers. For example, as a result of a 2010 decision by an FMCSA Assistant Administrator, it is unclear whether FMCSA should use a state or a federal legal standard to demonstrate that a carrier is a chameleon. Thus, evidence is gathered to meet both a state and a federal legal standard, which can lead to differing enforcement actions across states and has increased the time necessary to pursue chameleon carrier cases. FMCSA is pursuing several options to achieve a single standard, including providing input to Congress on a legislative proposal, monitoring chameleon carrier cases that could clarify the 2010 decision, and pursuing a separate rulemaking. Other constraints on FMCSA enforcement actions include a resource-intensive legal process, the inability to preclude carriers from obtaining multiple registration numbers, and low maximum fines. GAO recommends that FMCSA expand the vetting program using a data-driven approach and provide guidance to improve the new entrant program. FMCSA generally concurred with GAO's recommendations.
In 2005, Hurricane Katrina dramatically illustrated the adverse consequences that can occur when the nation is unprepared to respond effectively to a catastrophic disaster. Emergency preparedness strengthens the nation’s ability to prevent, protect, respond to, and recover from a natural disaster, terrorist attack, or other man-made disaster. It has received widespread attention and support from Congress, the President, and the Secretary of Homeland Security as manifested by legislation, presidential directives, the development of DHS policy documents, and grants to state and local governments. The lessons learned from the terrorist attacks of 9/11 and Hurricane Katrina focused attention on the need for preparedness programs that could (1) guide decisions on how to improve policies and plans that define roles and responsibilities across the broad spectrum of governmental and nongovernmental organizations involved in prevention, protection, response, and recovery activities and (2) help managers prioritize the use of finite resources to narrow gaps in needed capabilities. FEMA—a component in DHS—is the federal agency responsible for leading the nation’s preparedness activities. In December 2003, the President issued guidance that called on the Secretary of Homeland Security to carry out and coordinate preparedness activities with public, private, and nonprofit organizations involved in such activities. In the wake of the problems that marked the response to Hurricane Katrina in 2005, Congress passed the Post-Katrina Act in October 2006. 
The act strengthened FEMA’s role within DHS and defined FEMA’s primary mission as: “to reduce the loss of life and property and protect the Nation from all hazards, including natural disasters, acts of terrorism, and other man-made disasters, by leading and supporting the nation in a risk-based, comprehensive emergency management system of preparedness, protection, response, recovery, and mitigation.” The act required FEMA to establish a national preparedness system for ensuring that the nation has the ability to deal with all hazards, including those incidents with catastrophic consequences. Among other things, the act directs FEMA to provide funding, training, exercises, technical assistance, planning, and other assistance to build tribal, local, state, regional, and national capabilities (including communications capabilities) necessary to respond to any type of disaster. It also requires FEMA to develop and coordinate the implementation of a risk-based all-hazards strategy for preparedness that builds those common capabilities necessary to any type of disaster, while also building the unique capabilities necessary to respond to specific types of incidents that pose the greatest risk to the United States. The act includes a number of other specific requirements including the development of quantifiable performance measurements to support each component of the national preparedness system, such as capabilities assessment, training, and exercises. The system provides a basis for improvements in policies, plans, and capabilities that aim to save lives and protect and preserve property. DHS has defined national preparedness as a continuous cycle that involves four main elements: (1) policy and doctrine, (2) planning and resource allocation, (3) training and exercises, and (4) an assessment of capabilities and reporting. The following is a brief description of each element of the system. 
Policy: This element involves ongoing management and maintenance of national policy and doctrine for operations and preparedness. Disaster response is primarily handled by local or tribal governments with the state and federal governments and private and nonprofit sectors playing supporting and ad hoc roles, respectively, as needed or as requested. One of the lessons learned from Hurricane Katrina is that a lack of clarity regarding roles and responsibilities across these levels of government and sectors can result in a less coordinated national response and delay the nation’s ability to provide life-saving support when needed. Broadly speaking, FEMA’s role, in cooperation with other federal and nonfederal entities, is to define the roles and responsibilities for all response stakeholders so that each understands how it supports the broader national response. This approach calls for a national response based on partnerships at and across all levels. Planning and resource allocation: This element involves application of common planning processes and tools by government officials, working with the private sector, nongovernmental organizations, and individual citizens to identify requirements, allocate resources, and build and maintain coordinated capabilities that are prioritized based upon risk. Among other things, this element involves developing planning processes so that roles and responsibilities of stakeholders are clearly defined for specific homeland security scenarios, such as a hurricane or a terrorist attack involving nuclear or radiological weapons. Training and exercises: Exercises provide opportunities to test plans and improve proficiency in a risk-free environment. Exercises assess the adequacy of capabilities as well as the clarity of established roles and responsibilities. Short of performance in actual operations, exercise activities provide the best means to evaluate returns on homeland security investments. 
The Post-Katrina Act requires FEMA to carry out a National Training Program and National Exercise Program. On January 26, 2007, the National Security Council and the HSC approved the establishment of a new iteration of the National Exercise Program to conduct exercises to help senior federal government officials prepare for catastrophic crises ranging from terrorism to natural disasters. Well-designed and executed exercises can improve interagency coordination and communications, highlight capability gaps, and identify opportunities for improvement. Tracking corrective actions resulting from exercises is a key step in the process. Assessing capabilities and reporting: According to the Post-Katrina Act, FEMA is required to develop a comprehensive assessment system to assess the nation's prevention capabilities and overall preparedness. A key part of the system involves the development of quantifiable standards and metrics—called target capabilities—that can be used to assess existing capability levels compared with target capability levels. The act requires FEMA to include the results of its comprehensive assessments in an annual Federal Preparedness Report to Congress. To assist in this effort, FEMA is to receive annual State Preparedness Reports from all 50 states, the District of Columbia, and 5 territories that receive DHS preparedness assistance. FEMA's National Preparedness Directorate has primary responsibility for carrying out the key elements of the national preparedness system, in coordination with other federal, state, local, tribal, nonprofit, and private-sector organizations. The directorate includes the National Integration Center and the Office of Preparedness Policy, Planning, and Analysis (PPPA). The National Preparedness Directorate and FEMA's Disaster Operations Directorate share responsibility for ensuring plans that describe roles and responsibilities are developed. 
FEMA’s National Exercise Division—a division of the National Integration Center—leads exercise activities. Finally, PPPA is responsible for assessing capabilities. See figure 2 for an organizational chart of select FEMA components involved in preparedness programs. FEMA faces two main challenges in developing and integrating the elements of the national preparedness system. For example, see GAO, National Response Framework: FEMA Needs Policies and Procedures to Better Integrate Non-Federal Stakeholders in the Revision Process, GAO-08-768 (Washington, D.C.: June 11, 2008) and Voluntary Organizations: FEMA Should More Fully Assess Organization’s Mass Care Capabilities and Update the Red Cross Role in Catastrophic Events, GAO-08-823 (Washington, D.C.: Sept. 18, 2008). See Homeland Security Advisory Council, Top Ten Challenges Facing The Next Secretary of Homeland Security (Washington, D.C.: Sept. 11, 2008). Among other things, this report determined that the work of strengthening disaster response capabilities is incomplete, in part, because DHS will need to ensure involvement of homeland security partners in building a bottom-up approach of organization and response as it establishes national planning efforts. The National Council on Disability is an independent federal agency and is composed of 15 members appointed by the President. It provides advice to the President, Congress, and executive branch agencies to promote policies, programs, practices, and procedures that guarantee equal opportunity for all individuals with disabilities, regardless of the nature or severity of the disability and to empower individuals with disabilities to achieve economic self-sufficiency, independent living, and inclusion and integration into all aspects of society. involves these preparedness stakeholders in plans that define roles a responsibilities, exercises, and assessments of capabilities. 6 U.S.C. § 315. 
As of March 2008, half of the staff positions in the PPPA office were vacant, and these positions were not filled until November 2008, according to FEMA officials. In addition, FEMA did not hire a permanent director for the PPPA office until October 2008. Reorganization of the National Exercise Program also brought about staffing changes, according to FEMA officials. Among other things, in the fall of 2008 FEMA hired a new director for the National Exercise Division—a position that had been vacant for several months, according to FEMA officials. Figure 3 illustrates the organizational changes related to the National Exercise Program. Defining roles and responsibilities is a key step in developing the national preparedness system. While most key policies that define roles and responsibilities have been completed, 68 percent (49 of 72) of plans that operationalize such policies have not been completed. Lessons learned from Hurricane Katrina and emergency response exercises demonstrated the need for the development of complete policies and plans to address potential problems with stakeholders not understanding their roles and responsibilities in response to a catastrophic disaster. Although best practices for program management state that a program management plan is an essential tool for implementing a program, FEMA, in coordination with DHS and other federal entities, has not yet fully developed such a plan to help ensure the development and integration of policies and plans that define roles and responsibilities and planning processes for emergency response. Legislation and presidential directives call for the development of policies and plans that define roles and responsibilities, which is key to FEMA's ability to develop the preparedness system. 
For example, the Homeland Security Act of 2002, as amended by the Post-Katrina Act, requires FEMA to consolidate existing federal government emergency response plans into a single, coordinated National Response Plan, now known as the National Response Framework. To develop the preparedness system, FEMA is to partner and coordinate with key stakeholders, including other DHS components as well as other federal, state, and local entities. For example, the Post-Katrina Act requires the FEMA Administrator, under the leadership of the Secretary of Homeland Security, to coordinate with other agencies and offices in DHS to take full advantage of their range of available resources. In addition, the Post-Katrina Act requires FEMA to coordinate with other federal departments and agencies and the National Advisory Council to develop and refine key national policy documents that broadly define roles and responsibilities, including the National Incident Management System (NIMS) and the NRF. Plans developed using established planning processes operationalize policy documents, such as the NRF, by providing additional details on the roles and responsibilities for each individual and organization that may be involved in responding to high-risk or catastrophic incidents. For example, during the planning process, FEMA may need to coordinate with other federal departments and agencies—such as the Department of Defense (DOD), Federal Bureau of Investigation, and other components in DHS—to define roles and responsibilities for responding to a chemical attack. Per the Post-Katrina Act, each federal agency with responsibilities under the NRF is responsible for developing its own operational plans. 
The act further requires the President to certify to selected committees of Congress on an annual basis that each federal agency has complied with statutory requirements in the development of its operational plan, including a requirement that the plan “be coordinated under a unified system with a common terminology, approach, and framework.” Although the Post-Katrina Act does not charge FEMA with developing or certifying the federal agency plans, FEMA is statutorily responsible for the basic architecture of the national preparedness system, in coordination with other federal departments and agencies, among others. This principle of integration and coordination is also embodied in Homeland Security Presidential Directive 8 Annex 1 (HSPD 8 Annex 1), which tasked DHS with developing a national planning system to develop and integrate plans that define preparedness roles and responsibilities, both horizontally across federal departments and agencies (i.e., integration of plans that have been developed by more than one federal department or agency) and vertically with state and local emergency response plans (i.e., integration of plans that have been developed by more than one level of government). Thus, the federal preparedness framework depends on DHS’s—and, in particular, FEMA’s—ability to coordinate with its federal department and agency partners to ensure that their plans are integrated in a way that avoids duplication of effort and confusion during an interagency response. DHS, FEMA, and other federal entities with a role in national preparedness have completed most of the key policies that broadly define roles and responsibilities and planning processes for developing more detailed emergency plans. Among the 50 policies that define roles and responsibilities or planning processes, 42 have been completed, 2 have been partially completed, and the remaining 6 are incomplete. 
A more detailed breakdown of the 50 policies shows that 46 define roles and responsibilities and 4 define planning processes for developing emergency plans. Forty of the 46 policies that define roles and responsibilities have been completed. Among the policies that have been completed, for example, DHS issued the revised NIMS in December 2008 to further clarify roles and responsibilities when multiagency, intergovernmental entities are involved in a response and to address, in part, the confusion about roles and responsibilities that resulted in a poor response to Hurricane Katrina. Key components of the NRF have also been completed, including the base NRF document, 15 Emergency Support Function Annexes, and 8 Support Annexes. However, other components of the NRF have not been completed, such as the 4 Partner Guides that are to provide abbreviated descriptions of the key roles and responsibilities of specific federal, state, local, and private sector and nongovernmental stakeholders under the NRF. Also uncompleted is the development of new Joint Field Office (JFO) guidance under the NRF which, according to FEMA officials, is to provide functional guidance for the organization and staffing of JFOs, as well as their establishment, operation, and demobilization. In addition, DHS has not completed the National Homeland Security Plan, which is to serve as an overarching strategic plan to guide national efforts to execute the National Strategy for Homeland Security. For details on, and the status of, all 46 policies that are to define roles and responsibilities, see table 4 in appendix II. In addition to the 46 policies that define roles and responsibilities, 4 other policies are to define planning processes for developing emergency plans. Two of these 4 policies have been completed. According to officials, FEMA issued planning guidance for its operational planners in March 2009 and will update this guidance each fiscal year. 
The other 2 policies that define planning processes have been partially completed—1 has been issued as an interim policy (Comprehensive Preparedness Guide 301) while the other has been drafted (Integrated Planning System) and is being used by federal interagency incident management planners although it has not been publicly released. Additional details describing these 4 policies and their status are included in table 5 in appendix II. While DHS, FEMA, and other federal entities with a role in national preparedness have taken action to develop and complete some plans that detail and operationalize roles and responsibilities for federal and nonfederal entities, these entities have not completed 68 percent of the plans required by existing legislation, presidential directives, and policy documents as of April 2009. Specifically, of the 72 plans we identified, 20 have been completed (28 percent), 3 have been partially completed (that is, an interim or draft plan has been produced—4 percent), and 49 (68 percent) have not been completed. Detailed plans supplement and operationalize key policy documents. Among the plans that have been completed, FEMA published the Pre-Scripted Mission Assignment Catalog in 2008, which defines roles and responsibilities for 236 mission assignment activities to be performed by federal government entities, at the direction of FEMA, to aid state and local jurisdictions during a response to a major disaster or an emergency. See table 6 in appendix II for additional details on the other 19 plans that have been completed. One of the three plans that FEMA has partially completed is the Federal Contingency Plan—New Madrid Seismic Zone Catastrophic Earthquake. This plan addresses major issues the federal government expects to encounter if a catastrophic earthquake occurs in the New Madrid Seismic Zone with no warning. FEMA published this plan in an interim form in June 2008 and intends to finalize it by May 2010. 
While FEMA has engaged in significant planning efforts regarding threats that are specific to certain regions, such as hurricanes and earthquakes, through its Catastrophic Disaster Planning Initiative, those planning efforts are ongoing and have not been concluded. See table 6 in appendix II for additional details on the other two plans that have been partially completed. Among the 49 plans that have not been completed are the NRF incident annexes for terrorism and cyber incidents as well as the NRF incident annex supplements for catastrophic disasters and mass evacuations. The NRF incident annexes and incident annex supplements are to address the roles and responsibilities and unique aspects of how the United States responds to broad incident types. In addition, operational plans for responding to the consolidated national planning scenarios, as called for in HSPD 8 Annex 1, remain outstanding. For additional details and the status of each of these plans, as well as other plans that are to define roles and responsibilities, see table 6 in appendix II. Developing plans to operationalize policies that define roles and responsibilities is one key to an effective response. According to DHS, effective response hinges upon well-trained leaders and responders who have invested in response preparedness, developed engaged partnerships, and are able to achieve shared objectives. Until outstanding policies and plans, especially those that are new, are completed, FEMA, in coordination with DHS and other federal departments and agencies, cannot provide associated training on such policies and plans, and, relatedly, cannot validate the new policies, plans, and training though exercises—the next step in the national preparedness system cycle. According to FEMA, certain plans that have yet to be completed are refinements of existing plans, and training has been provided on the existing plan. 
However, such existing training will need to be adapted and modified to enable response stakeholders to be trained on the revised plans. Incomplete policies and plans, especially those that are new, and the resulting lack of associated training and validation through exercises, increase the risk that response to an incident may be disjointed, delayed, or ineffective because stakeholders may not understand their roles and responsibilities. The issue of completing emergency policies and plans that define roles and responsibilities is not new. Prior to Hurricane Katrina, based on a summary of lessons learned from exercises conducted in fiscal year 2005, DHS determined that plans for specific incidents were potentially needed to clarify how the National Response Plan (now the NRF) would be implemented under several types of domestic scenarios, such as the threat of an improvised nuclear device detonation, large-scale biological events, and suicide bombings. In February 2006, the White House issued its report on lessons learned from the response to Hurricane Katrina and concluded that two of four critical deficiencies in the response involved policies and plans, or the lack thereof, that were to detail roles and responsibilities. Among other things, the report noted that federal departments and agencies were required to develop supporting operational plans and standard operating procedures for national response activities, but in almost all cases, these required plans and procedures were either nonexistent or still under development. In addition, the report stated that additional structural deficiencies in the national preparedness system included weak regional response planning and coordination structures. 
Further, in our September 2006 report on Hurricane Katrina, we recommended that FEMA develop detailed and robust operational implementation plans for the National Response Plan (now NRF) and its Catastrophic Incident Annex and Supplement in preparation for and response to future catastrophic disasters. In addition, in October 2006, the Post-Katrina Act required FEMA to develop prescripted mission assignments and each federal agency with NRF responsibilities to develop operational plans and corresponding capabilities in support of the NRF to ensure a coordinated federal response. In December 2007, the President signed HSPD 8 Annex 1, which called for the development of an integrated set of scenario-based response plans, including federal agency operational plans that are also needed to satisfy the Post-Katrina Act. More recently, in September 2008, the Homeland Security Advisory Council (HSAC) for DHS identified the work of strengthening the nation’s disaster response capabilities, which includes developing and publishing outstanding emergency response policies and plans that define roles and responsibilities for national preparedness, as 1 of the top 10 DHS challenges for the incoming presidential administration. In addition to the lessons learned from Hurricane Katrina, recent exercises have demonstrated the ongoing need for the development of complete policies and plans that define roles and responsibilities for national preparedness. For example, we reviewed 16 after-action reports for national or principal level or equivalent exercises that were conducted from 2005 to 2008, and all of the reports (16 of 16) called for further clarification of roles and responsibilities across federal departments and agencies or between federal and nonfederal organizations, as shown in the following examples. 
As a result of a February 2007 exercise that tested a joint response by federal agencies to an attack using improvised explosive devices, HSC determined that the federal government needed to further refine the delineation of roles and responsibilities for the Departments of Defense, Justice, and Homeland Security. The same exercise also identified the need for the federal government to increase outreach to governors in order to achieve a common understanding of federal and state governments’ respective roles and responsibilities during such an incident. Finally, this exercise demonstrated that policies and plans were needed to clearly articulate, prior to an incident occurring, the circumstances in which the use of military assets and resources for civilian response is appropriate. Not addressing these recommended corrective actions, among others, may result in less efficient and effective responses. In a September 2007 exercise, the HSC identified the need for senior federal officials to be rapidly informed of the possible courses of action available to them in responding to a major disaster or an emergency. To do so, HSC asserted that the senior federal officials’ respective organizations should have in place detailed plans to inform and execute the senior federal officials’ decisions, as such plans would serve to better integrate interagency response activities. HSC made this recommendation in September 2007, and it was acted on by the President through the establishment of HSPD 8 Annex 1 in December 2007, more than a year after the Post-Katrina Act called for the development of federal agency operational plans. FEMA has not established a program management plan, in coordination with DHS and other federal entities, to ensure the development and integration of outstanding policies and plans that are to define roles and responsibilities and planning processes. 
The Post-Katrina Act makes the FEMA Administrator, in coordination with other entities, responsible for the development of the national preparedness system. According to the National Preparedness Guidelines, the national preparedness system, in part, consists of the policies and plans that define roles and responsibilities and planning processes for developing emergency plans. Although the Post-Katrina Act requires federal agencies to develop their own operational plans, those plans are to be “coordinated under a unified system with a common terminology, approach, and framework.” This coordination and unification is central to FEMA’s mission as the lead agency in charge of national preparedness, and it requires that the policies and plans that have been called for are developed and integrated so that emergency response roles and responsibilities and planning processes are fully defined and implemented. Best practices for program management, established by the Project Management Institute in The Standard for Program Management, state that managing a program includes, among other things, (1) establishing clear and achievable objectives; (2) balancing the competing demands for quality, scope, time, and cost; and, (3) adapting the specifications, plans, and approach to the different concerns and expectations of the various stakeholders involved in the program’s projects. A key step in managing a program involves developing a program management plan, which is an approved document that defines how a program will be executed, monitored, and controlled. The program management plan defines the tactical means by which the program will be carried out. 
According to The Standard for Program Management, a program management plan should, among other things: identify the specific schedule of activities that need to be performed to complete policy and planning development and identify dependencies among those activities; identify the types and quantities of resources required to perform, and the amount of time needed to complete, all policy and planning development activities; analyze activity sequences, durations, resource requirements, and schedule constraints to create and update the policy and planning project schedules; and control for changes to the project schedules precipitated by outside forces. Because FEMA has not established, in coordination with DHS and other federal entities, such a plan to ensure the development and integration of outstanding policies and plans, it is unclear when the full set of policies and plans will be completed, and FEMA cannot determine whether it or other entities with policy and plan development responsibilities, such as DHS, are on schedule. Further, because FEMA cannot determine whether other entities with policy and plan development responsibilities are on schedule, it cannot determine when and how it will integrate into the national preparedness cycle the range of policies and plans required by legislation, presidential directives, and other policy. Based in part on our work, in February 2009 FEMA officials acknowledged that a program management plan should be established. Without actively using a program management plan, FEMA, in coordination with DHS and other federal entities, may experience unforeseen delays in completing its efforts to develop and integrate these policies and plans that define roles and responsibilities and planning processes. 
While FEMA, in coordination with DHS and other federal entities, may experience unanticipated or uncontrollable delays in developing outstanding policies and plans, it would be better positioned to identify the effect of those delays and assess measures to mitigate them with a program management plan in place and in use. FEMA has developed guidance to implement the National Exercise Program; however, it faces challenges in meeting statutory and program requirements in conducting the program. These challenges have arisen because FEMA lacks procedures that detail how it will work with federal entities and monitor states to ensure these entities carry out program requirements. In addition, FEMA faces challenges in measuring the effectiveness of the program because the databases it uses to measure program performance are incomplete. Exercises are a key element of the national preparedness system. The purpose of the National Exercise Program is to test and improve the nation’s ability to prevent, prepare for, and respond to events such as terrorist attacks and natural and man-made disasters. To meet this purpose, exercises should test existing capabilities against desired, or target, capabilities, as well as verify and validate policies and plans that define roles and responsibilities. Developing and implementing the National Exercise Program is a difficult task because the magnitude of the effort involves coordinating with and relying on the cooperation of other DHS components such as the Coast Guard, numerous federal entities such as the Homeland Security Council (HSC)—which is responsible for coordinating federal interagency homeland security policy—and state governments, among others. This coordination is especially critical at the federal level because FEMA lacks the authority to compel federal agencies to comply with program requirements. At the state level, FEMA is able to use its grant programs to ensure that states follow program guidelines. 
Since 2007, FEMA has taken a number of actions to implement the National Exercise Program. Several of these actions are summarized in table 1. FEMA has also identified four tiers of exercises that make up the National Exercise Program (see fig. 4). Among the exercises that involve federal interagency coordination are Tier I National Level Exercises, which are operations-based exercises that evaluate existing national plans and policies, in concert with federal and nonfederal entities; and Principal Level Exercises, which are discussion-based exercises among senior federal officials that examine emerging issues. Officials at all six states we visited cited actions taken by FEMA to implement the National Exercise Program as positive contributions to their exercise efforts. For example, officials in five of the six states indicated that HSEEP guidance was beneficial because it establishes consistency in how exercises are designed and conducted. State officials who were involved in a National Level Exercise indicated that it not only allowed them to test and validate local emergency response plans, but also gave them the opportunity to meet federal stakeholders as well as first responders from neighboring counties and cities, thus enabling them to establish working relationships before an event occurs. Similarly, exercise planners in New York noted that a benefit of FEMA’s regional training and exercise plan workshop was the positive working relationships developed with FEMA officials. In addition to actions by FEMA, the six states we visited have also taken actions to implement exercise programs (FEMA considers exercises conducted by states to be Tier IV exercises). For example, exercise planners in Washington conducted a half-day senior leadership workshop for federal, state, and local officials in May 2008 that identified a number of issues related, among other things, to intergovernmental communications during an incident. 
Based on the workshop, officials in Washington identified corrective actions that could be taken. For example, they identified the need to increase postdisaster communications to help speed recovery efforts and said they were taking corrective actions to address this issue. In California, state exercise planners involved planners from other departments or other states in their design and planning workshops as a way to share information on exercise design and conduct, thereby increasing the interagency participation in their exercise program efforts. According to the workshop sponsors, participants benefited from the multidisciplinary participation in the conference. In Illinois, the state used real-world events, such as sporting events, to test or “exercise” response by local law enforcement and first responders. State exercise officials said these efforts were an effective supplement to their regular exercise program. While FEMA has taken actions to implement the National Exercise Program, it faces challenges in meeting statutory and program requirements in conducting and measuring the effectiveness of the program. First, FEMA does not have procedures in place to detail how it is to (1) work with other federal entities and (2) monitor states to help ensure that these entities promptly prepare after-action reports and track and implement corrective actions for federal- and state-level exercises. Second, the National Exercise Program’s ability to simulate a catastrophic event to strain, or “stress”, the preparedness system is limited. Third, FEMA lacks data to measure the effectiveness and progress of the National Exercise Program. FEMA must design the National Exercise Program to provide, among other things, for the prompt development of after-action reports and plans for quickly incorporating lessons learned into future operations. 
In addition, the Post-Katrina Act requires FEMA to establish a program to conduct remedial action tracking and long-term trend analyses. According to the implementation plan for the National Exercise Program, after-action reports for Tier I exercises (National Level Exercises or Principal Level Exercises) must be issued within 180 days, or 6 months, after the completion of an exercise, and the release of an after-action report should not be delayed to reach consensus on all issues identified during the exercise. FEMA executes Tier I exercises in coordination with HSC and others. Although the Post-Katrina Act does not give FEMA the authority to compel other federal entities to comply with the objectives of the National Exercise Program, it places responsibility for implementing the National Exercise Program on FEMA, in coordination with other appropriate federal agencies and other entities. Therefore, it is incumbent on FEMA to coordinate with HSC and other federal entities to better ensure that FEMA obtains the information it needs to meet its statutory responsibility to track corrective actions. However, FEMA has not ensured that after-action reports for Tier I exercises are issued promptly, nor has it tracked and documented implementation of corrective actions for such exercises. These challenges occurred, in part, because FEMA has not established procedures that detail how the agency will work with other federal entities to ensure National Exercise Program requirements are met. For example, FEMA conducted a Tier I National Level Exercise—Top Officials 4 (TOPOFF 4)—in October 2007, but as of February 2009, or more than 15 months later, FEMA had not yet issued the after-action report or tracked and implemented corrective actions. When an after-action report on an exercise is delayed and not provided to stakeholders, the “lessons learned” from the exercise diminish in importance, limiting stakeholders’ ability to make improvements in preparedness. 
In February 2009, FEMA officials stated that the draft after-action report for TOPOFF 4 had been written and approved by various departments and agencies, but had not been approved by DHS prior to the change in administration associated with the 56th Presidential Inauguration. According to FEMA officials, a complicating factor in releasing the report is the political sensitivity of information, as those who write after-action reports may face internal pressure not to identify weaknesses in an entity’s emergency preparedness. As a result of these delays, stakeholders may not be able to promptly make improvements in preparedness. FEMA has also had limited success in ensuring that program requirements for the National Exercise Program have been followed for Principal Level Exercises. HSEEP guidance and the implementation plan state that the results of exercises should be documented and corrective action responsibilities assigned and tracked. Specifically, HSEEP guidance calls for organizations to (1) complete an after-action report with an improvement plan and (2) track corrective actions to ensure that they are implemented. Of the four after-action reports issued by HSC between April 2007 and August 2008 that we reviewed, three did not meet program requirements. In one case, the after-action report was drafted but not finalized. In two other cases, the report did not identify officials who were responsible for ensuring corrective actions were implemented. For example, the after-action report for the February 2008 Principal Level Exercise on pandemic influenza stated that the integration of strategic communications and policy remains difficult and should be addressed by an interagency group. However, the after-action report did not identify a department or an agency official who was to be responsible for ensuring that this corrective action was implemented. 
FEMA officials said they relied on the National Exercise Program Implementation Plan as the guiding policy for HSC’s responsibilities for documenting after-action reports and tracking and resolving corrective actions. However, the plan does not describe a procedure for ensuring that these requirements are met by HSC. Based in part on our review of Principal Level Exercises, FEMA has subsequently taken action to identify officials who are responsible for nearly all of the corrective actions outlined in these after-action reports. The implementation plan for the National Exercise Program also requires entities responsible for Tier I exercises, including Principal Level Exercises, to ensure that corrective actions are resolved. However, HSC did not follow the required corrective action process. According to HSC staff, the council does not use the CAP system or another tracking procedure for determining whether corrective actions were implemented. Rather, it delegates the responsibility of taking corrective actions to the appropriate agencies or departments. FEMA officials agreed that tracking corrective actions for Principal Level Exercises is problematic and impairs their ability to fulfill FEMA’s statutory obligation to track corrective actions from exercises. However, they stated that they do not have the authority to direct HSC or other federal entities to track corrective actions and report this information to FEMA. Although the Post-Katrina Act does not give FEMA the authority to compel other federal entities to comply with the objectives of the National Exercise Program, the act places responsibility for implementing the National Exercise Program on FEMA, in coordination with other appropriate federal entities. 
Therefore, it is incumbent on FEMA to coordinate with HSC and other federal entities to better ensure that FEMA obtains the information it needs to meet its statutory responsibility to track corrective actions. In this regard, the implementation plan for the National Exercise Program does not set forth FEMA’s statutory responsibility to track corrective actions, nor does it require federal departments and agencies to report corrective action information to FEMA. Rather, the implementation plan provides that departments and agencies “may submit issues to the DHS Corrective Action Program (DHS CAP) through the web-based DHS CAP system,” but does not instruct them to do so or to otherwise provide FEMA with corrective action tracking information that would enable FEMA to fulfill its statutory responsibilities under the Post-Katrina Act. Therefore, the implementation plan lacks procedures that call on HSC and other federal entities to report corrective action information to FEMA. FEMA’s inability to fully track and analyze areas that need improvement is also due, in part, to its lack of an effective internal control environment. GAO’s Standards for Internal Control in the Federal Government state that an effective internal control environment is a key method to help agency managers achieve program objectives and enhance their ability to address weaknesses. The standards state, among other things, that agencies should have policies and procedures for ensuring that the findings of audits and reviews are promptly resolved. The standards also state that internal controls should generally be designed to assure that ongoing monitoring occurs. Developing procedures for working with federal entities, such as HSC, to help ensure that corrective actions are tracked, implemented, and reported to FEMA would strengthen FEMA’s ability to determine emergency management areas that need improvement. 
Lessons learned from Hurricane Katrina identified similar concerns with tracking corrective actions from exercises. For example, in February 2006 the White House report on Hurricane Katrina stated that “too often, after-action reports for exercises and real-world incidents highlight the same problems that do not get fixed.” According to the report, DHS should ensure that all federal and state entities are accountable for the timely implementation of remedial actions in response to lessons learned. The report also noted that the success of the preparedness system depends, in part, on feedback mechanisms for tracking corrective actions. When federal entities carry out processes that are incompatible with FEMA’s responsibilities for tracking corrective actions, FEMA managers do not have the necessary data to measure progress, identify gaps in preparedness, and track corrective actions—key objectives of the National Exercise Program. Similar to the problems we found with Principal Level Exercises, we identified weaknesses in the way in which selected states prepared and submitted after-action reports to FEMA. Among other things, HSEEP requires that exercise program managers prepare after-action reports that include improvement plans to identify corrective actions, track whether the actions were implemented, and continually monitor and review corrective actions as part of an organizational corrective action program. Exercise program managers are to submit these after-action reports to FEMA through its Secure Portal—a FEMA database containing after-action reports. Of the six states we visited, (1) none systematically recorded or submitted after-action reports to FEMA, (2) only one had improvement plans in all of its after-action reports and only one said it had a corrective action tracking program, and (3) none used a capabilities-based approach in all of their exercises. The following are additional details on each of these issues. 
HSGP guidance requires exercise program managers to submit after-action reports to FEMA through its Secure Portal within 60 days following the completion of an exercise. Although the portal has been operational for about 5 years, FEMA did not have procedures in place to fully monitor state actions and to ensure that this occurred. While 3 of the 44 after-action reports provided by officials from the six states we visited were submitted to FEMA in the requisite area of the portal, these reports all came from one state. The remaining five states did not submit any after-action reports through the portal. Officials from these five states cited technical difficulties, lack of staff resources, or unclear guidance from FEMA as reasons why after-action reports were not submitted to the portal. FEMA is aware that the portal contains incomplete information, noting in its quarterly newsletter that “The Secure Portal serves as the repository for after action reports and improvement plans; however, postings have been inconsistent. At times, it has been difficult to locate After Action Reports as many are posted in draft form and never finalized and posted outside the respective State folder.” In February 2009, FEMA announced the National Exercise Division Exercise Support System, an online tool for facilitating exercise planning that is to replace the Secure Portal as the repository for after-action reports. HSEEP requires that entities include an improvement plan as part of their after-action reports and that they have a corrective action program. While each of the six states we visited had produced at least one draft after-action report that included an improvement plan, only one state included an improvement plan in all of its reports. Fifteen of the 44 after-action reports we reviewed had an improvement plan. In addition, only one state had a corrective action program that tracked whether corrective actions were implemented. 
Officials from one state attributed the lack of a corrective action program to competing priorities. Specifically, states are involved in many exercises, and officials are more likely to place priority on designing and conducting the next exercise than on tracking corrective actions from prior exercises. Another HSEEP requirement calls for exercises to be designed and conducted using a capabilities-based approach. Doing so would help FEMA analyze whether gaps in capability have narrowed and improvements in capabilities have occurred from the use of grant funds by states. However, only 20 of the 44 after-action reports provided by officials in the six states we visited used target capabilities, while the remaining 24 did not. According to officials from three of the six states we visited, not all exercise participants have a good understanding of target capabilities and how they should be used in the design, conduct, and evaluation of exercises. For example, exercise officials from one state said their state does not use target capabilities because it has its own set of assessment standards. FEMA’s lack of procedures for monitoring states to ensure compliance with HSEEP requirements contributed to limited adherence to such requirements in the states we visited. As discussed earlier in this report, internal control standards call for (1) an effective internal control environment to help agency managers achieve program objectives and enhance their ability to address weaknesses, (2) agencies to have policies and procedures for ensuring that the findings of audits and reviews are promptly resolved, and (3) internal controls to generally be designed to assure that ongoing monitoring occurs. 
FEMA officials in the National Preparedness Directorate told us they have a process for monitoring HSEEP compliance by, among other things, having FEMA regional exercise support program managers discuss HSEEP compliance with state exercise program officials at planning conferences or during grant monitoring discussions. While discussing HSEEP requirements at annual conferences may enhance state officials’ awareness of requirements for HSEEP compliance, these discussions do not track compliance. In addition, officials from FEMA’s Grant Programs Directorate said they do not monitor states’ compliance with HSEEP requirements. For example, the grant monitoring reports for the six states we visited did not address whether the states were in compliance with HSEEP requirements. Such reports are based, in part, on a checklist of items that officials use to monitor compliance with grant requirements. However, FEMA’s checklist does not include specific items, such as compliance with HSEEP requirements, as called for by HSGP guidance. States’ noncompliance with HSEEP hinders their ability to systematically track corrective actions and assess capabilities. This in turn limits FEMA’s ability to measure the progress of the National Exercise Program. Having procedures in place to monitor actions by states to ensure compliance with HSEEP requirements would assist FEMA in obtaining more complete data about the results of exercises and corrective actions taken to systematically evaluate readiness through the National Exercise Program, as required by the Post-Katrina Act. The Post-Katrina Act requires FEMA to stress the preparedness system through the National Exercise Program to evaluate preparedness for a catastrophic event. The National Exercise Program Implementation Plan identifies domestic incident management for catastrophic events as the principal focus of the National Exercise Program. 
According to the Tier I exercise cycle established in the implementation plan, FEMA plans to test for a catastrophic domestic nonterrorism event in fiscal year 2010. However, FEMA’s ability to meet this testing requirement is limited by three factors: (1) the lack of key planning documents, (2) exercise artificiality, and (3) limited coordination with groups that have expertise in populations with special needs. First, the effectiveness of exercises is based, in part, on the degree to which plans that define roles and responsibilities have been developed. The fact that key planning documents for response to a catastrophic incident, such as the supplement to the catastrophic incident annex and regional response plans, have not yet been completed means that the National Exercise Program will have difficulty in designing exercises that test whether the plans are understood and executed effectively by stakeholders. (The supplement to the catastrophic incident annex is required by the Post-Katrina Act, 6 U.S.C. § 319(b)(2)(C); while a version of the supplement was written before the act, an updated supplement has yet to be published. Regional plans have been drafted, but according to FEMA officials, the plans are not consistent from one region to another because regions developed them without any guidance from FEMA headquarters.) As we described earlier in this report, while DHS and FEMA are working on these plans, it is unclear when they will complete the plans. According to the former director of FEMA’s National Preparedness Directorate, FEMA’s ability to design and conduct exercises that evaluate a response to a catastrophic incident is limited by the fact that plans such as those described above have yet to be developed. However, the official indicated that FEMA was taking preliminary actions to build its capacity to conduct such exercises through regional catastrophic planning initiatives. 
Second, exercise artificiality limits the degree to which exercises replicate real-world conditions; under the National Exercise Program’s Implementation Plan, participation from other federal stakeholders is not required for Tier II exercises such as the one held in May 2008. Third, another challenge in creating exercises that stress the preparedness system and simulate real-world conditions is finding ways to test response capabilities for populations with special needs. To address some of the problems experienced in Hurricane Katrina in dealing with populations with special needs, such as residents in nursing homes, the Post-Katrina Act, as amended by the 9/11 Act, called on FEMA to design exercises to address the unique requirements of populations with special needs, including the elderly, and to coordinate the National Exercise Program with the National Council on Disability, among other entities. In TOPOFF 4, FEMA integrated specific objectives for special needs populations in the Oregon venue, according to FEMA officials. For example, according to FEMA officials, FEMA used special needs actors to enhance realism. However, HSEEP guidance does not address special needs populations. Further, while FEMA has corresponded with the National Council on Disability, council officials believe that FEMA could do more to ensure that exercises are designed to address the unique requirements of populations with special needs. For example, council officials stated that the council was not involved in the design and planning for TOPOFF 4 or the May 2008 Tier II exercise. According to officials from the National Preparedness Directorate, the directorate coordinated with FEMA's Special Needs Office to integrate special needs population objectives into exercises. FEMA officials agree that special needs populations should be included in exercises, and they said that they will redouble their efforts to do so. 
However, FEMA officials also noted that some exercises, for example, the National Level Exercise planned for July 2009, may not involve special needs populations because the point of such exercises is to prevent a terrorist attack, rather than to test response and recovery efforts. Enhancing coordination with the National Council on Disability could improve FEMA’s ability to ensure that key issues concerning populations with special needs are addressed in the design and conduct of exercises. The limitations the National Exercise Program faces in designing approaches that stress the preparedness system highlight the difficulty in validating whether roles and responsibilities are well understood and whether major gaps in capabilities remain for responding to and recovering from catastrophic events. The 2006 White House report on the federal response to Katrina concluded that the “national preparedness system must be oriented toward greater challenges. We must not shy away from creating scenarios that stress the current system of response to the breaking point. . . . Until we meet the standards set by the most demanding scenarios, we should not consider ourselves adequately prepared.” In 2006, we reported that effective exercises should involve scenarios that stress responders with the highest degree of realism possible, even to the breaking point. Exercises that stress the preparedness system in a realistic way are key to testing the prospective reliability of a response and determining whether plans have accounted for potential breakdowns with relatively greater consequences. In February 2009, we met with FEMA officials to discuss this issue, and they agreed that developing exercises to the point of system failure is a valid objective; however, they described several factors that may limit their ability to do so. 
For example, FEMA officials told us that exercising to the “breaking point” requires significant resources that, under current and likely future funding streams, are unlikely to be available. We agree with FEMA that these are important considerations; however, these considerations are addressed, in part, through the implementation plan for the National Exercise Program, which describes a 5-year schedule of exercises to give federal departments and agencies lead time to budget for participation in such events. The three databases that FEMA uses to measure the effectiveness and progress of the National Exercise Program have incomplete data. FEMA uses (1) the NEXS system to identify DHS-funded exercises, (2) the FEMA Secure Portal as a repository for DHS-funded exercise after-action reports, and (3) the CAP system as a tool for tracking corrective actions. The following provides details on problems with the reliability of each of these databases. FEMA calls on state entities to use the NEXS system to schedule all exercises, and one of the performance measures that FEMA uses to assess and report on the performance of the National Exercise Program is the number of DHS-funded state exercises that occur per year. However, 26 of the 44 after-action reports we reviewed did not have the exercise entered into the NEXS system. Incomplete NEXS system data limit FEMA’s ability to accurately report on the number of DHS-funded exercises. Furthermore, while FEMA created the NEXS system to schedule, synchronize, and avoid conflicts in all national, federal, state, and local exercises, it cannot do so with incomplete data. When we discussed this problem with state and FEMA officials, they agreed that the NEXS system did not contain a comprehensive list of all state and local exercises supported by HSGP funds. For example, when we asked FEMA for a complete list of all exercises to be conducted under the National Exercise Program, the agency could not produce such a list. 
Standards for Internal Control in the Federal Government state that program managers need data to determine whether they are meeting their performance targets and that controls should be designed to validate the integrity of organizational performance measures and indicators. FEMA has initiated actions to validate the accuracy of data used in the NEXS system by using training and exercise plan workshops with states to determine what exercises states have scheduled. We agree that the training and exercise plan workshops are a good starting point for verifying the completeness of NEXS; however, when we attended a training and exercise plan workshop, FEMA officials told us that not all federal agencies or local entities participated and, thus, not all exercises were discussed at the workshop. In addition, FEMA officials recognize that the workshops alone do not ensure that the NEXS database is complete and accurate. In the absence of systematic and comprehensive information on the number of federally funded exercises, FEMA cannot measure its progress in implementing the National Exercise Program. A second database used by FEMA is the Secure Portal—the primary database that FEMA uses to measure the degree to which states comply with HSEEP. Even though FEMA requires state exercise program managers to place their after-action reports in the Secure Portal when federal grant funds are used to support the exercise, this requirement was not completely met by any of the six states we visited. In addition, although state and FEMA officials agreed that the Secure Portal does not contain all state exercise after-action reports, FEMA uses information from the portal to assess and report on the performance of the National Exercise Program, including the measure that FEMA uses to assess the percentage of DHS-funded exercises demonstrating the use of HSEEP guidance. 
Since the Secure Portal contains incomplete information and neither FEMA nor the six states we visited have controls to ensure that all state-level exercise after-action reports are uploaded to the Secure Portal, this measure may not accurately reflect the percent of DHS-funded exercises that demonstrate the use of HSEEP guidance. Third, although federal entities involved in Tier 1 exercises are encouraged to use FEMA’s CAP system, it does not contain all corrective actions from such entities. According to FEMA officials, the CAP system was designed to capture all relevant and necessary information related to the implementation of corrective actions. However, the HSC did not use the CAP system to track corrective actions. The problem with having incomplete data in the CAP system is that FEMA uses information from the system to measure the percentage of corrective actions that have been implemented as one of its performance measures. A key reason for this problem is that FEMA has not established procedures to ensure that the information in the CAP system is complete. For example, the implementation plan does not require the nine federal departments and agencies that are signatories to the implementation plan to use the CAP system. Instead, FEMA strongly urges stakeholders to use the system, but the decision to do so is discretionary. FEMA officials cited the tension between requiring entities to use the CAP system and providing enough flexibility to those entities to carry out their programs as a reason for not making the use of the CAP system a requirement. Nonetheless, entities could submit a report to FEMA on the status of their corrective actions resulting from such exercises. Finally, the CAP system does not include corrective actions from real-world incidents and FEMA has not established requirements or guidelines for agencies to do so.
As a result, FEMA is unable to meet Post-Katrina Act requirements for conducting long-term trend analyses of corrective actions that include real-world events. The Presidential Inaugural Ceremony held in Washington, D.C., on January 20, 2009, provides an example of the importance of tracking corrective actions for real-world events. During the event, problems with managing crowds prevented a large number of ticket holders from reaching their designated area to observe the inauguration ceremony. According to the Joint Congressional Committee on Inaugural Ceremonies, a complete examination will take place to provide a foundation of lessons learned for future inaugural planners, so that they can avoid similar problems in the future. FEMA officials agreed that the CAP system could be used to track corrective actions from real-world events, and in February 2009 they indicated that developing procedures to do so would aid their ability to conduct long-term trend analyses of real-world events as required by the Post-Katrina Act. FEMA has taken initial actions to collect information on state preparedness capabilities and develop a comprehensive assessment system for assessing capabilities at all levels of government, but faces methodological and coordination challenges in completing the system. Assessing and reporting on national preparedness is a long-standing and complex effort that presents methodological, integration, and coordination challenges. Effectively addressing these challenges requires that FEMA take a measured and planned approach; however, FEMA’s project management plan does not fully identify the numerous program elements and how and when they will be developed and integrated. The Post-Katrina Act requires that FEMA establish a comprehensive assessment system to assess the nation’s capabilities and overall preparedness for preventing, responding to, and recovering from natural and man-made disasters.
The act also requires that FEMA collect information on state capability levels and report on federal preparedness to Congress, including, among other things, the results of the comprehensive assessment system. In response to these requirements, FEMA established guidance for reporting on state preparedness and created the Office of Preparedness Policy, Planning and Analysis (PPPA) to develop and implement a new assessment approach that considers past efforts and integrates ongoing assessment efforts. FEMA plans to integrate the state preparedness reports—along with a variety of existing assessment efforts and data sources—into the new comprehensive system it is establishing. In addition, it is considering the historical experiences and lessons learned from prior assessment efforts in developing the new system. FEMA has also made progress in collecting information for federal and state reporting. In January 2009, FEMA issued its first federal preparedness report. The Post-Katrina Act requires that FEMA, in coordination with the heads of appropriate federal agencies, submit a federal preparedness report to Congress beginning in October 2007 and annually thereafter, which is to include, among other things, the results of the comprehensive assessment. In addition, states, territories, and the District of Columbia completed and submitted their first state preparedness reports to FEMA in the spring of 2008—a total of 56 reports from the 56 jurisdictions receiving homeland security grant funding. FEMA officials said they prepared summaries of the 56 reports and provided the summaries to FEMA’s regional offices. In addition, in November 2008, FEMA issued guidance for the 2008 state preparedness reports, which grantees are to submit to FEMA by March 2009. FEMA faces methodological challenges with the four assessment systems it plans to use as the basis for the new system and has not determined how to overcome problems faced in historical assessments. 
The challenges FEMA faces reflect the lack of guidance from PPPA on how the assessment system will comprehensively inform and incorporate feedback from other elements of the National Preparedness System and information from a variety of other data sources. Finally, FEMA faces challenges in coordinating with state, local, and federal stakeholders in developing and implementing the system and reporting on its results. In December 2008, FEMA provided us with a project management plan outlining efforts to establish the comprehensive assessment system by May 2010 to “function as a central repository for national preparedness data.” The system “will integrate data from prior reports and legacy assessment systems.” To establish the system, FEMA plans to administer a Web-based survey to all states and territories in the summer of 2009 to assess capabilities using the 37 target capabilities. FEMA plans to use a Web-based system known as the National Incident Management System Compliance Assessment Support Tool (NIMSCAST) to administer the capability assessment survey. In addition, FEMA noted that NIMSCAST will serve as the technical foundation for the comprehensive assessment system, and that the system is used by all states and territories as well as by 18,000 local and tribal entities, which helps to mitigate challenges FEMA faces in coordinating with stakeholders in developing and implementing the comprehensive assessment system. However, FEMA faces methodological challenges with regard to (1) differences in available data, (2) variations in reporting structures across states, and (3) variations in the level of detail within data sources requiring subjective interpretation, as summarized in table 2 below. Additional information regarding these assessments is outlined in appendix III. In addition to the methodological challenges in its current approach to assessing capabilities, previous efforts at assessing capabilities experienced challenges and have been discontinued.
The National Preparedness System was discontinued by DHS officials because it was time-consuming and did not produce meaningful data, according to FEMA officials. The system was pilot tested in 10 states, and, according to budget documentation, FEMA spent nearly $15 million in total for 2006, 2007, and 2008 on the system before it was discontinued. The Pilot Capability Assessment was labor-intensive and did not generate meaningful data, according to FEMA officials. This assessment, piloted in six states, was intended to measure jurisdictions’ progress in achieving needed target capabilities. Because it was only piloted, FEMA did not generate meaningful preparedness information from the data collected, according to officials. The Capability Assessment for Readiness, which was proposed as a one-time nationwide assessment of capabilities, lacked controls for validating the accuracy of self-reported assessment data. The assessment was conducted in 1997, but concerns regarding self-reporting and the lack of controls for validating information reported by states limited the reliability and, therefore, the value of the data, according to the DHS Inspector General. Additional information regarding these efforts is outlined in appendix III. FEMA has not established an approach for how information and data from different sources will be integrated into the comprehensive assessment system. FEMA officials have established a charter between the National Preparedness Directorate and Grant Programs Directorate to coordinate preparedness efforts related to the Cost to Capability Initiative and refinement of the target capabilities. FEMA has also begun sharing information between staff involved in developing the assessment system and staff involved in other elements of the National Preparedness System.
For example, staff from the National Exercise Program said they shared information on their exercise efforts with staff developing the comprehensive assessment system and FEMA officials said they established a working group of officials from other federal agencies to communicate efforts to develop the comprehensive assessment system. In March 2009, FEMA officials acknowledged that they had not finalized a charter for this working group to outline the specific actions that the working group will undertake to develop the comprehensive assessment system. FEMA explained that this working group will (1) identify existing sources of data related to preparedness plans, organization, equipment, training, and exercises; (2) vet the relevancy of each data source for assessments; (3) identify data gaps and redundancies; and (4) develop recommendations for streamlining data collection and reporting. However, FEMA has not established an approach for integrating information and data from other stakeholders, including grantees and other FEMA divisions such as the Disaster Operations Directorate. In October 2008, FEMA officials said they also plan to consider or incorporate into the new system a multitude of other data and analysis sources within and outside of DHS, such as FEMA’s Biannual Strategy Implementation Reports; Homeland Security Grant Program Investment Justification; and Tactical Interoperable Communications Plan Scorecards. In addition, FEMA plans to use the CAP system and LLIS to inform the comprehensive assessment system. In February 2009, FEMA officials further explained that they plan to rely on three indicators of preparedness to develop the comprehensive assessment system: (1) state and federal preparedness reports, which are required to use target capabilities; (2) the results of exercise corrective action findings; and (3) operational plans outlining specific operational requirements for all levels of government.
However, FEMA has not established an approach for how the three indicators of preparedness will be collected and developed into reporting mechanisms that meet Post-Katrina Act requirements for the comprehensive assessment system. In its first federal preparedness report, FEMA acknowledged that its efforts to evaluate and improve preparedness are the least mature elements of the national preparedness system because these efforts are composed of a wide range of systems and approaches with varying levels of integration. Given the relative immaturity of FEMA’s evaluation and improvement efforts, without an approach for integrating its comprehensive assessment system efforts, FEMA faces increased risks that inconsistencies may occur or that data and information are not shared, limiting FEMA’s ability to fulfill the requirements of the Post-Katrina Act for developing the system. In addition to methodological and coordination challenges in developing and completing the comprehensive assessment system, FEMA faces coordination challenges in establishing quantifiable metrics for target capabilities outlined in the Target Capabilities List. Establishing quantifiable metrics for target capabilities is a prerequisite to developing assessment data that can be compared across all levels of government. At the time of our review, FEMA was in the process of refining the target capabilities to make them more measurable and to provide local and state jurisdictions with additional guidance on the levels of capability they need. FEMA plans to develop quantifiable metrics—or performance objectives—for each of the 37 target capabilities that are to outline specific capability “targets” that jurisdictions of varying size, including but not limited to cities, should strive to meet. FEMA plans to complete quantifiable metrics for all 37 target capabilities by the end of 2010.
As of February 2009, FEMA noted that 6 of the 37 capabilities were undergoing stakeholder review and that FEMA planned to develop quantifiable metrics for a total of 12 capabilities during the 2009 calendar year. However, as of March 2009, FEMA had not developed milestones for completing quantifiable metrics for the remaining 25 target capabilities. Cognizant of the fact that there is not a “one size fits all” approach to preparedness, FEMA also plans to develop performance classes for each target capability in order to account for differences in levels of preparedness across jurisdictions of varying size and risk. FEMA plans to incorporate the new performance objectives and performance classes into the comprehensive assessment system, the federal preparedness report, and the guidance for the state preparedness reports, but has not established a time frame for doing this. FEMA recognizes the need to coordinate with federal, state, and local stakeholders and ensure that their views are effectively integrated in the development of the metrics, but FEMA historically has faced challenges in coordinating with stakeholders. For example, as we reported in June 2008, FEMA’s efforts to coordinate with stakeholders in developing the NRF were inconsistent and needed to be improved. In addition, such coordination can be time-consuming. For example, in July 2005 we reported on DHS’s prior effort to develop a tiered system of metrics based on population density and critical infrastructure in order to (1) assign jurisdictions responsibility for developing and maintaining target capability levels and (2) use the metrics to implement a national “balanced investment program” (with the purpose of directing federal preparedness assistance to the highest priority capability gaps) for national preparedness capabilities. DHS scheduled this system to be developed and completed by October 2008.
At the time of our review, these efforts to develop quantifiable metrics for target capabilities were not complete, illustrating the fact that developing metrics in coordination with a variety of stakeholders can take longer than anticipated. In developing the quantifiable capability metrics, FEMA officials told us that they plan to conduct extensive coordination with stakeholders that will entail conducting stakeholder workshops in all 10 FEMA regions and coordinating with all federal agencies with lead and supporting responsibility for Emergency Support Function (ESF) activities associated with each of the 37 target capabilities. Officials said they also plan on briefing the National Advisory Council and the National Council on Disability, and soliciting public comment on the draft quantifiable metrics for each target capability. One of FEMA’s coordination efforts—working with nonfederal stakeholders and federal agencies responsible for ESF activities—illustrates the large number of stakeholders with whom FEMA plans to coordinate in developing quantifiable metrics for the target capabilities. FEMA also plans to post each revised capability to the Federal Register for comment. With respect to federal agency coordination, FEMA plans to coordinate with each federal agency that has lead and supporting responsibility for ESF activities associated with each of the 37 target capabilities in developing quantifiable capability metrics. For example, for the medical surge capability, the Department of Health and Human Services (HHS) is the primary federal agency responsible for coordinating necessary medical surge capabilities needed during a disaster to provide triage and medical care services. In addition to HHS, 15 other federal agencies and the American Red Cross are designated as supporting agencies and organizations for medical surge capabilities.
Coordinating with other federal agencies responsible for ESF activities in developing quantifiable capability metrics would likely entail time, effort, and unforeseen risks. For example, in September 2008 we reported on the risks that FEMA faces in coordinating with external stakeholders, namely the American Red Cross, in collecting and integrating preparedness information necessary to develop the comprehensive assessment system. In that report, we recommended that FEMA take steps to better incorporate information from voluntary organizations related to sheltering and feeding capabilities, which are elements of the mass care target capability, and noted that a comprehensive assessment of the nation’s capabilities should account as fully as possible for the voluntary organizations’ capabilities in mass care. FEMA disagreed with the recommendation, noting that it cannot control the resources of nonprofit and private organizations. In response, we (1) noted that taking steps to assess capabilities more fully does not require controlling these resources, but rather cooperatively obtaining and sharing information and (2) reiterated that such efforts are important for assessing the nation’s prevention capabilities and overall preparedness. FEMA’s efforts to collect information needed to draft and issue the first federal preparedness report, required by the Post-Katrina Act, also reflect the coordination challenges FEMA faces in implementing the comprehensive assessment system. The Post-Katrina Act requires that FEMA, in coordination with the heads of appropriate federal agencies, submit a federal preparedness report to Congress beginning in October 2007 and annually thereafter, which is to include, among other things, the results of the comprehensive assessment. FEMA issued the first federal preparedness report in January 2009. 
In response to our comment that the draft report had been under review for 8 months between March 2008 and November 2008, FEMA noted that after completing a review of the report by FEMA and DHS, the report was submitted to the Office of Management and Budget to disseminate to all federal departments and agencies for review and comment. Officials explained that in developing the report, they faced challenges in obtaining information and data from federal agencies because of bureaucratic obstacles for collecting information and also faced challenges in analyzing information from multiple sources. FEMA officials said they may develop a National Preparedness Report to combine two Post-Katrina Act reporting requirements—the requirement for an annual federal preparedness report and an annual catastrophic resources report—and to include information from state preparedness reports as part of this consolidated report, which they tentatively plan to issue in the spring of 2009. Despite the methodological and coordination challenges associated with developing a new comprehensive assessment system and establishing related quantifiable metrics for target capabilities, FEMA has not developed an approach that addresses program risks as part of its project management plan for how it will develop the comprehensive assessment system. While FEMA has developed a project management plan for completing the comprehensive assessment system by 2010, the lack of (1) milestones for establishing quantifiable metrics for all 37 target capabilities and (2) specific actions for how FEMA will integrate preparedness information to develop the system, coupled with (3) the lack of risk assessment information for the system, raises questions about FEMA’s ability to establish the system in accordance with its anticipated 2010 completion date.
FEMA has described several steps for completing an effective comprehensive assessment system that include developing methodologies to translate the information from the assessments FEMA has identified into a target capabilities-based framework and integrating necessary preparedness information from disparate and not necessarily comparable sources such as state preparedness reports. FEMA also plans to coordinate with relevant stakeholders to refine the target capabilities. However, FEMA’s plan does not outline specific actions it plans to take to do so. Certain factors, such as challenges in coordinating with stakeholders or difficulties in obtaining necessary data, could affect FEMA’s ability to implement the comprehensive assessment system. Best practices for project management established by the Project Management Institute state that managing a project involves project risk management, which serves to increase the probability and impact of positive events, and decrease the probability and impact of events adverse to the project. Project risk management entails determining which risks might affect a project, prioritizing risks for further analysis by assessing their probability of occurrence, and developing actions to reduce threats to the project. Other practices include (1) establishing clear and achievable objectives; (2) balancing the competing demands for quality, scope, time, and cost; (3) adapting the specifications, plans, and approach to the different concerns and expectations of the various stakeholders involved in the project; and (4) developing milestone dates to identify points throughout the project to reassess efforts underway to determine whether project changes are necessary. FEMA has demonstrated its awareness of the value of these practices. 
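The probability-and-impact prioritization that the Project Management Institute describes can be sketched in a few lines. The risks named below, the 1-to-5 scales, and the escalation threshold are invented examples for illustration, not FEMA's actual risk register:

```python
# Minimal probability-impact risk scoring sketch in the PMI style.
# The risks, 1-5 scales, and the threshold of 12 are hypothetical values.

risks = [
    {"name": "stakeholder coordination delays", "probability": 4, "impact": 4},
    {"name": "incomplete data from agencies", "probability": 3, "impact": 5},
    {"name": "platform migration slippage", "probability": 2, "impact": 3},
]

for r in risks:
    r["score"] = r["probability"] * r["impact"]  # simple P x I exposure score

# Address the highest-exposure risks first; escalate anything at or above the threshold.
prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
escalate = [r["name"] for r in prioritized if r["score"] >= 12]

print([(r["name"], r["score"]) for r in prioritized])
```

A ranking of this kind is one way an agency could decide which threats to a project, such as coordination delays, warrant mitigation actions first.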
For example, in planning another project—an effort to transition the FEMA Secure Portal to an alternate information technology platform known as the National Exercise Division Exercise Support System—FEMA identified key elements such as phases, milestones, and risks that could affect the project goals. Furthermore, a risk assessment could help FEMA define the specific actions to take to complete the comprehensive assessment system, anticipate potential delays in completing its efforts to refine the target capabilities by 2010, and deal with the associated risks in its efforts to do so, such as the time it takes to coordinate with stakeholders. Information from a risk assessment could also enhance FEMA’s ability to coordinate with federal agencies to obtain preparedness information needed to produce a timely annual federal preparedness report and catastrophic resources report. Until FEMA assesses ways to mitigate the risks associated with its capability assessment efforts, it will be difficult for FEMA to provide reasonable assurance that it can produce a comprehensive assessment system that (1) fulfills the requirements of the Post-Katrina Act and, in the long term, (2) informs decisions related to improving national preparedness. FEMA’s National Preparedness Directorate does not have a strategic plan for implementing the national preparedness system. The complexity and difficulty of implementing the national preparedness system, which we describe earlier in this report, underscore the importance of strategic planning. The six desirable characteristics of a national strategy can help the National Preparedness Directorate in developing a strategic plan. While FEMA has recognized that its components need to develop strategic plans that detail program goals, objectives, and strategies, FEMA’s National Preparedness Directorate (Preparedness Directorate) has not yet developed such a plan for the national preparedness system.
In January 2008, FEMA issued its agencywide strategic plan, which set a common direction for its components in carrying out their responsibilities in preparedness, response, and recovery programs. While a Preparedness Directorate official acknowledged that the Preparedness Directorate does not have a strategic plan (this is the responsibility of FEMA’s Office of Preparedness Policy, Planning and Analysis), the official said the Post-Katrina Act provides a roadmap that contains the preparedness strategy and FEMA uses an annual operating plan (in draft form at the time of our review) to guide the directorate’s approach for implementing the national preparedness system. Although the Post-Katrina Act and the directorate’s draft annual operating plan outline certain elements of a strategy, such as the directorate’s vision, mission, and goals, they do not include several other desirable characteristics of a strategic plan, such as a discussion of how the directorate will (1) measure its progress in developing the national preparedness system, (2) address risk as it relates to preparedness activities, (3) coordinate with its preparedness stakeholders in developing and carrying out the various elements of the national preparedness system, and (4) integrate the elements of the national preparedness system. For example, FEMA has not included information on performance measures for meeting one of the objectives outlined in the operating plan—to support an integrated planning system for the federal preparedness-related agencies that links to regional, state, and local planning activities. The operating plan also does not define the problem or assess the risks that FEMA’s national preparedness program faces. Specifically, it does not describe the threats, vulnerabilities, and consequences of a major homeland security incident or what FEMA’s approach will be for addressing risk through its national preparedness system activities. 
While the draft operating plan identifies subcomponents in the Preparedness Directorate that will be responsible for carrying out segments of the national preparedness system, it does not discuss the roles and responsibilities of preparedness stakeholders, the coordination that will occur between them, or how the four elements of the national preparedness system will be integrated. The Post-Katrina Act calls on FEMA to develop and coordinate the implementation of a strategy for preparedness, and the complexity and difficulty of developing a national preparedness system underscore the importance of strategic planning. An important element of strategic planning is that it presents an integrated system of high-level decisions that are reached through a formal, visible process. The resulting strategy is thus an effective tool with which to communicate the mission and direction to preparedness stakeholders. The conditions we describe earlier in this report—such as incomplete plans on roles and responsibilities, unresolved corrective actions from exercises, and potential difficulties and historical delays in capability assessments—show that FEMA faces significant challenges in developing the key elements of the national preparedness system. In 2004, we identified six desirable characteristics of an effective national strategy that help achieve strategy success. These characteristics are summarized in table 3. We believe these characteristics can assist responsible parties, such as FEMA, in further developing and implementing national strategies as well as enhancing these strategies’ usefulness for policy decisions to help achieve program results and accountability. The characteristics are a starting point for developing a strategic plan. However, we believe that an approach incorporating the substance of these characteristics is likely to increase success in strategy implementation. 
The following describes how each of these characteristics applies to the work of FEMA’s National Preparedness Directorate. Purpose, scope, and methodology: National preparedness is an important part of homeland security efforts outlined in legislation, presidential directives, and policy documents. The National Strategy for Homeland Security recognized the importance of fostering a Culture of Preparedness that permeates all levels of our society, including all preparedness stakeholders. In summarizing lessons learned from Hurricane Katrina, the White House report made over 100 recommendations and concluded that an immediate priority for correcting the shortfalls in the federal response to Hurricane Katrina was to define and implement a comprehensive national preparedness system. We believe a strategic plan for implementing the national preparedness system that includes a clearly stated purpose, scope, and methodology could help the Preparedness Directorate convey to preparedness stakeholders the importance of integrating the multiple elements of the national preparedness system and interagency coordination. Problem definition and risk assessment: As shown by 9/11 and Hurricane Katrina, the nation faces risks from terrorist attacks and man-made and natural disasters. These threats may vary between localities and regions, but responders are to be able to effectively work together in a common language on operational tasks when required. According to the National Preparedness Guidelines, responders must identify and assess risk to ensure the necessary capabilities are available for selecting the appropriate response. In addition, understanding risk involves assessing what vulnerabilities and weaknesses require further attention. The lessons learned from Hurricane Katrina show that federal agencies were not prepared for a catastrophic disaster.
Confusion by emergency responders over their roles and responsibilities was widespread and resulted in a slow or fragmented response. To improve response and recovery to all hazards including a catastrophic disaster, the Post-Katrina Act called on FEMA to develop the national preparedness system. Goals, subordinate objectives, activities, and performance measures: The cultural shift of the preparedness community from a response and recovery strategy to a proactive preparedness strategy emphasizes the importance of a strategic plan that includes clear goals, objectives, activities, and measures. Identifying performance measures for the various components of the national preparedness system, which is also a requirement under the Post-Katrina Act, will help policymakers determine what progress has been made and what remains to be done, especially as it relates to preparedness for a catastrophic disaster. A strategic plan that outlines an overarching goal, subordinate objectives, activities, and performance measures for the various components of the national preparedness system would help FEMA prioritize future efforts and allow decision makers to measure progress. Resources, investments, and risk management: Preparedness agencies are to manage their likely risks and direct finite resources to the most urgent needs. The national preparedness system helps inform decision makers in federal and state agencies on their use of resources relative to their level of capabilities achievement. Different states and areas face different risks, and thus should have different capabilities to mitigate those risks. As we reported in March 2008, although DHS has taken some steps to establish goals, gather information, and measure progress, its monitoring of homeland security grant expenditures does not provide a means to measure the achievement of desired program outcomes. 
FEMA’s current efforts do not provide information on the effectiveness of those funds in improving the nation’s capabilities or reducing risk. The National Strategy for Homeland Security describes how resources and risk management must be addressed in a comprehensive approach. For example, the strategy states that “We must apply a risk-based framework across all homeland security efforts in order to identify and assess potential hazards, determine what levels of relative risk are acceptable, and prioritize and allocate resources among all homeland security partners, both public and private, to prevent, protect against, and respond to and recover from all manner of incidents.” A strategic plan that outlines resources, investments, and risk management would help FEMA coordinate a prioritized approach.

Organizational roles, responsibilities, and coordination: Achieving national preparedness, especially for catastrophic incidents, requires sharing responsibility horizontally with other federal departments and agencies. It also requires a robust vertical integration of the federal, state, local, and tribal governments, as well as private entities. FEMA’s Preparedness Directorate faces the challenge of aligning operations of the nation’s preparedness stakeholders to coordinate activities and plans to implement a national preparedness program capable of dealing with catastrophic incidents. A key part of a national preparedness strategic plan would be the clear delineation of organizations and their roles and responsibilities, as well as processes to coordinate their responsibilities.

Integration and implementation: A national preparedness strategic plan would help describe how preparedness agencies at all government levels and sectors will integrate their various standards, policies, and procedures into the national preparedness system.
Plans describing how to integrate and implement the various elements of the national preparedness system would help FEMA inform emergency managers, first responders, and decision makers on how the individual elements of the national preparedness system will improve capabilities, training, and plans for all hazards, including catastrophic disasters. A strategic plan to implement the national preparedness system would enable FEMA’s Preparedness Directorate to improve its likelihood of achieving its vision, evaluating progress, and ensuring accountability of federal agencies and other organizations in aligning their efforts to develop and improve the national preparedness system. While it may be impossible to have absolute compatibility because of the many public and private organizations involved, the danger in organizations using different methods or systems, without some overall guidance to assure consistent application of approaches, is that the elements of the national preparedness system will have little ability to inform one another. More importantly, these systems may produce unreliable or incomplete data on how to improve programs related to response and recovery. FEMA plays a crucial role in this regard through its statutory responsibility of carrying out the Post-Katrina Act requirements for the national preparedness system. The nation looks to FEMA for leadership to ensure that stakeholders involved in preparedness activities can effectively provide a coordinated response to man-made or natural disasters. The nation’s experiences after the events of 9/11 and the 2005 hurricane season dramatically demonstrated both the extent of our emergency preparedness capabilities and where they were lacking. The Post-Katrina Act’s centralization of responsibility in FEMA for exercises and the other primary activities that form the national preparedness system provides an unprecedented opportunity for comprehensive integration and coordination.
While FEMA has made progress in implementing each of these interdependent and essential preparedness activities, it is difficult to measure this progress. FEMA lacks a comprehensive approach to managing the development of policies and plans and overseeing the National Exercise Program. Additionally, FEMA has not established a clearly defined course of action to assess capabilities based on quantifiable metrics. Finally, FEMA has not established a strategic plan for integrating these elements of the national preparedness system. These conditions show that much remains to be done. In the short term, progress is heavily dependent on continuing to improve basic policies and procedures, management tools, and project plans for key elements of the system. Developing each element of the system is undoubtedly a complex task, but progress has to be built on these incremental but critical steps. In the long term, progress will be increasingly dependent on how well FEMA coordinates with the thousands of stakeholders in the system and the degree to which it can integrate the plans, exercises, and assessments into a cohesive approach that improves national preparedness. This need centers attention on leadership and guidance from the Preparedness Directorate, and success will depend on linking the various elements of the system and showing how data and information from the system will inform program and budget decisions related to improving preparedness. A complete, integrated set of national preparedness policies and plans that defines stakeholders’ roles and responsibilities at all levels is needed to ensure that federal, state, and local resources are invested in the most effective exercises.
As one program official told us, “If an exercise is testing an inadequate plan, then the exercise is just an experiment….” Until all national preparedness policies are developed and operational plans are created or revised to reflect changes in the roles and responsibilities of key stakeholders, FEMA’s ability to update requisite training to prepare officials responsible for fulfilling these roles and reflect the preparedness lessons of the unprecedented disasters of the last decade in exercises and real-world response will be limited. Without a program management plan, FEMA cannot effectively ensure, in coordination with DHS and other federal entities, that it will complete and integrate key policies and plans with each other and the national preparedness system as envisioned by law and presidential directive. In implementing the latest iteration of the National Exercise Program, FEMA has issued guidance and requirements for exercise design, execution, evaluation, and corrective action resolution. However, federal and state exercise officials have not yet fully embraced the essential program components. In addition, opportunities to make exercises as realistic as possible by coordinating more fully with all preparedness stakeholders, including the National Council on Disability, and translating the experiences of real-world incidents into corrective actions could further enhance the value of the exercise program. Establishing policies and procedures that detail how FEMA would work with federal entities as well as monitor states’ compliance in implementing the program would help stakeholders meet program requirements and FEMA develop complete and accurate information on program implementation. 
More importantly, these key program controls, once more systematically established and applied, will enhance FEMA’s ability to assess the extent to which corrective actions have been implemented and, ultimately, describe strengths and weaknesses in the nation’s preparedness capabilities. Because a comprehensive assessment of national preparedness capabilities is a monumental task, it is understandable that FEMA’s efforts to develop and implement an assessment approach have been underway for more than a decade. Program officials with responsibility for this most recent effort have recognized the need for a comprehensive set of metrics to identify needed capabilities in equipment, personnel, skills, or processes and to prioritize national investments in preparedness. Given the complexity of this effort, they would benefit from a clear roadmap that details their analytical approach for integrating disparate information sources, identifies associated program risks, establishes more specific milestones to help avoid unexpected setbacks, and provides a basis for assessing program progress and making revisions, if needed, to the agency’s implementation plans. Such mitigation efforts would not eliminate the risks associated with the development of the comprehensive assessment system and target capabilities metrics, but they would provide a basis for holding officials responsible for timely and quality results and hold them harmless for unavoidable or unforeseen events that could delay their efforts. Finally, effective coordination, integration, and implementation of these elements of the national preparedness system require the combined contributions of a broad range of federal, state, and local stakeholders. FEMA has started this integration effort and has had some success in issuing guidelines and requirements that seek greater uniformity of effort. But our work shows that issuing guidelines alone does not assure consistent application across organizations. 
While it may be impossible to have absolute compatibility because of the many public and private organizations involved in preparedness, the danger in organizations using different methods or systems—without some overall guidance, direction, and controls in place to assure consistent application of preparedness approaches—is that the elements of the preparedness system will have little ability to inform one another. Perhaps, more important, these systems may produce unreliable or incomplete data on how to allocate resources or to improve programs related to response and recovery, especially with respect to catastrophic incidents. In this regard, FEMA officials have noted that their authority is limited to coordinating with, but not directing, other federal agencies. This condition highlights the importance of developing a strategic approach that leads to partnerships with stakeholders whose cooperation is necessary for developing the preparedness system. Making progress with regard to this challenge is a necessary step to assessing our nation’s capabilities and dealing with gaps in preparedness. The scope and breadth of this critical national effort suggests that an explicit description and elaboration of the elements of the system and the level of effort associated with its effective application could enhance stakeholder acceptance and participation. In addition, defining the end state of the preparedness system will help translate requirements from presidential directives and the Post-Katrina Act into measurable steps for achieving an integrated national system. Developing goals and metrics to measure progress towards achieving an integrated system will help FEMA prioritize actions, requirements, and national investments in preparedness. 
A strategic plan for the National Preparedness Directorate that describes how it will approach these challenges and mitigate these weaknesses would help FEMA partner with the many organizations whose cooperation and resources are necessary for success. To ensure that key elements of the national preparedness system are developed in a timely and integrated fashion, we recommend that the Administrator of the Federal Emergency Management Agency take the following 11 actions:

Direct the Disaster Operations Directorate and the National Preparedness Directorate to improve their approach to developing policies and plans that define roles and responsibilities and planning processes.

Develop a program management plan, in coordination with DHS and other federal entities, to ensure the completion of the key national preparedness policies and plans called for in legislation, presidential directives, and existing policy and doctrine; to define roles and responsibilities and planning processes; and to fully integrate such policies and plans into other elements of the national preparedness system. The program management plan, among other things, should:

identify the specific schedule of activities that need to be performed to complete, and identify dependencies among, all policy and planning development and integration activities;

identify the type and quantities of resources required to perform, and the schedule for completing, all policy and planning development and integration activities;

analyze activity sequences, durations (including the time required to partner and coordinate on an interagency basis with other federal entities), resource requirements, and schedule constraints to create and update the individual policy and plan development project schedules; and

control for changes to the project schedules precipitated by outside forces.
When outstanding policies and plans are completed, integrate them into training and exercise efforts to ensure that roles and responsibilities are fully communicated and fully understood by emergency response stakeholders.

Direct the National Exercise Division to improve its implementation of statutory and program requirements.

Coordinate with the Department of Homeland Security to develop policies and procedures for issuing after-action reports for National Level Exercises (i.e., TOPOFF) in 6 months or less, as required by the implementation plan for the National Exercise Program.

Collaborate with the Homeland Security Council to establish policies and procedures for documenting corrective actions from Principal Level Exercises that are consistent with HSEEP guidance and the implementation plan for the National Exercise Program.

Collaborate with the Homeland Security Council to provide FEMA with the information it needs from past Principal Level Exercises to enable it to conduct remedial action tracking and long-term trend analysis, as required by the Post-Katrina Act.

Ensure compliance with HSEEP requirements by states that receive grant funds by revising FEMA’s grant monitoring guidance, for example by including a checklist of specific HSEEP requirements for state validation and certification.

Involve the National Council on Disability on committees involved in the design and execution of national level exercises, especially on issues related to populations with special needs.

Develop internal control policies and procedures that validate the completeness and accuracy of data used to measure program performance. Such procedures could involve checking whether states and federal agencies are providing the data and information needed to measure the performance of the program.
Revise the National Exercise Program Implementation Plan to require the use of FEMA’s Corrective Action Program for all federal exercises that involve interagency testing of roles and responsibilities, or require that federal agencies submit a report to FEMA on the status of their corrective actions resulting from such exercises.

Develop procedures for including “lessons learned” from real-world incidents in the Corrective Action Program system.

Direct the Office of Preparedness Policy, Planning, and Analysis to improve its approach for developing a comprehensive assessment system.

Enhance its project management plan to include milestone dates, an assessment of risk, and related mitigation strategies for (1) comprehensively collecting and reporting on disparate information sources, (2) developing quantifiable metrics for target capabilities that are to be used to collect and report preparedness information, and (3) reporting on the results of preparedness assessments to help inform homeland security resource allocation decisions.

Direct the National Preparedness Directorate to take a more strategic approach to developing the national preparedness system.

Develop a strategic plan for implementing the national preparedness system that includes the key characteristics of a strategic plan, including coordination, integration, and implementation approaches.

We provided a draft of this report to DHS for comment. In commenting on our draft report, DHS noted that while it may not agree with all the assertions in the report, it generally agreed with our recommendations. DHS stated that GAO’s recommendations provide a useful methodology and sound counsel for revision of FEMA’s current portfolio of national preparedness policy, plans, protocols, and procedures. Specifically, DHS stated that FEMA has already made significant inroads in each aspect of the GAO-recommended characteristics for sound strategic planning.
DHS also expressed concern that the report suggests that DHS/FEMA should hold other federal agencies and departments or state, local, or tribal governments accountable for compliance with program requirements, while also recognizing that FEMA did not generally have the explicit authority to compel compliance. The Post-Katrina Act designates FEMA as the federal leader and coordinator for developing and implementing the national preparedness system. We recognize that FEMA’s authority is generally to coordinate, guide, and support, rather than direct, and that collaboration is an essential element of FEMA’s efforts. At the same time, we believe that FEMA’s expanded leadership role under the Post-Katrina Act provides FEMA opportunities for and a responsibility to further develop its relationships with national preparedness stakeholders at the local, state, and federal levels and to instill a shared sense of responsibility and accountability on the part of all stakeholders for the successful development and implementation of the national preparedness system. Several of our recommendations aim to enhance such collaboration and cooperation. DHS also provided technical comments, which we incorporated into the report as appropriate. Appendix V contains written comments from DHS. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time we will send copies of this report to the Secretary of Homeland Security, the Director of the Office of Management and Budget, and interested congressional committees. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8757 or e-mail at jenkinswo@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.

This review examined key elements of the national preparedness system, including the National Exercise Program. Specifically, our reporting objectives were to review the extent to which the Federal Emergency Management Agency (FEMA) has:

1. developed policies and plans that define roles and responsibilities and planning processes for national preparedness;

2. taken actions since 2007 to implement the National Exercise Program and track corrective actions at the federal and state levels, and what challenges remain;

3. made progress in conducting a nationwide capabilities-based assessment, including developing required preparedness reports, and what issues, if any, it faces in completing the system; and

4. developed a strategic plan for implementing the national preparedness system.

To address these objectives, we analyzed information and data on FEMA’s policies and plans for preparedness, the National Exercise Program, its approach for developing a comprehensive system for assessing nationwide capabilities, and its strategy for integrating elements of the preparedness system. We explored the option of selecting exercises to review, but there is no national database that captures all exercises conducted using Homeland Security Grant Program funds. We therefore selected six states—California, Georgia, Illinois, New York, Texas, and Washington—that provide examples of how exercises are planned and conducted, and we visited each of these six states. While we cannot generalize our work from these visits to all states, we chose these locations to provide examples of the way in which states carry out their exercise and preparedness programs.
In selecting these states, we considered factors such as states’ participation in national-level exercises; states located in different geographic locations, such as those in hurricane-prone regions; and states with varying percentages of homeland security grant funding planned to support exercises. At each location, we interviewed staff in FEMA’s regional offices responsible for regional preparedness activities. We interviewed state and local officials on their progress and challenges in carrying out preparedness activities, including exercises and assessments of capability. We analyzed key legislation such as the Post-Katrina Act and the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Act) as well as presidential directives related to preparedness efforts. We also interviewed FEMA officials responsible for preparedness programs to learn more about the actions they had taken and planned to take related to preparedness efforts and compared FEMA’s policies and procedures with criteria in GAO’s standards for internal control in the federal government. To analyze the extent to which policies and plans have been developed to define roles and responsibilities and planning processes for national preparedness, we analyzed key legislation, presidential directives, and DHS- and FEMA-issued policies that identify required preparedness policies and plans to define roles and responsibilities for emergency response as well as establish guidance for planning processes for developing emergency response plans. We identified the resulting policies that define roles and responsibilities and that form the basis of the national preparedness system, including the National Strategy for Homeland Security, the National Response Framework (NRF), the National Preparedness Guidelines, and the National Incident Management System. In addition, we identified related policies that supplement these documents, such as guidance for Joint Field Office operations. 
We also identified policies that define planning processes for developing emergency plans, such as the draft Integrated Planning System (IPS) and FEMA-issued interim and final Comprehensive Preparedness Guides for nonfederal planning efforts. We identified plans developed using planning processes that further define and operationalize roles and responsibilities identified in existing policies. These plans include the incident annexes and the incident annex supplements for the NRF, FEMA’s Pre-Scripted Mission Assignment Catalog, plans being developed as part of FEMA’s Catastrophic Disaster Planning Initiative, as well as plans called for by HSPD 8 Annex 1 that are to be developed using IPS. To identify lessons learned and corrective actions related to roles and responsibilities from federal emergency response exercises, we summarized lessons learned from after-action reports for Tier I and II (or equivalent) exercises from 2005 through 2008. The exercises that comprised this data set were identified by FEMA officials as well as counsel to the White House Homeland Security Council. This analysis was conducted to determine whether the exercises revealed unclear or conflicting roles and responsibilities between federal departments and agencies and if additional policies and plans were needed. We also interviewed officials from DHS’s Office of Operations Coordination and FEMA’s National Preparedness Directorate and Disaster Operations Directorate to obtain information on the status of efforts to develop and issue required preparedness policies and plans, including any existing program or project management plans and related issuance schedules. We compared policies and plans that have been published in a final form, versus released in interim or draft formats or that have not yet been developed, to determine the issuance status (completed, partially completed, or incomplete) of these policies and plans. 
To identify best practices for program management, such as steps for how a program is to be executed, monitored, and controlled, we reviewed the Project Management Institute’s The Standard for Program Management. Finally, we reviewed our prior reports on FEMA’s preparedness programs and planning efforts, as well as prior DHS, White House, and congressional reports on the lessons learned from the response to Hurricane Katrina in 2005. To assess the extent to which FEMA has taken actions since 2007 to implement a National Exercise Program and track corrective actions, we observed portions of two National Exercise Program exercises (TOPOFF 4 and the May 2008 Tier II exercise). Specifically, during TOPOFF 4, we discussed exercise implementation with federal, state, and local officials and observed FEMA’s exercise management efforts at the TOPOFF 4 Master Control Cell in Springfield, Virginia; the TOPOFF 4 Long-Term Recovery Tabletop Exercise in Washington, D.C.; and exercise implementation in Portland, Oregon. During the May 2008 Tier II exercise, we observed exercise implementation in Mount Weather and Suffolk, Virginia, and in Blaine, Washington, and discussed the exercise with participating federal, state, and local officials. We evaluated key program documents, such as the implementation plan for the National Exercise Program and the Homeland Security Exercise and Evaluation Program (HSEEP)—FEMA’s guidance for carrying out exercises in accordance with the Post-Katrina Act and the 9/11 Act—and data on the program’s performance measures. We reviewed actions taken by FEMA since 2007 because the National Exercise Program Charter was established in January 2007 and the implementation plan was issued in April 2007.
We examined after-action reports for Principal Level Exercises (exercises that involve senior federal officials, such as Deputy Secretaries of departments or agencies) that were issued from April 2007 through August 2008 to determine whether the Homeland Security Council developed after-action reports. We also interviewed Homeland Security Council staff and the Associate Counsel to the President on the role and responsibility of the council for systematically tracking and implementing corrective actions resulting from Principal Level Exercises, because these staff were responsible for summarizing corrective actions for Principal Level Exercises. We also reviewed after-action reports that were provided to us by the six states we visited for exercises conducted from June 2007 through September 2008 that used Homeland Security Grant Program funds, in order to determine how well these states were complying with HSEEP and grant guidance. To determine if FEMA is conducting monitoring and oversight of Homeland Security Grant Program recipients, we reviewed grant monitoring reports for the six states we visited. For information that would provide a broader perspective on FEMA’s efforts, we examined several FEMA databases, including the FEMA Secure Portal—the FEMA repository of after-action reports; the National Exercise Schedule (NEXS) system—a scheduling system for all exercises; and the Corrective Action Program (CAP) system—which is designed for tracking capability-based improvement plans entered by federal, state, and local exercise participants. We assessed the reliability of the FEMA Secure Portal and NEXS databases by checking the systems to determine if known exercises identified through after-action reports produced by states were included in these systems, and by interviewing FEMA and state officials responsible for the data.
In addition, we assessed the reliability of the CAP system by interviewing FEMA and Homeland Security Council officials responsible for the data. We concluded that the data in the FEMA Secure Portal, the NEXS system, and the CAP system were not reliable for use in this report because these databases lacked complete information related to after-action reports, scheduled exercises, and corrective actions, and FEMA does not have procedures in place to ensure that required data are collected consistently to populate these databases. To determine the extent to which FEMA has made progress in conducting a nationwide capabilities-based assessment and developing required preparedness reports, and any issues it faces in completing the system, we (1) analyzed FEMA’s plans and schedules for developing the comprehensive assessment system and performance objectives for measuring capabilities, including assessment efforts initiated by FEMA’s National Preparedness, Disaster Operations, and Grant Programs Directorates, and (2) interviewed FEMA staff responsible for these efforts. We also reviewed assessments previously conducted by DHS and FEMA to evaluate historical efforts to assess capabilities. To assess FEMA’s efforts to establish quantifiable metrics for target capabilities, we analyzed preliminary performance objectives for two target capabilities that FEMA had completed, and we interviewed headquarters staff responsible for these efforts. We also reviewed information pertinent to FEMA’s assessment approach, including the Federal Preparedness Report (issued by FEMA in January 2009) and State Preparedness Reports for the 2007 reporting year for the six states we visited (reports for 2008 were due to FEMA after this report was finalized for publication).
To assess the comparability of information contained in the six State Preparedness Reports, we selected one target capability—mass prophylaxis—outlined in the Target Capabilities List, and reviewed a performance measure associated with this capability for providing initial prophylaxis (an action taken to prevent a disease or a health problem) within 48 hours of a state/local decision to provide prophylaxis. To identify best practices for project management, such as steps for how a project is to be executed, monitored, and controlled, we reviewed the Project Management Institute’s A Guide to the Project Management Body of Knowledge (PMBOK) and compared FEMA’s efforts for developing and implementing the comprehensive assessment system to the best practices developed by the institute. Finally, we reviewed our prior GAO reports on FEMA’s preparedness programs and exercise efforts. To determine the extent to which FEMA’s National Preparedness Directorate has developed a strategic plan that implements the national preparedness system, we interviewed FEMA National Preparedness Directorate officials on strategic planning and policy and procedures for the National Preparedness System. We analyzed the Post-Katrina Act and key FEMA documents—including the agencywide Strategic Plan for Fiscal Years 2008 through 2013, the Grants Program Directorate Strategic Plan, and the draft annual National Preparedness Directorate Operating Plan—to determine strategy-related requirements. Additionally, we reviewed DHS and DHS Office of Inspector General reports on national preparedness. To determine the elements that comprise a strategic plan, we examined our prior reports on the desirable characteristics of effective national strategies and compared them with FEMA’s current approach for developing a National Preparedness System.
Finally, we compared the desirable characteristics to the work of the National Preparedness Directorate as described in the Homeland Security Council’s National Strategy for Homeland Security, DHS’s National Preparedness Guidelines, and the Post-Katrina Act. We conducted this performance audit from January 2008 through April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In September 2006, we reported that in preparing for, responding to, and recovering from any disaster, the legal authorities, roles and responsibilities, and lines of authority for preparation and response at all levels of government and for nongovernmental entities must be clearly defined, effectively communicated, and well understood in order to facilitate rapid and effective decision making. National preparedness policies and plans identify these legal authorities, roles and responsibilities, and lines of authority for response activities and serve to communicate this information to emergency response stakeholders, and, in conjunction with training, are the basis for ensuring that the information is well understood. Effective and efficient disaster management relies on the thorough integration of these emergency response policies and plans. Because emergency response activities entail large numbers of stakeholders who need to be able to respond to an incident in a coordinated and integrated manner, it is essential that roles and responsibilities are defined, communicated, and understood prior to a real-world incident response. An example of the range of stakeholders involved in such response activities is illustrated by figure 6.
The wide range of emergency response stakeholders depicted in figure 6, among others, is to be organized by the roles and responsibilities defined in policies and plans that are designed to facilitate an effective response to an incident, be it man-made (e.g., terrorism) or a natural disaster. Policies that broadly define roles and responsibilities are operationalized by the development of plans that provide greater levels of detail. These detailed plans are to be developed using the planning processes that are discussed and established in federal policies. The range and relative relationships of the policies that define roles and responsibilities and planning processes for developing emergency plans are illustrated in figure 7. Table 4, which follows figure 7, describes each of the policies presented in the graphic. Among the 50 policies that define roles and responsibilities or planning processes, 46 define roles and responsibilities and 4 define planning processes for developing emergency plans. Table 4 provides brief descriptions and the status of the policies depicted in figure 7 that define roles and responsibilities. Of the 46 policies presented in table 4, 40 have been completed and 6 are incomplete. One of these policies, HSPD 8 Annex 1, calls for: a standardized federal planning process (the Integrated Planning System); resourced operational and tactical planning capabilities at each federal department and agency with a role in homeland security; strategic guidance statements, strategic plans, concepts of operations, operations plans, and, as appropriate, tactical plans for each National Planning Scenario; and a system for integrating plans among all levels of government. It also calls for the development of the National Homeland Security Plan.
Policy and doctrine (Issued by DHS or FEMA): The NSHS (July 2002; revised October 2007) is to guide, organize, and unify homeland security efforts by providing a common framework for the prevention of terrorist attacks; protection of people, critical infrastructure, and key resources; and response to and recovery from man-made and natural disasters. It calls for homeland security management through a continuous, mutually reinforcing cycle of four activity phases: 1. overarching homeland security guidance grounded in clearly articulated and up-to-date homeland and relevant national security policies, with coordinated supporting strategies, doctrine, and planning guidance flowing from and fully synchronizing with these policies; 2. a deliberate and dynamic system that translates policies, strategies, doctrine, and planning guidance into a family of strategic, operational, and tactical plans; 3. the execution of operational and tactical-level plans; and 4. continual assessment and evaluation of both operations and exercises. The NHSP (release schedule undetermined), as called for under HSPD 8 Annex 1, is to be an overarching strategic plan to guide national efforts to execute the National Strategy for Homeland Security. It is intended to facilitate federal homeland security coordination and define roles and responsibilities for preventing, protecting against, responding to, and recovering from man-made and natural disasters. It was to be submitted to the President for approval within 120 days of the approval of HSPD 8 Annex 1 (December 2007); however, as of April 2009, the NHSP has yet to be published. NIMS presents a core set of doctrine, concepts, principles, procedures, organizational processes, terminology, and standard requirements designed to enable effective, efficient, and collaborative incident management.
It forms the basis for interoperability and compatibility to enable a diverse set of public and private organizations to conduct integrated emergency management and incident response operations (March 2004; revised December 2008). The NPG (interim NPG, March 2005; final NPG and 3 capabilities-based preparedness tools, September 2007) consists of a vision, capabilities, and priorities for national preparedness. It establishes three capabilities-based preparedness tools (Universal Task List, Target Capabilities List, and National Planning Scenarios) and the National Preparedness System cycle (plan; organize and staff; equip; train; and exercise, evaluate, and improve) to collate existing homeland security plans, strategies, and systems into an overarching framework. The revised and finalized NPG replaces the interim National Preparedness Goal released in March 2005. The NRF (January 2008) is a guide to how the nation conducts all-hazards response, generally describing national response doctrine and the roles and responsibilities of officials involved in response efforts, including, among others, the Secretary of Homeland Security, FEMA Administrator, Principal Federal Official, and Federal Coordinating Officer. It is designed to align key roles and responsibilities, linking all levels of government, nongovernmental organizations, and the private sector, as well as capture specific authorities and best practices for managing incidents that range from the serious but purely local, to large-scale terrorist attacks or catastrophic natural disasters. NRF ESF Annexes align categories of federal government response resources and capabilities and provide strategic objectives for their use under the NRF. They provide the structure for coordinating federal interagency support for a federal response to an incident and are mechanisms for grouping functions most frequently used to provide federal support to states, and federal-to-federal support.
Each ESF Annex, such as Search and Rescue, identifies the federal agency coordinator and the primary and support agencies pertinent for the ESF, and when activated, the initial actions delineated in the ESF Annexes guide response activities (January 2008). NRF Support Annexes (January 2008) describe the roles and responsibilities of federal departments and agencies, nongovernmental organizations, and the private sector in coordinating and executing the functional processes and administrative requirements necessary for incident management that are common to all incidents. They identify the federal agency coordinator and the primary and support agencies pertinent for the support activity, such as financial management or private-sector coordination, and when activated, the initial actions delineated in the Support Annexes guide response activities. NRF Partner Guides (release schedule undetermined) are to provide stakeholder-specific references describing key roles and actions for local, tribal, state, federal, private-sector, and nongovernmental response partners. They are to summarize core NRF concepts and be tailored specifically to leaders at different levels of government and from different types of organizations. As of April 2009, none of the four NRF Partner Guides have been published. The various Joint Field Office (JFO) guides were written to support the December 2004 National Response Plan for the establishment of JFOs. The JFO is a temporary federal multi-agency coordination center established locally to facilitate coordinated field-level domestic incident management activities, with the JFO IISOP, JFO A&A, and JFO FOG designed to assist personnel assigned to response operations. In particular, the JFO IISOP and JFO A&A (each completed April 2006) provide detailed guidance on JFO activations and operations, including defining the roles and responsibilities and concept of operations for the Principal Federal Official cell within the JFO.
The JFO FOG (completed June 2006) is intended to be used as a quick reference job aid for JFO personnel. According to FEMA officials, FEMA’s Disaster Operations Directorate is developing a JFO Organization & Functions Manual, reflecting the NRF, as an addition to existing JFO guidance. This new manual is to provide guidance for the establishment, operation, and demobilization, as well as the general organization and staffing, of JFOs (release schedule undetermined). The type column identifies the categorization of policy and plans. Policies are categorized under the term “policy & doctrine” (including legislation, presidential directives, and other policies that define roles and responsibilities, as well as policies that define planning processes), per the national preparedness cycle presented in the National Preparedness Guidelines. Plans that define roles and responsibilities and that are to operationalize policies are categorized under the term “planning & resource allocation,” also per the national preparedness cycle presented in the National Preparedness Guidelines. Table 5 provides brief descriptions and the status of the policies that define planning processes for developing the emergency plans depicted in figure 7. Of the four policies presented in the table, two have been completed and two are partially completed. The range and relative relationships of the plans that define roles and responsibilities are illustrated in figure 8. Table 6, which follows figure 8, provides brief descriptions and the status of each of the plans that define roles and responsibilities depicted in figure 8. Of the 72 plans presented in the table, 20 have been completed, 3 have been partially completed, and 49 are incomplete. As noted in table 6, a schedule exists for the release of the myriad plans called for under HSPD 8 Annex 1 using the not yet published Integrated Planning System (IPS).
Figure 9 presents the schedule for the release of specific plans under IPS. Finally, figure 10 shows the combined universe of all the policies and plans in figures 7 and 8 in relation to each other. Figure 11, which follows, shows the status of development of each of the policies and plans. This appendix presents additional information regarding the Federal Emergency Management Agency’s (FEMA) progress and any remaining issues it faces in conducting a nationwide comprehensive assessment system. FEMA has identified various ongoing and historical assessment efforts that it plans to use to inform the development of the comprehensive assessment system. Additional information regarding these efforts is outlined below. State Preparedness Reports. While state preparedness reports assess capabilities within states, FEMA could not use information in the 2007 reports to compare capability gaps between states, because states did not report information using common metrics to assess capabilities and data were not always available to consistently complete the report. FEMA’s state preparedness guidance explains that states are to “use relevant metrics . . . from the Target Capabilities List when describing current capabilities.” However, FEMA has not developed a framework for states to use in reporting their current capabilities against the target capabilities because FEMA is in the process of (1) developing quantifiable metrics for the target capabilities and (2) revising the reporting format for state preparedness reports in order to base them on the target capabilities. As a result, the 2007 reports do not report state capabilities in a measurable way, or with the level of detail necessary for a comparison across states and territories. In addition, the six states we visited used different techniques to summarize their capabilities. 
In one location, a state homeland security task force held discussion groups to determine capability needs and the resources required. In another location, the state held a workshop attended by stakeholders from across the state to collect input on capability needs to complete the state preparedness report. Two states relied on information collected for their respective state homeland security strategies. A fifth state primarily used information it had gathered to prepare a grant funding reporting requirement, while officials at a sixth state collected information through site visits. FEMA headquarters officials explained that they intend to use the target capabilities as the framework for future state preparedness reports. National Incident Management System Compliance Assessment Support Tool (NIMSCAST). NIMSCAST is a Web-based tool to assess states’ and territories’ compliance with the NIMS, which is a standardized process by which emergency responders are to conduct integrated incident response operations, rather than a method for assessing capabilities. Assessing compliance with National Incident Management System requirements is one of the requirements for the comprehensive assessment system. In February 2009, FEMA indicated that it will use NIMSCAST to continue collecting data on NIMS compliance in addition to collecting capability and state preparedness report data through a survey it will distribute in 2009. FEMA noted that this effort will consolidate reporting requirements and fulfill Post-Katrina Act requirements for the comprehensive assessment system. In addition, FEMA noted that it will use data collected through NIMSCAST related to compliance with incident management processes and procedures to directly inform the assessment of two target capabilities: On-Site Incident Management and Emergency Operations Center Management.
However, it is unclear how FEMA will integrate this tool with other features of the comprehensive assessment system to assess the remaining 35 capabilities that will not be directly informed by NIMSCAST data related to compliance with incident management processes and procedures. Cost-to-Capability (C2C). The C2C is an initiative intended to help FEMA and localities better target and measure the results of using federal grant funds. As its organizing structure, the C2C uses the National Planning Scenarios, such as a hurricane, earthquake, improvised explosive device, or anthrax attack. Data for the C2C are to be based on self-assessments of capabilities from state preparedness plans, estimates of baseline capability, and the estimated relative capability improvement expected from a requested level of grant investment. To be used effectively and enable comparisons across jurisdictions in evaluating grant proposals, the state and local data for assessing state and local capabilities must be in the common language of target capabilities and have metrics that are compatible with C2C. These metrics are being developed by FEMA’s National Preparedness Directorate. However, grantee use of C2C will not be mandatory, and thus its ultimate value is yet to be determined. In developing the initiative, grant officials are considering ways to collect data from stakeholders with a minimal burden and integrate analyses resulting from the C2C into existing programs, plans, and procedures. National Preparedness System (NPS). The NPS was discontinued because it was time consuming and did not produce meaningful data. This Web-based management information system was designed to serve as an inventory tool to measure a jurisdiction’s ability to deliver elements of planning, organization, equipment, training, and exercises. Although it was developed in response to Homeland Security Presidential Directive 8’s preparedness requirements, in conjunction with the Target Capabilities List, and pilot tested in 10 states, the system was discontinued by the Department of Homeland Security because officials said it was too time consuming to use, according to FEMA officials. Because it was only piloted, FEMA officials explained that it did not generate meaningful preparedness information from the data collected. According to FEMA budget documentation required by the Office of Management and Budget for major information technology investments, FEMA spent nearly $15 million in total on the system for 2006, 2007, and 2008 before it was discontinued. Pilot Capability Assessment (PCA). The PCA was labor intensive and did not generate meaningful data. This assessment, based on the 37 target capabilities, was also intended to measure jurisdictions’ progress in achieving needed target capabilities. While it was developed in response to HSPD 8 preparedness requirements and in conjunction with the Target Capabilities List and pilot tested in six states, FEMA officials said it was too labor intensive. Because it was only piloted, FEMA did not generate meaningful preparedness information from the data collected, according to FEMA officials. Capability Assessment for Readiness (CAR). The CAR lacked controls for validating the accuracy of self-reported assessment data. This assessment was proposed as a one-time nationwide assessment of performance in areas such as planning and hazard management to assess a national set of emergency management performance criteria for FEMA grant recipients.
FEMA committed to preparing this assessment in hearings before the Senate Committee on Appropriations. The assessment was conducted once, in 1997, but concerns reported by the DHS Inspector General in March 2006 regarding self-reporting and the lack of controls for validating information reported by states limited the reliability and, therefore, the value of the data. In February 2004, we identified six desirable characteristics of an effective national strategy that would enable its implementers to effectively shape policies, programs, priorities, resource allocations, and standards and that would enable federal departments and other stakeholders to achieve the identified results. We further determined in that report that national strategies with the six characteristics can provide policy makers and implementing agencies with a planning tool that can help ensure accountability and more effective results. To develop these six desirable characteristics of an effective national strategy, we reviewed several sources of information. First, we gathered statutory requirements pertaining to national strategies, as well as legislative and executive branch guidance. We also consulted the Government Performance and Results Act of 1993, general literature on strategic planning and performance, and guidance from the Office of Management and Budget on the President’s Management Agenda. In addition, among other things, we studied past reports and testimonies for findings and recommendations pertaining to the desirable elements of a national strategy. Furthermore, we consulted widely within GAO to obtain updated information on strategic planning, integration across and between the government and its partners, implementation, and other related subjects. We developed these six desirable characteristics based on their underlying support in legislative or executive guidance and the frequency with which they were cited in other sources.
We then grouped similar items together in a logical sequence, from conception to implementation. The following sections provide more detail on the six desirable characteristics. Purpose, scope, and methodology: This characteristic addresses why the strategy was produced, the scope of its coverage, and the process by which it was developed. For example, a strategy should discuss the specific impetus that led to its being written (or updated), such as statutory requirements, executive mandates, or other events like the global war on terrorism. Furthermore, a strategy would enhance clarity by including definitions of key, relevant terms. In addition to describing what it is meant to do and the major functions, mission areas, or activities it covers, a national strategy would ideally address its methodology. For example, a strategy should discuss the principles or theories that guided its development, the organizations or offices that drafted the document, or working groups that were consulted in its development. Problem definition and risk assessment: This characteristic addresses the particular national problems and threats at which the strategy is directed. Specifically, this means a detailed discussion or definition of the problems the strategy intends to address, their causes, and operating environment. In addition, this characteristic entails a risk assessment, including an analysis of the threats to and vulnerabilities of critical assets and operations. If the details of these analyses are classified or preliminary, an unclassified version of the strategy should at least include a broad description of the analyses and stress the importance of risk assessment to implementing parties. A discussion of the quality of data available regarding this characteristic, such as known constraints or deficiencies, would also be useful. 
Goals, subordinate objectives, activities, and performance measures: This characteristic addresses what the national strategy strives to achieve and the steps needed to garner those results, as well as the priorities, milestones, and performance measures to gauge results. At the highest level, this could be a description of an ideal end state, followed by a logical hierarchy of major goals, subordinate objectives, and specific activities to achieve results. In addition, it would be helpful if the strategy discussed the importance of implementing parties’ efforts to establish priorities, milestones, and performance measures that help ensure accountability. Ideally, a national strategy would set clear desired results and priorities, specific milestones, and outcome-related performance measures while giving implementing parties flexibility to pursue and achieve those results within a reasonable time frame. If significant limitations on performance measures exist, other parts of the strategy should address plans to obtain better data or measurements, such as national standards or indicators of preparedness. Resources, investments, and risk management: This characteristic addresses what the strategy will cost, the sources and types of resources and investments needed, and where those resources and investments should be targeted. Ideally, a strategy would also identify appropriate mechanisms to allocate resources. Furthermore, a national strategy should elaborate on the risk assessment mentioned earlier and give guidance to implementing parties to manage their resources and investments accordingly. It should also address the difficult, but critical, issues about who pays and how such efforts will be funded and sustained in the future. 
Furthermore, a strategy should include a discussion of the type of resources required, such as budgetary, human capital, information, information technology (IT), research and development (R&D), procurement of equipment, or contract services. A national strategy should also discuss linkages to other resource documents, such as federal agency budgets or human capital, IT, R&D, and acquisition strategies. Finally, a national strategy should also discuss in greater detail how risk management will aid implementing parties in prioritizing and allocating resources, including how this approach will create society-wide benefits and balance these with the cost to society. Related to this, a national strategy should discuss the economic principle of risk-adjusted return on resources. Organizational roles, responsibilities, and coordination: This characteristic addresses what organizations will implement the strategy, their roles and responsibilities, and mechanisms for coordinating their efforts. It helps to answer the question about who is in charge during times of crisis and during all phases of national preparedness: prevention, vulnerability reduction, and response and recovery. This characteristic entails identifying the specific federal departments, agencies, or offices involved, as well as the roles and responsibilities of the private sector. A strategy would ideally clarify implementing organizations’ relationships in terms of leading, supporting, and partnering. In addition, a strategy should describe the organizations that will provide the overall framework for accountability and oversight. Furthermore, a strategy should also identify specific processes for coordination and collaboration between sectors and organizations—and address how any conflicts would be resolved.
Integration and implementation: This characteristic addresses both how a national strategy relates to other strategies’ goals, objectives, and activities (horizontal integration)—and to subordinate levels of government and other organizations and their plans to implement the strategy (vertical integration). Similarly, related strategies should highlight their common or shared goals, subordinate objectives, and activities. In addition, a national strategy should address its relationship with relevant documents from implementing organizations, such as the strategic plans, annual performance plans, or the annual performance reports the Government Performance and Results Act requires of federal agencies. A strategy should also discuss, as appropriate, various strategies and plans produced by the state, local, or private sectors. A strategy also should provide guidance, such as the development of national standards, to link together more effectively the roles, responsibilities, and capabilities of the implementing parties. William O. Jenkins Jr., (202) 512-8757 or jenkinswo@gao.gov. In addition to the contact named above, Chris Keisling (Assistant Director), Neil Asaba (Analyst-in-Charge), Joel Aldape, Avrum Ashery, Tina Cheng, Brian Chung, Christine Davis, Lara Kaskie, Ron La Due Lake, Brian Lipman, David Lysy, Jan Montgomery, and Robert Robinson made key contributions to this report. Actions Taken to Implement the Post-Katrina Emergency Management Reform Act of 2006. GAO-09-59R. Washington, D.C.: November 21, 2008. Voluntary Organizations: FEMA Should More Fully Assess Organization's Mass Care Capabilities and Update the Red Cross Role in Catastrophic Events. GAO-08-823. Washington, D.C.: September 18, 2008. Emergency Preparedness: States Are Planning for Medical Surge, but Could Benefit from Shared Guidance for Allocating Scarce Medical Resources. GAO-08-668. Washington, D.C.: June 13, 2008.
Emergency Management: Observations on DHS's Preparedness for Catastrophic Disasters. GAO-08-868T. Washington, D.C.: June 11, 2008. National Response Framework: FEMA Needs Policies and Procedures to Better Integrate Non-Federal Stakeholders in the Revision Process. GAO-08-768. Washington, D.C.: June 11, 2008. Homeland Security: DHS Improved its Risk-Based Grant Programs' Allocation and Management Methods, But Measuring Programs' Impact on National Capabilities Remains a Challenge. GAO-08-488T. Washington, D.C.: March 11, 2008. National Disaster Response: FEMA Should Take Action to Improve Capacity and Coordination between Government and Voluntary Sectors. GAO-08-369. Washington, D.C.: February 27, 2008. Continuity of Operations: Selected Agencies Tested Various Capabilities during 2006 Governmentwide Exercise. GAO-08-185. Washington, D.C.: November 19, 2007. Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007. Homeland Security: Preparing for and Responding to Disasters. GAO-07-395T. Washington, D.C.: March 9, 2007. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation's Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006. GAO's Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006. Homeland Security: DHS' Efforts to Enhance First Responders' All-Hazards Capabilities Continue to Evolve. GAO-05-652. Washington, D.C.: July 11, 2005. Results-Oriented Government: Improvements to DHS's Planning Process Would Enhance Usefulness and Accountability.
GAO-05-300. Washington, D.C.: March 31, 2005. Homeland Security: Agency Plans, Implementation, and Challenges Regarding the National Strategy for Homeland Security. GAO-05-33. Washington, D.C.: January 14, 2005. Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: January 14, 2005. Homeland Security: Federal Leadership and Intergovernmental Cooperation Required to Achieve First Responder Interoperable Communications. GAO-04-740. Washington, D.C.: July 20, 2004.
Hurricane Katrina was the most destructive disaster in our nation's history and it highlighted gaps in preparedness for a catastrophic disaster. The Federal Emergency Management Agency (FEMA), a component within the Department of Homeland Security (DHS), is the lead federal agency responsible for developing a national preparedness system. The system includes policies and plans as well as exercises and assessments of capabilities across many public and private entities. GAO was asked to assess the extent to which FEMA has (1) developed policies and plans that define roles and responsibilities; (2) implemented the National Exercise Program, a key tool for examining preparedness; (3) developed a national capabilities assessment; and (4) developed a strategic plan that integrates these elements of the preparedness system. GAO analyzed program documents, such as after-action reports, and visited six states located in disaster regions. While the results of these visits are not generalizable, they show how select states carry out their efforts. While most policies (41 of 50) that define roles and responsibilities have been completed, such as the National Response Framework, 68 percent (49 of 72) of the plans to implement these policies, including several for catastrophic incidents, are not yet complete. As a result, the roles and responsibilities of key officials involved in responding to a catastrophe have not been fully defined and, thus, cannot be tested in exercises. The lack of clarity in response roles and responsibilities among the diverse set of responders contributed to the disjointed response to Hurricane Katrina and highlighted the need for clear, integrated disaster preparedness and response policies and plans. Although best practices for program management call for a plan that includes key tasks and their target completion dates, FEMA does not have such a plan. 
With such a plan, FEMA would be better positioned to ensure that the policies and plans are completed and integrated with each other as intended as well as with other elements of the preparedness system. Since 2007, FEMA has taken actions to implement the National Exercise Program at the federal and state levels by developing, among other things, program guidance and systems to track corrective actions; however, FEMA faces challenges in ensuring that the exercises are carried out consistent with program guidance. For example, the Homeland Security Council (an interagency entity responsible for coordinating homeland security policy) and state participants did not systematically track whether corrective actions had been taken to address deficiencies identified by exercises as called for by program guidance. As a result, FEMA lacks reasonable assurance that entities have taken actions aimed at improving preparedness. FEMA has made progress in developing a system for assessing national preparedness capabilities by, among other things, establishing reporting guidance for state preparedness, but it faces challenges in completing the system and required reports to assess preparedness. While FEMA has developed a project management plan for the new system, the plan does not fully identify milestones and program risks for developing quantifiable metrics necessary for measuring preparedness capabilities. A more complete project plan that identifies milestones and program risks would provide FEMA with greater assurance that it can produce a system to assess capabilities and inform decisions related to improving national preparedness. FEMA's strategic plan for fiscal years 2008-2013 recognizes that each of its components needs to develop its own strategic plan that integrates the elements of national preparedness. FEMA's National Preparedness Directorate has yet to develop its strategic plan, but instead plans to use a draft annual operating plan to guide its efforts.
This plan does not include all elements of a strategic plan, such as how the directorate will integrate the various elements of the system over time to improve national preparedness. Having a strategic plan would provide FEMA with a roadmap for addressing the complex task of guiding and building a national preparedness system.
Enacted on February 17, 2009, to jump-start the economy and encourage long-term economic growth, the Recovery Act makes more than $780 billion available in supplemental appropriated funds to eligible state, local, and sometimes private recipients. These funds are intended to create and save jobs, spur economic activity, and promote high levels of accountability and transparency in government spending, among other things. We reported that as of September 23, 2009, the Department of the Treasury had outlayed about $48 billion of the estimated $49 billion in Recovery Act funds projected for use in states and localities in federal fiscal year 2009, which ran through September 30, 2009. To ensure that Recovery Act funds supplement rather than replace other spending, the Recovery Act contains requirements that the federal funds not be substituted for state, local, and private support for some aided programs. State and local governments are to be held accountable for how the Recovery Act funds are used to support those programs, and the federal agencies that oversee the programs will be responsible for reviewing states’ compliance with the requirements. These spending requirements include the following:

Maintenance of effort. This requirement prohibits recipients from replacing their own spending with federal dollars. In particular, a maintenance of effort provision requires a state or its agency to maintain certain levels of state spending for a certain program.

Supplement-not-supplant. This requirement does not hold recipients responsible for maintaining their level of effort in supporting a program, but it does require that funds provided for certain programs serve only to supplement expenditures from other federal, state, or local sources or from funds independently generated by the recipient.

But-for test. This requirement ensures appropriate use of Recovery Act funds by requiring recipients to explain how a certain project would not have been implemented during the grant period without the federal grant. This requirement is described as the “but-for test” because, but for the funds, the project would not be supported.

Requirements for the programs that are subject to Recovery Act provisions designed to guard against the substitution of federal funds for state funds vary by responsible agency. In general, the supplement-not-supplant requirements for HUD and the “but-for test” for the Department of Commerce are different from the maintenance of effort requirements for the Departments of Education and Transportation. However, only recipients of funds administered by the Department of Education can seek a waiver from the maintenance of effort requirements. (See table 2.) The federal agencies responsible for these programs have issued guidance to recipients on how to implement the maintenance of effort or similar provision requirements. In addition, the Department of Transportation (DOT) continues to issue further guidance to clarify some requirements. To determine whether recipients comply with maintenance of effort and similar provisions, agencies are finalizing state certifications, reviewing applications, and developing plans to review recipients’ compliance with the provisions. However, some agencies and states face challenges in implementing these provisions. For example, the Department of Commerce’s review of applications to ensure that proposed projects would not be feasible without federal funding has been delayed by scheduling and staffing challenges. In addition, officials from several state departments of transportation told us that while they plan to meet their maintenance of effort requirements, decreasing state revenues and budgets pose a challenge to doing so. 
DOT maintenance of effort provision: The Recovery Act provided about $43.9 billion for highway, transit, and rail projects. This funding is administered through DOT’s operating administrations—the Federal Highway Administration (FHWA), Federal Transit Administration (FTA), and Federal Railroad Administration (FRA). To be eligible for these funds, the Recovery Act specifies that the governor of each state must certify that the state will maintain its current level of highway, transit, and rail spending, among other things. The certification must include a statement of the amount of funds the state plans to spend from state sources from the date of enactment—February 17, 2009—through September 30, 2010, for the types of projects that are funded by that appropriation. The Recovery Act required that the governor of each state submit this certification no later than 30 days after enactment, or March 19, 2009. The Recovery Act does not provide any waivers or exemptions for the states—for changes in economic conditions, for example—from the maintenance of effort provision. The consequence for a state of not maintaining the certified level of effort is that the state will be prohibited from participating in the redistribution of federal-aid highway obligation authority that will occur after August 1, 2011. According to a DOT official, the department has not made a decision as to whether the Recovery Act requires states to maintain a total level of effort for covered programs or to maintain their level of effort for each covered program. For example, a state might not maintain its certified level of effort for transit but might exceed its certified level of effort for highways, thereby equaling or exceeding its total certified level for transportation. How this provision is interpreted has significance for state flexibility in meeting maintenance of effort requirements and for decisions about whether states will be eligible for redistributed federal-aid highway obligations. 
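The interpretation question described above can be made concrete. The sketch below contrasts the two possible readings with hypothetical figures; the function names and dollar amounts are illustrative only, since DOT had not yet decided which test applies:

```python
def meets_moe_per_program(certified, actual):
    """Strict reading: every covered program must individually meet its certified level."""
    return all(actual[program] >= certified[program] for program in certified)

def meets_moe_aggregate(certified, actual):
    """Flexible reading: total spending across covered programs must meet the certified total."""
    return sum(actual.values()) >= sum(certified.values())

# Hypothetical state (amounts in $ millions): falls short on transit,
# exceeds on highways, and still meets its total certified level overall.
certified = {"highway": 800, "transit": 150, "rail": 50}
actual    = {"highway": 900, "transit": 100, "rail": 50}

print(meets_moe_per_program(certified, actual))  # False
print(meets_moe_aggregate(certified, actual))    # True
```

Under the per-program reading this state would lose eligibility for redistributed federal-aid highway obligations; under the aggregate reading it would not, which is why the decision matters to states.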
According to this DOT official, DOT plans to make a decision on this issue by the end of calendar year 2009.

DOT guidance to states: Ten days after the enactment of the Recovery Act on February 17, 2009, DOT issued guidance to the states on the FHWA, FTA, and FRA programs, among others, with maintenance of effort provisions. This guidance included the principal requirements for a governor’s certification that a state will maintain its highway, transit, and rail funding efforts, among others. Specifically, this guidance included a sample form that states could complete to satisfy the Recovery Act’s certification requirement. In March 2009, as required by the Recovery Act, all states submitted their certifications; however, many states submitted explanatory certifications—such as a statement that the certification was based on “the best information available at the time”—or conditional certifications, indicating that the certification was subject to conditions or assumptions, future legislative action, future revenues, or other conditions. In response, on April 22, 2009, DOT issued guidance requiring such states to correct those problems through recertification. All states that submitted conditional certifications submitted a second maintenance of effort certification to DOT without conditions. Since April, DOT has issued supplemental guidance to reduce the variations in how states calculate their maintenance of effort certifications. This additional guidance is as follows: On May 13, 2009, DOT issued guidance in response to questions asked by state representatives during a conference call. The majority of this guidance addresses the types of expenditures states should include in their maintenance of effort calculations. For example, states should include in-kind contributions from state sources in the planned amount of the expenditures. 
In June and July 2009, FHWA posted several sets of frequently asked questions to continue to provide states with information on the types of expenditures to include in their maintenance of effort calculations—and therefore reduce the variation in how states calculated their maintenance of effort certifications. For example, states should include planned expenditures from state sources regardless of which agency or political subdivision in the state is responsible for overseeing the expenditure of those funds. However, the maintenance of effort calculation does not include any locally generated funds (i.e., funds produced by local taxes). In September 2009, DOT issued guidance that requires states to include grants-in-aid to local governments as part of their maintenance of effort calculation, which states generally did not count in their previous calculations. This guidance will require some (if not many) states to complete another certification. Of the 17 departments of transportation we spoke with, officials from 13 stated that they had received timely guidance from DOT on maintenance of effort certification and that DOT has generally been responsive to their questions. For example, Mississippi transportation officials told us that they had spoken and met with DOT officials regularly since the enactment of the Recovery Act to discuss Mississippi’s maintenance of effort certification.

DOT plans for determining compliance with maintenance of effort provision: DOT continues to work with state governments to finalize their maintenance of effort certifications. As we reported in September 2009, DOT has concluded that the form of the revised state certifications is consistent with its April 22, 2009, guidance, but it is currently evaluating whether the states’ method of calculating the amounts they planned to expend for the covered programs is in compliance with DOT guidance. As of November 30, 2009, FHWA, FTA, and FRA had reached different stages in their reviews. 
In June 2009, FHWA began to review each state’s maintenance of effort calculation to ensure that the state included the correct planned expenditures for highway investment. For example, FHWA division offices evaluated, among other things, whether the amount certified (1) covered the period from February 17, 2009, through September 30, 2010, and (2) included in-kind contributions, as required. FHWA division staff then determined whether the state certification needed (1) no further action, (2) further assessment, or (3) additional information. In addition, according to FHWA officials, their assessments indicated that FHWA needed to clarify the types of projects funded by the appropriations and the types of state expenditures that should be included in the maintenance of effort certifications. As a result of these findings, DOT issued the June, July, and September 2009 guidance and plans to issue additional guidance on these issues. Our review of FHWA division assessments for the 16 states and the District of Columbia included in this study showed that 6 states needed further assessment. In August 2009, FHWA staff in headquarters reviewed the FHWA division staff findings for each state and proceeded to work with each FHWA division office to make sure their states submit revised certifications that will include the correct planned expenditures for highway investment—including aid to local agencies. FHWA officials said that of the 16 states and District of Columbia that we reviewed for this study, they currently expect to have 12 states submit revised certifications for state highway spending, while an additional 2 states are currently under review and may have to revise their certifications. DOT officials stated that they have not determined when they will require the states to submit their revised consolidated certification. 
According to these officials, DOT wants to ensure that states have enough guidance to complete their maintenance of effort assessments for all programs with Recovery Act maintenance of effort provisions, so that this is the last time states have to amend their certifications. FTA officials told us the agency plans to review each state’s maintenance of effort calculation to ensure that states included the correct planned expenditures for transit projects covered under the Recovery Act. According to FTA officials, FTA has begun this review, but it is not complete. In October 2009, FTA officials compared each state’s certified transit maintenance of effort with the state funding levels in that state’s plans, specifically the Statewide Transportation Improvement Program (STIP) and Transportation Improvement Program (TIP). FTA found discrepancies between states’ transit maintenance of effort certifications and their STIPs and TIPs, and determined that these state plans did not provide the best mechanism for comparison because it was unclear what types of expenditures were included in the states’ STIP and TIP funding numbers. According to FTA officials, they will work directly with these states to determine the methodology the states used to calculate their transit maintenance of effort amount and, subsequently, decide whether amended certifications are needed. FTA officials have not established a timeline for completing these reviews. FRA officials told us that the agency plans to review states’ maintenance of effort calculations to ensure that states included the correct planned expenditures for rail projects covered under the Recovery Act. However, the officials said they are still determining the logistics and timeline for this process. 
FRA received certifications from only 12 states that planned to spend state funds on rail projects, whereas FHWA received certifications from 50 states and the District of Columbia that planned to spend state funds on highway projects and FTA received certifications from 38 states and the District of Columbia that planned to spend state funds on transit projects. However, FRA plans to work with other states to determine whether they should have certified that they planned to spend state funds for rail projects. FRA officials said they expect to complete their review by February 2010. FHWA has begun to monitor states’ compliance with their certifications, while FTA and FRA are developing monitoring plans. As of September 2009, FHWA was tracking every state’s spending of state funds for the kinds of projects funded under the Highway Infrastructure Investment appropriation, while FTA and FRA were determining how they would track state spending on covered transit and rail projects. Many of the state departments of transportation we spoke to told us that they are tracking their state expenditures on a monthly basis to determine if their maintenance of effort requirements are being met; however, most said they do not expect to determine whether they met their maintenance of effort levels until sometime between September and October of 2010. Following are examples illustrating these points: FHWA officials stated that FHWA has been using information from Recovery Act reporting requirements to get a sense of whether states are on track to meet their highway certifications. Ninety days after the enactment of the Recovery Act, states were required to report the amounts outlayed under each covered program. Then, states submitted an update to this report 180 days after enactment and are required to submit additional reports 1 year, 2 years, and 3 years after the date of enactment. These reports track the actual aggregate expenditures by each state, among other things. 
Using the 180-day report, FHWA has been tracking each state’s certified highway maintenance of effort levels against its reported actual expenditures. According to FHWA officials, this exercise provides FHWA with an estimate of each state’s rate of spending on highway investment and has allowed the agency to identify states that appeared to have abnormally high or low spending rates. FHWA officials have worked with such states to understand whether the reasons are acceptable. For example, from this tracking, FHWA officials were able to determine that California’s spending rate on highway investment appeared to be much higher than would have been expected based on the percentage of the maintenance of effort time period that had elapsed. Upon further investigation, including discussions with California, FHWA determined that the state’s rate was higher because improvements in the bond market had allowed California to issue bonds it had not planned to issue as of the February 17, 2009, maintenance of effort calculation date. The expenditure of these bond proceeds on projects caused a higher expenditure rate than expected. FHWA concluded that California’s explanation of its post-February 17, 2009, decision to issue the bonds was acceptable and that it explained the state’s relatively higher spending rate. In addition to using the Recovery Act reports, FHWA officials stated that FHWA division staff will continue to work closely with states to understand spending rates on highway investment and help states address any potential problems states might have in complying with their certified highway spending levels. FHWA officials stated they will not be able to make a final determination as to whether states have fully complied with their highway maintenance of effort levels until after the maintenance of effort period concludes on September 30, 2010. 
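FHWA’s comparison of certified levels against reported actual expenditures amounts to a simple pace check. A minimal sketch of that logic, assuming a straight-line spending pace over the maintenance of effort period and a hypothetical 25 percent tolerance (the function name, figures, and threshold are illustrative, not FHWA’s actual method):

```python
from datetime import date

# Maintenance of effort period specified for the Recovery Act certifications.
MOE_START, MOE_END = date(2009, 2, 17), date(2010, 9, 30)

def spending_rate_flag(certified_total, actual_to_date, as_of, tolerance=0.25):
    """Flag a state whose spending pace deviates markedly from a straight-line pace.

    Expected spending is the certified total prorated by the fraction of the
    maintenance of effort period elapsed as of `as_of`. The 25% tolerance is
    a hypothetical threshold for illustration.
    """
    elapsed = (as_of - MOE_START).days / (MOE_END - MOE_START).days
    expected = certified_total * elapsed
    if actual_to_date > expected * (1 + tolerance):
        return "abnormally high"
    if actual_to_date < expected * (1 - tolerance):
        return "abnormally low"
    return "on track"

# Hypothetical state: certified $1,000M; by the 180-day report (mid-August 2009)
# about 30 percent of the period has elapsed, so ~$305M is the straight-line pace.
print(spending_rate_flag(1000, 450, date(2009, 8, 16)))  # abnormally high
```

A state flagged this way, like California in the example above, would then be asked to explain the deviation before any conclusion is drawn about compliance.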
FTA and FRA officials told us they do not know when they will begin to determine states’ compliance with their transit and rail maintenance of effort certifications. According to FTA and FRA officials, to determine state compliance, they first need to assess each state’s transit and rail maintenance of effort certifications. FTA and FRA officials told us that by September 30, 2010, they will work with each state to determine its spending on eligible transit and rail projects and thus determine if each state has complied with its transit and rail certifications.

State challenges in meeting DOT maintenance of effort requirement: Most states we spoke with are committed to trying to meet their maintenance of effort requirements, but some states are concerned about meeting the requirements. As we have previously reported, states face drastic fiscal challenges, and most states are currently estimating that their fiscal year 2009 and 2010 revenue collections will be well below previously estimated amounts. In the face of these challenges, some states told us that meeting the maintenance of effort requirements over time poses significant challenges. In addition, according to the DOT Deputy Assistant Secretary for Transportation Policy, the department recognizes that many states may not be able to maintain the level of effort specified in their certifications, given the continuing decline in their economies. If a state is not able to maintain its certified level of effort, it will not be allowed to participate in the redistribution of federal-aid highway obligations that will occur after August 1, 2011. In August 2008, states received about $1.2 billion through the federal-aid highway redistribution. By way of context, this sum represents about 5 percent of the nearly $27 billion states received through the Recovery Act, or about 3 percent of the roughly $35 billion states receive annually through the regular Federal Aid Highway Program. 
However, of the 17 departments of transportation we spoke with, officials from 15 stated that this prohibition on participating in the fiscal year 2011 redistribution provides an incentive for their state to meet its certified maintenance of effort level. For example, Ohio officials stated they have received an average of $43 million in redistributed obligation authority over the past 3 years, and they intend to meet the maintenance of effort levels and receive additional funding. In addition, according to Georgia officials, the potential addition of $40 million in redistributed funds is an incentive for the state to meet its requirements. Although the states we spoke with are committed to trying to meet the maintenance of effort requirements, 7 state departments of transportation told us the current decline in state revenues creates major challenges in doing so. For example, Iowa, North Carolina, and Pennsylvania transportation officials said that a decline in state gas tax and other revenues, used for state and state-funded local highway projects, may make it more difficult for them to maintain their levels of transportation spending. In addition, Georgia officials stated that the current decline in the state’s gas tax revenues is a challenge to meeting its certified level of effort. Lastly, Mississippi and Ohio transportation officials stated that if their state legislatures reduce their respective department’s budget for fiscal year 2010 or 2011, the department may have difficulty maintaining its certified spending levels.

Education maintenance of effort provision: The Recovery Act created the State Fiscal Stabilization Fund (SFSF), which included approximately $48.6 billion to award to governors by formula and another $5 billion to award to states or school districts as competitive grants. 
The Recovery Act requires that each state meet maintenance of effort requirements for elementary and secondary (K-12) education and public institutions of higher education (IHE) as a condition of receiving SFSF funds. The Department of Education (Education) required governors in their SFSF application to provide assurances that their state will meet maintenance of effort requirements or that it will be able to comply with waiver provisions. Specifically, in order to meet maintenance of effort requirements, a state must maintain state support for K-12 education and IHEs at least at fiscal year 2006 levels in fiscal years 2009, 2010, and 2011. After maintaining state support at no less than fiscal year 2006 levels, states must use education stabilization funds to restore state funding to the greater of fiscal year 2008 or 2009 levels for state support to K-12 school districts and IHEs in fiscal years 2009 through 2011.

Education guidance to states: Education disseminated several guidance documents to states in the spring and summer of 2009 to assist them in defining their maintenance of effort amounts. In determining, for maintenance of effort purposes, the state level of support for K-12 education in fiscal year 2006, Education guidance said states must include funding provided through their primary formulas for distributing funds to school districts. However, Education also allowed states some flexibility in choosing the basis they use to measure maintenance of effort, as well as in what they include or exclude in their maintenance of effort definition. For example, state support for education can be measured on the basis of either aggregate or per-pupil expenditures. Measuring on a per-pupil basis gives more flexibility to states with forecasts of declining student enrollment because they can reduce aggregate state support for education but still meet maintenance of effort requirements on a per-pupil basis. 
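The per-pupil flexibility described above is arithmetic: declining enrollment shrinks the denominator of the per-pupil comparison against the fiscal year 2006 baseline. A worked sketch with hypothetical figures (the function and numbers are illustrative, not Education’s methodology):

```python
def meets_moe(baseline_support, support, baseline_pupils=None, pupils=None, per_pupil=False):
    """Check state support against the fiscal year 2006 baseline on either basis."""
    if per_pupil:
        # Per-pupil basis: compare support per enrolled pupil in each year.
        return support / pupils >= baseline_support / baseline_pupils
    # Aggregate basis: compare total support directly.
    return support >= baseline_support

# Hypothetical state: $10.0B for 1,000,000 pupils in FY2006; $9.8B for
# 950,000 pupils in FY2009 (enrollment fell faster than support did).
fy06_support, fy06_pupils = 10_000, 1_000_000   # support in $ millions
fy09_support, fy09_pupils = 9_800, 950_000

print(meets_moe(fy06_support, fy09_support))  # False: aggregate support fell
print(meets_moe(fy06_support, fy09_support, fy06_pupils, fy09_pupils, per_pupil=True))  # True
```

Here per-pupil support rises from $10,000 to about $10,316 even though aggregate support falls by $200 million, so the state meets the requirement only on the per-pupil basis.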
Also, states have the flexibility to include or exclude additional state funding such as state appropriations to local governments that support K-12 education or other support that is not provided through primary funding formulas. By not including education spending beyond funding distributed through primary funding formulas in their definitions of maintenance of effort, states maintain flexibility to reduce expenditures on other categories of education spending and not affect their ability to comply with the maintenance of effort requirement. For IHEs, states have some discretion in how they establish the state level of support, with the provision that they cannot include support for capital projects, research and development, or amounts paid in tuition and fees by students. If states fail to meet the maintenance of effort requirements for K-12 education or IHEs, Education’s guidance directed states to certify that they will meet requirements for receiving a waiver—that is, that total state revenues used to support education would not decrease relative to total state revenues. Because the measure used to determine eligibility for a waiver from maintenance of effort requirements—state revenues used to support education—can be defined differently from the maintenance of effort measure—state support for education—states may have to track both measures to make sure they can meet their assurances. States that need a waiver are directed to submit a separate waiver application to Education. While states generally are required to maintain state spending at or above fiscal year 2006 levels of state support for education, we found that five states and the District of Columbia reported in the approved applications we reviewed that they would maintain state support above that level in fiscal years 2009 and 2010. 
This gives them flexibility to reduce state support in fiscal year 2010 to an amount below the fiscal year 2009 level or in fiscal year 2011 below the fiscal year 2009 or 2010 level and still meet the maintenance of effort requirement. Arizona, for example, reported it would maintain state support in 2009 at about $500 million above the fiscal year 2006 levels. Because Florida reported it could not meet maintenance of effort requirements in fiscal year 2009, the state has applied for a waiver. While New York did not provide estimates of state support for fiscal year 2009, 2010, or 2011 in its application, its governor provided an assurance that the state would maintain state support for education at or above its fiscal year 2006 maintenance of effort level. Table 3 shows, for the states we reviewed and the District of Columbia, the level of state support for elementary and secondary education as required by the state’s maintenance of effort calculation. Specifically, the table provides the fiscal year 2006 maintenance of effort level for the states we reviewed and the anticipated amount of state support for education for fiscal years 2009 and 2010 included by states in their application for SFSF. Most of the states we reviewed reported additional education spending beyond what was included in maintenance of effort requirements. For example, North Carolina officials told us that in fiscal year 2009 the state spent about $2 billion on K-12 education programs above its state support for K-12 education based on maintenance of effort calculations. Since these funds do not count as state education support for maintenance of effort determinations, states can reduce these funds without affecting their compliance with SFSF maintenance of effort requirements. However, these other funds would be factored into a state’s revenues to support education as a percentage of total state revenues if the state needs to request a waiver from maintenance of effort requirements. 
Education officials reported they have already received several revised SFSF applications and they expect that the majority of all states will resubmit their SFSF application because most states used their governor’s budget proposal, as allowed, in their original application, which often differs from final enacted spending levels. While every state, as part of its initial application for SFSF, had to assure it would either meet the maintenance of effort levels or waiver requirements, Education directed states to amend their SFSF applications to reflect any final budget changes and, in the amended applications, provide a final assurance that they will meet maintenance of effort levels. Specifically, according to Education guidance, a state must amend its SFSF application if there are changes to the reported levels of state support for education that were used to determine the maintenance of effort amount or to calculate the amounts needed to restore state support for education to the fiscal year 2008 or 2009 level. Education officials reported they are continually reviewing the resubmissions to ensure they contain the required assurances from the governor and comply with other requirements. Of the 6 states and the District of Columbia we reviewed, 6 have either resubmitted or plan to resubmit their SFSF application because their level of support for fiscal year 2006 or 2009 had changed. Two states we reviewed have lowered their calculated fiscal year 2006 level of education support for maintenance of effort purposes. For example, North Carolina officials told us they revised their fiscal year 2006 level of support for maintenance of effort determination from nearly $7 billion down to about $5.3 billion, based on guidance from Education, to reflect a change made in the definition of the state’s primary funding formula in the state’s fiscal year 2010 budget legislation, so that the state has comparable measures of support in both years. 
California amended its application in May 2009 because the state had originally included about $2 billion in one-time funds that were actually appropriated in fiscal year 2007 and reduced its maintenance of effort level of support for fiscal year 2006 by this amount. California amended its application again in August 2009 to change its maintenance of effort level from an aggregate measure to a per-pupil basis; neither the resubmitted application nor California officials explained why this change was made. Education officials told us they are allowing states to revise their fiscal year 2006 maintenance of effort support levels and will review them to see that they are in compliance with Recovery Act requirements and Education’s guidance. However, current guidance from Education does not direct states to include an explanation for changes made to state fiscal year 2006 maintenance of effort support levels and calculations in their resubmitted application. Rather, states are directed to provide information about what is included in their measures of state support for education. Consequently, revised applications report maintenance of effort support levels and provide information about how states are defining state support, but it may not be readily apparent what funds have been added or removed from one application to the next. For example, California’s August revision shows that maintenance of effort is defined on a per-pupil basis, but there is no explanation of why the state changed this basis or how it compares to the previous maintenance of effort measure. Education officials said adjustments are being made to fiscal year 2006 maintenance of effort levels because, as state fiscal year 2009 budgets become final, states are attempting to develop equivalent information for both their fiscal year 2006 levels of support calculation and their calculations for fiscal year 2009. 
Also, according to Education officials, states were initially unsure of precisely what information to include in their maintenance of effort calculations because SFSF is a new program, and, now, given more time, they are making adjustments to their maintenance of effort calculations. Education officials told us that once states submit their final audited fiscal year 2009 figures, they will not be allowed to change their fiscal year 2006 maintenance of effort calculations again. Education officials told us that four states—Florida, New Jersey, Rhode Island, and South Carolina—have requested maintenance of effort waivers for fiscal year 2009. Florida has requested that Education waive maintenance of effort requirements for elementary and secondary education, and New Jersey has requested that Education waive maintenance of effort requirements for public IHEs. Education officials told us states will get final waiver approval in the form of a written letter of approval after the states submit final maintenance of effort amounts to Education. Education officials also told us they will work closely with states on a case-by-case basis to ensure that the information submitted complies with the waiver criteria under the Recovery Act. While four of the six states and the District of Columbia we reviewed reported maintaining state support for education above the required fiscal year 2006 maintenance of effort level, a recent Alert Memorandum by Education’s Inspector General shows that some states have lowered state support for education while continuing to meet maintenance of effort requirements. The report noted that Education agreed with this finding and took steps to discourage states from reducing such support. 
For example, in the proposed application requirements for the Race to the Top program—a competitive grant program under the Recovery Act providing up to $4.35 billion in funding to states for education reform efforts—Education said that, in making award determinations, it would take into consideration whether states reduced their percentage of total revenues used to support public education for fiscal year 2009 as compared to fiscal year 2008. The Education Inspector General recommended that Education implement a process to track state support for elementary and secondary education, as well as for public IHEs, to determine the extent to which state funding of public education is being reduced. Education plans for determining compliance with maintenance of effort provision: Education has begun to draft a monitoring plan to oversee and enforce state compliance with maintenance of effort requirements under SFSF. Because SFSF is a new program established under the Recovery Act, Education has yet to finalize monitoring plans and processes. Education officials said they are developing an approach to monitor SFSF maintenance of effort that will include site visits to states to review state documentation of compliance with maintenance of effort requirements. In the interim, Education officials said they are taking several steps both to monitor information they are receiving from states and to provide technical assistance to states. For example, according to Education officials, prior to approving SFSF awards, Education reviewed each state’s application to ensure the state complied with statutory requirements to receive the funds. Education has not yet released guidance to states on the information states need to collect to prove they have met their required maintenance of effort level. Education officials told us that once the monitoring plan is finalized, the guidance will be released to states. 
However, previously released guidance to states on maintenance of effort instructed that states must maintain adequate documentation that substantiates the levels of state support the state has used in making maintenance of effort calculations. Officials in most states we reviewed for this report told us they plan to document that the state met its maintenance of effort requirements through its state budget and accounting procedures. They said these data would be available when accounting for fiscal year 2009 is closed or finalized. Education has authority under the General Education Provisions Act to take various actions against states that fail to meet the maintenance of effort requirements—even in future years. For example, Education could recover funds if a state is found to be out of compliance with the maintenance of effort requirements. However, Education officials told us they have been working closely with states in an effort to ensure that no state is out of compliance with the maintenance of effort provisions. HUD supplement-not-supplant provision: The Recovery Act provided $4 billion for the Public Housing Capital Fund, a program administered by HUD for the capital and management activities of public housing agencies—$3 billion to be allocated by formula and $1 billion to be awarded by competition. HUD allocated nearly $3 billion to public housing agencies using the same formula for amounts made available in fiscal year 2008 and obligated these funds to housing agencies in March 2009. Then, in September 2009, HUD awarded nearly $1 billion to public housing agencies based on competition for priority investments, including investments that leverage private sector funding or financing for renovations and energy conservation retrofitting. The Recovery Act requires that these funds be used to supplement and not supplant expenditures from other federal, state, or local sources or funds independently generated by the grantees. 
In contrast to the DOT and Education programs that distribute Recovery Act funds to the states, the Public Housing Capital Fund distributes grants directly to public housing agencies. As a result, the Recovery Act does not have state certification, waiver, or noncompliance provisions as part of the Public Housing Capital Fund’s supplement-not-supplant provision. HUD information to housing agencies: Public housing agencies were to sign an amendment to their annual contributions contracts (ACC), which includes a supplement-not-supplant provision, in order to receive the Recovery Act formula funds. All but 13 of the 3,134 housing agencies offered formula grants under the Recovery Act signed their ACC amendments, enabling HUD to obligate the formula grant funds to them. HUD provided information to housing agencies through a notice and questions included in two sets of frequently asked questions to clarify the supplement-not-supplant provision in the Recovery Act. According to this information, public housing agencies with Public Housing Capital Fund formula grants are to avoid using Recovery Act funds to supplant funds from other sources that have already been obligated when, for example, an agency is accelerating or expanding a project that is already under way. One HUD official stated that the distinction between funds that have already been obligated and funds that have not yet been obligated should be clear to housing agencies. If they had already obligated non-Recovery Act funds for a project, they could not replace those funds with Recovery Act funds. In addition, the applications for competitive grants included a certification by the housing agencies that they would not use Recovery Act grant funds to supplant other federal, state, or local funds, including tax credit equity, loans, or other nonpublic housing funds. 
The notice of funding availability also instructed applicants to provide sufficient detail in their project description about how they planned to ensure that Public Housing Capital Funds received as competitive grants would not supplant funds from other sources. In order to receive the competitive grant funds, HUD also had public housing agencies sign a separate ACC amendment that included a supplement-not-supplant provision. HUD plans for determining compliance with supplement-not-supplant provision: HUD officials stated that monitoring compliance with the supplement-not-supplant provision was included in ongoing monitoring efforts for formula funds provided under the Recovery Act. Specifically, HUD is implementing strategies for monitoring all public housing agencies that received Capital Fund formula grants under the Recovery Act. HUD field staff are using checklists that contain questions about supplementing and not supplanting other sources of funds. These staff are conducting remote reviews (that is, reviews that do not involve visits to the agency) of all 3,121 housing agencies that received Recovery Act funds using these checklists, as well as on-site reviews of 172 housing agencies designated as troubled performers and of 533 nontroubled housing agencies identified through a risk-based strategy. Remote reviews are to focus on grant initiation activities, the annual statement, environmental compliance, procurement, and Recovery Act grant performance, including compliance with the supplement-not-supplant provision. Specifically, the remote review questions related to supplement-not-supplant bring attention to projects that use both Recovery Act funds and other funds and flag them for further review to ensure Recovery Act funds are supplementing the other funds. On-site reviews, which HUD teams conduct on the premises of housing agencies, are to include following up on outstanding items from the remote review. 
In addition, on-site reviews are to assess whether the housing agency is appropriately and effectively administering its Recovery Act Capital Fund grant. HUD officials stated that all remote reviews of troubled housing agencies have been completed, as have on-site reviews of troubled agencies deemed high risk and medium risk. On-site reviews of troubled agencies deemed low risk are ongoing and will be completed by December 31, 2009, according to HUD officials. HUD officials stated that remote and on-site reviews of nontroubled housing agencies are under way. They said the remote reviews will be completed by January 15, 2010, and the on-site reviews will be completed by February 15, 2010. The results of the reviews of both troubled and nontroubled housing agencies are to be evaluated and summarized in the coming months. In addition to these monitoring strategies, HUD officials pointed to other opportunities to oversee housing agencies’ compliance with the supplement-not-supplant requirement. For example, public housing agencies submitted annual statements outlining their planned uses of Recovery Act funds before being granted access to the funds, which HUD reviewed and approved. In addition, HUD officials told us that development projects are the types of projects that may rely on financing from multiple sources, increasing the risk that a portion of the financing might be supplanted by Recovery Act funds. However, housing agency plans that include funds for development activities trigger a special review by HUD staff, which requires additional levels of approval. As part of that review, the staff examine the plans for funding from outside the Capital Fund to ensure the housing agency is not using Recovery Act funds to supplant other funds. HUD’s Office of Inspector General is also conducting reviews of housing agencies’ capacity for administering Recovery Act funds. 
One recent report raised questions about whether one housing agency had used Recovery Act funds to supplant other funds. HUD officials who administer the Capital Fund stated they are investigating this case to make a separate determination. HUD officials said they are currently developing a strategy for monitoring the competitive grants that were awarded in September 2009. Monitoring compliance with the supplement-not-supplant provision will be part of that effort. According to HUD officials, in reviewing applications, HUD staff were to examine applicants’ plans for ensuring they would not supplant other funds. The monitoring strategy will follow up on the specific commitments each housing agency made in its application, including compliance with what each housing agency said it would do to ensure it was not supplanting other funds. HUD officials said they are currently reviewing the different projects to be funded by Capital Fund Recovery Competition grants to ensure that the appropriate HUD offices are involved in developing and implementing the monitoring strategy. HUD officials told us they will determine consequences for housing agencies found to be supplanting funds on a case-by-case basis. Possible consequences include recapturing funds, requiring reimbursement of Recovery Act funds from sources that were supplanted, and halting work on projects. Several housing agency officials noted that the potential consequences of failing to comply with the supplement-not-supplant provision were severe enough that they took care in selecting projects rather than risk being found in violation of the provision. Housing agency officials we spoke with at 27 agencies generally did not see supplanting as a major challenge for their housing agency and have not had trouble abiding by the requirement. 
Officials at several housing agencies noted that because they had many more projects that needed to be done than could be completed with only their regular Capital Fund grants, it was not difficult to identify projects that did not have any other funding. For example, the Boston Housing Authority selected some projects from the second year of its 5-year plan that could now be started earlier than previously planned. Officials from the Housing Authority of LaSalle County in Illinois stated that the Recovery Act funds allowed them to complete more projects from their 5-year plan in less time than they could have with regular Capital Fund dollars alone. In addition, some housing agency officials told us they were keeping track of their Recovery Act funds separately from their regular Capital Fund grants in order to make clear that the Recovery Act funds were not supplanting other funds that had already been obligated. Furthermore, Atlanta Housing Authority officials said they went so far as to closely examine their capital improvement plans and documents for 2008 and 2009 looking for evidence that they had previously planned to use other funds for any of the proposed Recovery Act projects. They found two projects they thought might raise questions and decided to pay for them with other funds. Other housing agency officials stated that annual statements and 5-year plans are reviewed multiple times—by the public, by the housing agency’s board, and by HUD—and that these layers of review serve as a check to ensure that supplanting does not occur. NTIA “but-for” provision: The Recovery Act provided $4.7 billion for the Broadband Technology Opportunities Program (BTOP), administered by the Department of Commerce’s National Telecommunications and Information Administration (NTIA). 
BTOP provides grants for infrastructure projects to support the deployment of broadband infrastructure to unserved and underserved areas, to enhance broadband capacity at public computer centers, and to encourage sustainable adoption of broadband service. To be eligible for a BTOP grant, an applicant must, among other things, pass the “but-for” test, meaning that the applicant must demonstrate that, but for federal assistance, the project would not have been implemented during the grant period. NTIA guidance to applicants: NTIA provided guidance to applicants on how to comply with this provision through their applications for BTOP. Applications and supporting documentation were due by August 20, 2009, for the first round of funding. NTIA’s Notice of Funds Availability (NOFA) for BTOP grants, issued on July 9, 2009, requires grant applicants to provide documentation demonstrating that the project would not have been implemented during the grant period without federal grant assistance. This documentation includes, but is not limited to, a denial of funding from a lending institution or the Rural Utilities Service (RUS), a current fiscal year budget that shows a lack of sufficient funding for the project, or a business case that shows the project’s viability depends on grant financing. In addition, the July 31, 2009, grant guidelines for BTOP reiterate that grantees must submit the above documentation to demonstrate that the project would not have been implemented during the grant period without federal assistance. Furthermore, NTIA and RUS held 10 informational workshops throughout the country to explain the program and the application process and to answer questions. At each of these events, NTIA highlighted the “but-for” requirement for attendees. 
Also, NTIA’s Web site includes a list of frequently asked questions about BTOP grants that does not address the “but-for” test; according to NTIA officials, this information does not appear because applicants did not frequently inquire about it. NTIA plans for determining compliance with “but-for” provision: NTIA originally planned to award the $4.7 billion in BTOP grant funding through three rounds of applications. However, the agency has combined the second and third rounds in order to expedite the process of awarding grants, as well as to give applicants and the agency additional time to prepare and review proposals for the second round. The agency has begun the second phase of a two-step rolling process for reviewing applications for its first round of funding; this second phase includes determining whether applicants have adequately documented that the project would not have been implemented without Recovery Act funds. In the first step of the review process, NTIA will evaluate and score applications based on the criteria set forth in the July 9 Notice of Funds Availability, such as project purpose and project viability. During this initial step, the agency will review BTOP applications and will select those applications that will proceed to the second step. The second step—due diligence—involves requesting extra documentation from applicants to confirm and verify information contained in an application, including documentation of the “but-for” test. This two-step process is designed both to reduce the burden of providing unnecessary documentation for applicants that do not meet the basic project purpose and viability criteria and to meet NTIA’s need to efficiently evaluate applications. We recently reported that NTIA and RUS face scheduling and staffing challenges that have delayed the agency’s review of applications. 
In order to award the $4.7 billion appropriated for BTOP by September 30, 2010, NTIA and RUS must, within 18 months, establish their respective programs, solicit and evaluate applications, and award funds. In addition, under BTOP, NTIA will for the first time award grants to commercial entities. The compressed time frame is complicated by the fact that NTIA and RUS also face an increase in the number of applications that they must review and evaluate in comparison to similar programs. BTOP involves more applications and far more funds than the agency formerly handled through other programs (see fig. 1). For example, the 1,770 applications that NTIA intends to review in the first application round for BTOP far exceed the annual average of 838 applications for the largest grant program the agency previously administered—the Telecommunications Opportunities Program. Furthermore, the $4.7 billion that NTIA must award for BTOP is more than three times the approximately $1.5 billion that the agency has previously awarded annually for all other grant programs combined. NTIA’s initial risk assessment indicated that a lack of experienced and knowledgeable staff was a key risk to properly implementing the program in accordance with the priorities of the Recovery Act. With limited staff, NTIA may be unable to review applications thoroughly, and the agency therefore risks funding projects that do not meet the Recovery Act’s “but-for” test. In its fiscal year 2010 budget request to Congress, NTIA estimated that it would need 30 full-time-equivalent staff in fiscal year 2009 and an additional 40 staff for fiscal year 2010 to review applications and administer BTOP. To address this issue, we recently recommended that the Departments of Commerce and Agriculture develop contingency plans to ensure sufficient resources for oversight of Recovery Act-funded projects beyond fiscal year 2010, among other things. 
Officials from both departments have agreed with our recommendation and plan to take all appropriate steps to address our concern. While NTIA originally anticipated that it would begin announcing awards on or about November 7, 2009, the agency now estimates that it will begin in December 2009 and will not finish awarding the first round of grants until February 2010. NTIA is taking several steps to address these challenges. According to NTIA officials, the two-step application review process conserves scarce staff resources by screening applications and eliminating those that do not meet the program’s criteria, thereby reducing the number of applications subject to a comprehensive review. NTIA has also enlisted the aid of contractors and independent experts to review applications and announced that it will award all funds in two rounds of applications, rather than three rounds as originally anticipated. We recently reported that, while these steps address some challenges, the upcoming deadline for awarding funds may pose risks to the thoroughness of the application evaluation process. In particular, NTIA may lack time to apply lessons learned from the first funding round and to thoroughly evaluate applications for the remaining rounds. Maintenance of effort and similar provisions are important mechanisms for helping ensure that federal spending achieves its intended effect. Without such spending provisions, recipients may simply substitute federal funds for some of their planned spending for a given program, with the result that the federal funds would not increase overall spending for the program. While these spending provisions are important, our review illustrates the administrative and fiscal challenges in implementing them, from both federal and state perspectives. 
More than 9 months have elapsed since the passage of the Recovery Act, but federal and state officials have not completed key steps in implementing the maintenance of effort or similar provisions for the covered programs, including finalizing state transportation certifications and ensuring transparency of state education support levels. These challenges, coupled with the varying requirements of the maintenance of effort and similar provisions we reviewed, raise questions as to whether the provisions will achieve their intended purpose. The SFSF funds provided under the Recovery Act are intended to play a critical role in helping state and local governments stabilize their budgets by minimizing budgetary cuts in education. The maintenance of effort provision written into the Recovery Act requires states to maintain a minimum level of state spending on education while addressing educational reforms. The Department of Education has taken important steps to ensure that states are meeting their maintenance of effort requirements. For example, the department provides technical assistance and reviews state applications to ensure compliance with legal requirements. However, Education does not currently require states to explain why their maintenance of effort levels change—even when states change their fiscal year 2006 maintenance of effort levels, which serve as the states’ baseline level for the maintenance of effort requirement in the law. Given that states’ changes to their fiscal year 2006 maintenance of effort levels affect how much funding states are required to provide to education, providing explanations of why the changes occurred would enhance transparency. 
Since some states have planned to decrease their fiscal year 2006 maintenance of effort funding by over a billion dollars, the public and policymakers alike would benefit from knowing why the decreases occurred and what funding was affected by the change. Although Education reviews maintenance of effort changes with state officials, it is difficult to monitor changes effectively without explanations. Given the large investment in funding involved, efforts to reinforce transparency could play a crucial role in ensuring that states fulfill their responsibility to maintain state spending on education. We recommend that the Secretary of Education take further action to enhance transparency by requiring states to include in their State Fiscal Stabilization Fund applications an explanation of why they want to change their fiscal year 2006 maintenance of effort calculations or levels when they resubmit these applications to the Department of Education. We provided copies of our draft report to DOT, Education, HUD, and Commerce for review and comment. All four agencies provided e-mail comments. DOT agreed with our findings and provided technical comments on our discussion of FHWA’s plans for finalizing state compliance with maintenance of effort levels. We incorporated DOT’s technical comments where appropriate. Education agreed with our recommendation that it take further action to enhance transparency by requiring states to include an explanation for why they want to change their fiscal year 2006 maintenance of effort calculations or levels when they resubmit applications for the SFSF. Education noted that it has already asked each state amending its SFSF application with regard to level of support to provide a description of the reasons it is changing its level of support for any year covered, and a table showing the revisions across years. 
In addition, Education officials reported they are revising guidance on amending an application and applying for a maintenance of effort waiver to indicate that a state is expected to provide such a description of its reasons for changing its data on the level of support for any year covered by the SFSF maintenance of effort requirements. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Director of the Office of Management and Budget, and the Departments of Commerce, Education, Housing and Urban Development, and Transportation. In addition, we are sending sections of the report to the officials in the 16 states and the District of Columbia covered in our review. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about issues in this report related to the U.S. Departments of Commerce or Transportation, please contact A. Nicole Clowers at (202) 512-2834 or clowersa@gao.gov; for questions about U.S. Department of Education issues, please contact Cornelia Ashby at (202) 512-8403 or ashbyc@gao.gov; and for questions about U.S. Department of Housing and Urban Development issues, please contact Mathew Scirè at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. To determine the programs in the American Recovery and Reinvestment Act of 2009 (Recovery Act) with maintenance of effort or similar requirements, we searched the Recovery Act for maintenance of effort and similar provisions. From this search, we identified 16 programs in the Recovery Act with such provisions. 
These programs received a total of about $106.8 billion in appropriations. (See table 4.) We did not include any program with a pre-existing maintenance of effort or similar requirement, and we did not factor in language applying to programs that fall under a maintenance of eligibility clause. Twelve federal agencies administer these 16 programs. To identify those agencies that received a significant amount of Recovery Act appropriations and whose programs are subject to a maintenance of effort or similar provision, we selected the agencies that received Recovery Act appropriations totaling $4 billion or more. This threshold captures about 94 percent of the total Recovery Act appropriations—about $100.5 billion—for programs with maintenance of effort or similar provisions. Eight programs—administered by the Departments of Commerce, Education, Housing and Urban Development, and Transportation—met our selection criteria and in total received Recovery Act appropriations of about $100.5 billion. Within the Department of Transportation (DOT), three agencies—the Federal Highway Administration (FHWA), the Federal Railroad Administration, and the Federal Transit Administration—administer five of these programs. To describe the maintenance of effort or similar provisions that apply to these eight programs, we reviewed and analyzed the Recovery Act. To describe the steps that agencies have taken to implement these requirements, we reviewed guidance from the six agencies, including notices published by the Departments of Commerce and Housing and Urban Development (HUD) on funding availability, guidance issued by DOT in February, May, and September 2009 on maintenance of effort requirements to governors and FHWA division offices, and the Department of Education’s guidance to states on the State Fiscal Stabilization Fund program’s maintenance of effort requirements. 
In addition, we interviewed officials at these departments about their guidance and plans, if any, to issue supplemental guidance on maintenance of effort or similar requirements. To determine how responsible federal agencies are determining whether recipients meet maintenance of effort or similar requirements, we reviewed documents on actions taken by the Departments of Commerce, Education, Housing and Urban Development, and Transportation to monitor state certifications and grant applications. Specifically, we reviewed all 50 states’ and the District of Columbia’s certification applications to the Secretary of Transportation; State Fiscal Stabilization Fund applications from 6 states and the District of Columbia; and nonprofit organizations’ grant applications to the Broadband Technology Opportunities Program. In addition, we reviewed the procedures that these departments used to ensure that the state certifications and grant applications met the maintenance of effort or similar requirements. We also interviewed officials from these departments about their plans for implementing and overseeing states’, public housing agencies’, and other grantees’ compliance with the maintenance of effort or similar requirements in the Recovery Act. Additionally, we interviewed these agencies about their plans to address noncompliance with these requirements. We also obtained information from selected state departments of education and transportation on their use of the guidance issued by the Departments of Education and Transportation on maintenance of effort requirements—specifically, the state certification process. In addition, we gathered documents from and interviewed state education and transportation officials on the methodology they used to calculate their spending levels and plans to monitor their compliance with the maintenance of effort requirements. We selected the states based on our ongoing Recovery Act bimonthly reporting effort. 
This effort includes a core group of 16 states and the District of Columbia that we plan to follow over the next few years to provide an ongoing longitudinal analysis of the use of funds provided in conjunction with the Recovery Act. These 16 states and the District of Columbia contain about 65 percent of the U.S. population and are estimated to receive collectively about two-thirds of the intergovernmental federal assistance funds available through the Recovery Act. From these 16 states and the District of Columbia, we obtained information from 17 departments of transportation, 7 departments of education, and 27 public housing agencies in 10 states. These states were selected from our 16 states based on the time constraints of our ongoing Recovery Act bimonthly reporting effort. A. Nicole Clowers, (202) 512-2834 or clowersa@gao.gov (U.S. Departments of Commerce and Transportation issues); Cornelia Ashby at (202) 512-8403 or ashbyc@gao.gov (U.S. Department of Education issues); and Mathew Scirè at (202) 512-8678 or sciremj@gao.gov (U.S. Department of Housing and Urban Development issues). In addition to the contacts named above, Sara Vermillion, Assistant Director; Donald Brown; Jay Cherlow; Alexander Galuten; Byron Gordon; Sonya Harmeyer; Cheryl Harris; David Hooper; John McGrail; Sara Ann Moessbauer; Paul Schmidt; and Carrie Wilks made key contributions to this report.
To help prevent the substitution of federal funds for state, local, or private funds, the American Recovery and Reinvestment Act of 2009 (Recovery Act) contains maintenance of effort and similar provisions requiring that recipients maintain certain levels of spending for selected programs. This report provides information on selected programs in the Recovery Act with maintenance of effort or similar provisions, the guidance federal agencies have issued to implement these requirements, and how responsible federal agencies are determining whether recipients meet these requirements. To conduct this work, GAO identified eight programs in the Recovery Act that contain a new maintenance of effort or similar provision; account for at least $4 billion in appropriations by agency; and collectively account for about $100.5 billion of the $106.8 billion in Recovery Act appropriations with these provisions. The eight programs with maintenance of effort or similar provisions span the areas of education, highway, housing, rail, telecommunications, and transit. The specifics of each provision vary by responsible agency, such as whether a state must certify the amount of funding it will maintain, whether waivers are allowed, and the consequences (if any) of not meeting the provisions. The federal agencies responsible for these eight programs have issued guidance to states and other recipients on how to implement the maintenance of effort or similar provision requirements. However, federal and state officials have not completed key steps in implementing these provisions because of administrative and fiscal challenges. (1) The Department of Transportation (DOT) has begun to assess the highway and transit levels that states certified to maintain; however, it has not estimated a date for completing this assessment and has not finalized plans for determining states' compliance with their transit certifications. 
Furthermore, according to a DOT official, the department has not decided whether the Recovery Act requires states to maintain a total level of effort for covered programs or to maintain their level of effort for each covered program. Officials from several state departments of transportation told GAO that while they plan to meet their maintenance of effort requirements, decreasing state revenues and budgets pose a challenge to doing so. (2) The Department of Education (Education) has begun to draft a monitoring plan to oversee and enforce state compliance with maintenance of effort requirements under the State Fiscal Stabilization Fund. Because the State Fiscal Stabilization Fund is a new program under the Recovery Act, Education has yet to finalize monitoring plans and processes. In addition, Education has not issued guidance to states on how to document that they met their required maintenance of effort level. (3) Department of Housing and Urban Development (HUD) officials said they are monitoring Capital Fund formula grants through ongoing efforts. Officials further stated that they are still developing a strategy for monitoring Capital Fund competitive grants. (4) The Department of Commerce's (Commerce) review of broadband grant applications for funding has been delayed because of scheduling and staffing challenges. In particular, the broadband grant program involves more applications and far more funds than the agency has previously handled, raising concerns about whether the department has sufficient staff resources to implement the program in accordance with Recovery Act priorities. While Commerce originally anticipated that this review would be completed by November 7, 2009, the agency now estimates that it will not complete this review process and award the first round of grants until February 2010.
Explosives, like all hazardous materials (hazmat), are subject to regulations to ensure safe handling and transportation, among other things. Hazmat regulations are coordinated with international standards and generally govern the labeling, packaging, and transportation of hazmat in commerce. Explosives are one of nine classes of hazmat. In order to be transported, explosives must be assigned a classification. The classification, which includes a number that denotes the risk level of the explosive (from most to least hazardous), dictates associated transportation requirements, such as which transportation modes the explosives can travel by and how they are packaged. For example, class 1.1 explosives, which pose a mass explosion hazard, cannot travel by aircraft but can travel by truck. Meanwhile, certain class 1.4 explosives, which pose a minor explosion hazard and meet specific requirements, can travel by aircraft or via the U.S. Postal Service. Classifications also include “compatibility groups” that denote which explosives can be transported together. For example, the regulations do not allow blasting detonators, which are used to trigger an explosive device, to be transported in the same truck as primary explosive substances. Unlike some other classes of hazmat that can be self-classified (meaning the shipper classifies the material), explosives must first be examined by one of six PHMSA-approved third-party test labs in order to be classified. The explosives manufacturer, which ultimately submits an application to PHMSA for classification, first selects and hires one of the test labs and makes a sample of the explosive available to the test lab for examination. The test lab uses international standards to test the material and to recommend a classification. PHMSA then reviews the manufacturer’s application, including the test lab’s report and recommended classification, and approves a classification for the explosive (see fig. 1).
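The way a classification constrains transport modes can be pictured as a simple lookup. The sketch below is purely illustrative: it encodes only the two examples given above (class 1.1 barred from aircraft; certain class 1.4 allowed by aircraft or mail), and the table and function names are hypothetical, not part of any PHMSA system or the actual hazmat regulations.

```python
# Illustrative sketch only: encodes the two mode-restriction examples
# from the text. The real requirements in the hazmat regulations are
# far more detailed. All names here are hypothetical.

ALLOWED_MODES = {
    "1.1": {"truck"},                      # mass explosion hazard: no aircraft
    "1.4": {"truck", "aircraft", "mail"},  # minor hazard: aircraft/USPS possible
}

def may_travel_by(classification: str, mode: str) -> bool:
    """Return True if the example table permits the given transport mode."""
    return mode in ALLOWED_MODES.get(classification, set())

print(may_travel_by("1.1", "aircraft"))  # False
print(may_travel_by("1.4", "aircraft"))  # True
```

A real implementation would also have to model the compatibility groups described above, which restrict which explosives may share a vehicle.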
In 2010 and 2014, the DOT OIG reported weaknesses in PHMSA’s management and oversight of the approvals process. For example, in 2010, the DOT OIG raised questions about the effectiveness of PHMSA’s oversight and found that PHMSA had not inspected any of the test labs in 10 years and that test labs did not always submit annual reports as required. In 2014, the DOT OIG reported that PHMSA had addressed these issues. However, the 2014 report found PHMSA evaluation forms missing for many explosives classification reviews, a situation that DOT OIG noted was an internal control weakness. As a result, the DOT OIG recommended PHMSA require the use of evaluation forms to document its review of explosive classification applications. According to a DOT OIG official, PHMSA effectively addressed this recommendation in 2015. PHMSA’s oversight of the classification of new explosives includes two key parts—(1) approving and monitoring the test labs and (2) reviewing and approving the manufacturers’ applications for classification of a new explosive and the labs’ test classification recommendations. PHMSA has several activities to approve and oversee test labs, but its efforts to ensure consistency are limited by PHMSA’s lack of a systematic approach to developing and issuing guidance for these labs. In response to the DOT OIG’s findings of oversight weaknesses from 2010, PHMSA strengthened its activities to approve and monitor test labs. Specifically, to ensure that test lab examiners meet the requirements specified in regulation, PHMSA established an approval process that includes interviewing examiners and reviewing test lab and examiner qualifications and recertifying test labs every 5 years through an on-site inspection. PHMSA officials stated that in addition to ensuring compliance with regulations, an important goal of PHMSA’s oversight of test labs is to promote consistency across test labs. 
PHMSA officials stated that efforts to promote consistency are particularly important given turnover—half of test labs were approved in 2012 or later. However, while PHMSA works to promote consistency through various types of communications with test labs, it lacks a systematic approach to determining what guidance is needed and to issuing such guidance. Internal control standards state that agencies should communicate quality information so that external parties can help the agency achieve its objectives and address related risks. Likewise, we have previously reported that agencies benefit from procedures that continually reassess and improve guidance processes and documents to respond to the concerns of regulated entities. According to PHMSA officials, regulations and international standards described in a United Nations test manual set forth certain requirements for testing new explosives. However, PHMSA officials told us they give test labs flexibility in how to apply these requirements on a case-by-case basis when testing new explosives to recommend classifications. PHMSA officials told us that granting this flexibility allows test labs to use their expertise and professional judgment. For example, PHMSA officials noted examiners can deviate from the prescribed tests in the test manual if they justify their reasoning for doing so based on their expertise, and officials at one test lab stated that, given the variation in the specific attributes of different new explosives, such flexibility can improve their ability to effectively test a new explosive. However, PHMSA and some test lab officials also stated that this flexible approach can lead to inconsistencies across test labs, such as a test lab examiner being unaware of a common deviation from the prescribed test or similar types of explosives being subject to different tests.
For example, one examiner reported learning of an “unwritten rule” that test labs can use alternative tests in cases where multiple samples of the explosive cannot be destroyed due to costs, such as for large and expensive explosives. Four of the six test labs we spoke to said written guidance from PHMSA could help address “unwritten rules” such as commonly used modifications to the test manual. Although four of the five manufacturers we spoke to said the labs are consistent in quality of testing, two noted that test lab report quality can vary. PHMSA has ongoing efforts to promote consistency across test labs. Specifically, PHMSA officials stated that they:

- Discuss issues, best practices, and PHMSA’s recommended approaches with test lab examiners during in-person annual meetings and quarterly teleconferences. PHMSA also distributes agendas and minutes associated with these teleconferences and meetings. Five of the six test labs we spoke to said PHMSA’s quarterly teleconferences and annual meetings are helpful to share issues and good practices. However, three test labs noted that test lab examiners are hesitant to volunteer information or ask questions during these meetings since the test labs compete with one another for business. As one test lab noted, if one test lab has information that the others do not, this test lab has an incentive not to mention this information in order to have a business advantage over competitors.

- Issue letters of interpretation to communicate PHMSA’s views on specific issues in response to questions or concerns, which are posted on PHMSA’s website. However, PHMSA officials and one test lab stated that while the letters of interpretation include topical guidance, they are hard to find, are not organized by topic, and can contradict each other.
As described above, PHMSA’s approach to providing guidance is not systematic, and therefore PHMSA may be missing areas where more guidance would be beneficial to helping test labs understand PHMSA’s expectations and to improving PHMSA’s ability to reach its objective of consistency among test labs. Moreover, PHMSA officials acknowledge that they currently do not have a comprehensive written document that encompasses all PHMSA guidance for test labs. They stated that they are currently evaluating whether to issue such a document. PHMSA officials stated that the development of such a document could involve compiling the content of existing written communication such as the letters of interpretation, which, as described above, PHMSA officials stated are not currently easily accessible. However, PHMSA officials have not specified a systematic approach for these efforts, including an effort to identify test labs’ needs such as explaining the “unwritten rules” that affect PHMSA’s expectations for test labs. Without such an approach to improving its guidance, PHMSA may not be providing test labs with the information needed to effectively meet PHMSA’s goal of promoting consistency. PHMSA has a multi-part application review and approval process for the classification of new explosives, including a check for completeness by a project officer, two levels of technical review, and the completion of an evaluation form (see fig. 2). According to PHMSA officials, PHMSA’s role in approving classifications, which is outlined in regulation, is essential to fulfilling its role in regulating hazmat transportation in the U.S. Furthermore, PHMSA officials say the application review process serves quality assurance purposes, since test labs, which are paid by the manufacturers, compete for business and may be pressured by manufacturers to provide a specific classification.
Two of the six test labs we spoke to reported facing pressure from manufacturers on how to conduct tests or which classification to recommend. A third test lab also stated that manufacturers often aim to have their explosive classified as class 1.4 since, as mentioned previously, such explosives can be transported by aircraft. One manufacturer noted that having an explosive classified as a 1.4 to travel by aircraft makes the product more competitive since air travel is the fastest option to get the product to an overseas customer. PHMSA officials stated that they are checking for multiple issues in their technical reviews, and that in addition to ensuring that explosives are classified correctly, these technical reviews also help PHMSA to improve test labs’ performance. Specifically, PHMSA officials stated that they keep notes in an Excel spreadsheet on recurring issues identified in application reviews, which are then used to inform topics for annual and quarterly meetings with the test labs. According to PHMSA officials, its overarching goal in reviewing applications is to ensure that every new explosive is classified correctly, and as a result, application review processing times can vary greatly depending on many factors, such as the quality of the application; the complexity of the application and the explosive device; and the timeliness of test lab responses when PHMSA technical reviewers reach out with questions or requests for additional information. PHMSA officials also noted that technical reviewers may give some applications by more recently approved test labs or examiners a closer look to develop a “comfort level” with their testing procedures, which can slow the review. Similarly, officials noted there has been turnover among PHMSA’s technical reviewers, which can slow the process since the second-level technical reviewer provides guidance and feedback to the first-level reviewers as part of their training.
PHMSA officials also noted that they began emphasizing the process step of completing an evaluation form for each application in response to DOT OIG’s 2014 report, which affected application review times. Other stakeholders we spoke with had varying views on PHMSA’s review process. The manufacturers and the explosives manufacturing association we spoke to generally had two key complaints about PHMSA’s process, seeing it as overly time-consuming and opaque. Time-Consuming: Industry stakeholders noted that the time required for PHMSA’s review can be lengthy. Specifically, four of the five manufacturers and an explosives industry association said PHMSA’s review takes too long, while one manufacturer noted that turnaround times have recently improved. For example, three manufacturers told us that in their experience PHMSA’s review often takes up to 6 months. Manufacturers and the association stated that PHMSA’s review process delays manufacturers’ ability to get a return on investments made in developing a new explosive. Opaque: Some manufacturers noted that PHMSA’s review process is opaque, leaving manufacturers unsure of their applications’ status. In particular, four of the five selected manufacturers and an explosives manufacturing industry association we spoke to stated that PHMSA could better communicate where applications are in the review process. Manufacturers and the explosives industry association stated that the uncertainty surrounding PHMSA’s review process creates business planning uncertainties, such as when to allocate staff and resources to manufacturing and sales. Furthermore, some manufacturers and the manufacturers’ association questioned the value of what they see as an overly time-consuming and opaque process and suggested that PHMSA should place more trust in the test lab results to reduce the amount of time PHMSA takes to review applications.
For example, one manufacturer noted that PHMSA’s review adds little value since, in the manufacturer’s experience, PHMSA rarely changes a classification from the one recommended by the test lab. This manufacturer noted that PHMSA’s efforts would be best suited to overseeing the test labs rather than reviewing applications and that the test lab should have the final determination on the classification. In contrast to some manufacturers’ views, other stakeholders, including carrier associations and test lab examiners, were supportive of PHMSA’s oversight role, including the review and approval of test labs’ classification recommendations. One air carrier association stated that PHMSA’s oversight is critical given what the association believes to be the test labs’ potential conflict of interest since test labs are paid by manufacturers and since explosives, if misclassified, could pose a major risk during air transport. The carrier associations we spoke to that represent air, rail, and trucking modes noted that PHMSA’s oversight is effective insofar as explosives are classified correctly and incidents are rare. Specifically, according to PHMSA data, PHMSA receives about 16,000 hazmat incident reports each year, and of that number, an average of about 35 per year involved explosives between 2005 and 2015. A total of 388 such incidents occurred during that time period. Over the same time period, excluding fireworks, only two incidents involving explosives resulted in injuries, and none resulted in fatalities. In addition, all of the six test labs we spoke with were generally supportive of PHMSA’s oversight role.
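The incident figures above are internally consistent; a minimal arithmetic check, assuming the 2005-2015 window spans 11 calendar years inclusive:

```python
# Verify that 388 explosives incidents over 2005-2015 matches the
# reported average of "about 35 per year" (11 calendar years, inclusive).
total_incidents = 388
years = 2015 - 2005 + 1  # 11
average = total_incidents / years
print(round(average, 1))  # 35.3, i.e., "about 35 per year"
```

Against the roughly 16,000 hazmat incident reports PHMSA receives each year, this average implies explosives account for well under 1 percent of reported incidents.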
For example, one test lab noted that PHMSA’s review is important for liability protection for the test labs, and expressed discomfort at the notion of test labs becoming responsible for assigning approvals without PHMSA’s review, noting that while such a change could speed the approval process, it could also decrease quality and consistency across test labs. Recently, PHMSA has taken some steps to respond to manufacturers’ concerns about the uncertainty and lack of transparency of the process. PHMSA established an internal goal of 120 days for average application processing times and, since the second quarter of fiscal year 2015, has posted quarterly average application processing times on its website. For the second quarter of fiscal year 2016, PHMSA reported the average was 99 days. In addition, in a February 2016 memo on reforming the explosive classification approval review process, PHMSA noted that to increase transparency, it had increased the information provided to manufacturers in its online status reports so that manufacturers could have better visibility into where each application is in the process. Representatives from one manufacturer we spoke to stated they appreciated the new ability to better track the status of applications through PHMSA’s review process. In its February 2016 memo, which cited manufacturers’ concerns with the approval process, PHMSA outlined improvement efforts that align with goals in PHMSA’s Office of Hazardous Materials Safety 2013-2016 strategic plan. These goals include increasing outreach, streamlining the regulatory system, and enhancing risk management. To increase outreach, as described above, PHMSA increased the amount of information about applications online. PHMSA also described several efforts that align with the goals of streamlining the regulatory system and enhancing risk management. 
For example, in the memo, PHMSA stated it would immediately begin a one-level technical review for certain applications, including explosives classified as 1.1. Although these explosives pose the greatest risk of mass explosion, PHMSA officials stated that the review is relatively low risk since these explosives have the most stringent packaging and transportation requirements. To be classified as 1.1, the explosive must meet the standard of being stable and not forbidden from transport but does not need to undergo the further testing and scrutiny that would be required to determine whether an explosive could be classified at a lower risk level that, for example, might allow it to be transported by aircraft. In the reform memo, PHMSA also indicated that it would continue to look for future opportunities for streamlining. In particular, PHMSA said it would look for (1) other types of applications where a streamlined one-level technical review is appropriate and (2) specific types of explosive approvals that could be standardized in the regulations, which could allow for self-classification, meaning the manufacturer could determine the classification. PHMSA officials stated that identifying areas for self-classification involves research conducted either by PHMSA or by the industry to determine whether the standard is sufficient to reduce the amount of time PHMSA takes for its review without increasing the risk of misclassifying explosives. Despite these recent reform efforts, PHMSA officials stated that limited staff resources create challenges for application turnaround. According to data provided by PHMSA, between 2006 and 2015, on average, PHMSA officials reviewed 1,700 applications for new explosive classifications annually. As mentioned previously, each of these applications is subject to a multi-step review process, including a completeness check by a project officer, a two-level technical review, and an approval or rejection letter signed by an approving official.
As noted in figure 3, as of the time of our review, there were seven PHMSA officials involved with this review process—two project officers, four technical reviewers (three first-level and one second-level), and one primary approving official. Furthermore, these seven officials have other responsibilities outside of reviewing explosives classifications. For example, technical reviewers assist with high-priority hazmat issues such as crude oil by rail and lithium ion batteries, and the second-level technical reviewer also supervises the work of the first-level reviewers and represents PHMSA in international working groups. According to PHMSA officials, although these activities are important to the agency, they can take time away from explosives approvals. Another challenge that affects PHMSA’s improvement efforts is a lack of sufficient data and data planning. Internal control standards for the federal government state that management should design the entity’s information system to achieve objectives and respond to risks. These standards state that an information system includes both manual and technology-enabled information processes and represents the life cycle of information used for the entity’s operational processes. To effectively design an entity’s information system, the standards state that entities should consider the information requirements, including the expectations and needs of both internal and external users. Currently, according to PHMSA officials, PHMSA stores classification application information in a system—called the FYI system—that is a document management system, not a database. PHMSA officials stated that the FYI system does not allow for standard fields to be entered that could then be easily analyzed across applications. Instead, PHMSA reviewers complete a separate Microsoft Word document evaluation form for each approval that is filed electronically.
PHMSA officials noted that in addition to not being designed for evaluation or analysis across applications, the manual process of filling out the evaluation form can cause delays in the application review process. Due to the limitations of this current data system, PHMSA was unable to provide us data that we requested in several areas. Specifically, PHMSA was unable to provide data on the following:

- The number of applications in which PHMSA approved a classification that was different from the one recommended by the test lab. PHMSA officials stated that to compile these data would be a manual process requiring staff to go through each application file and manually compare the recommended classification from the test lab to the final approved classification. PHMSA officials stated that as a result of these limitations, PHMSA does not know how many applications required such a change. A manufacturer’s association suggested that information on how often PHMSA changes a test lab’s recommended classification could help inform the extent to which PHMSA’s final review adds value to the process. PHMSA officials stated they did not think that such information would be instructive about the value the final PHMSA review adds since, currently, the quality of test labs’ recommendations may be influenced by the knowledge that PHMSA will be providing a final review. However, PHMSA officials stated that they would like to be able to analyze this type of information.

- The amount of time applications spent in each part of PHMSA’s review process. PHMSA officials stated that although they had developed a method to track information on applications’ total time in PHMSA’s review process, obtaining more detailed information on how long applications spent in each part of the process would be difficult and time-consuming given the limitations of the current system.

- The amount of time different types of applications spent in PHMSA’s review process. Because of the effort involved in this type of analysis, PHMSA officials stated that PHMSA does not have historical information on timeframes for different types of explosives applications. However, officials stated they had implemented a software fix that would allow them to obtain this information going forward.

- The number of applications for which PHMSA had to request additional information from test labs, which PHMSA officials cited as a common reason for extended timeframes of application reviews, or any information on the reasons PHMSA had to request additional information from test labs. Such information could potentially help PHMSA target guidance efforts to labs.

- The extent to which the length of PHMSA’s review of new applications varied by test lab, which could potentially help PHMSA analyze the consistency of test lab reports and target guidance efforts to labs.

PHMSA officials described several ongoing efforts to improve the agency’s data systems. PHMSA officials stated that PHMSA’s goal is to develop a risk-based, data-driven system that allows PHMSA to use data to identify potential risks, to address workflow process weaknesses, and to use resources more effectively. Moreover, PHMSA officials stated that fiscal year 2016 funds have been designated to transition the FYI system to a PHMSA-wide portal to better capture information in applications and develop the ability to analyze information across applications. In addition, PHMSA officials stated that a statistician has joined their staff to assist with data analysis, and PHMSA has three contracts with outside organizations to improve PHMSA’s use of data for analytics, enhanced risk management, and quality assurance. While PHMSA has taken these steps towards improving its data system, and officials described some desired information fields and capabilities, the agency does not have a plan documenting the data fields or information needs required to reach its stated goals.
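To illustrate the kind of plan-driven data capture at issue, the sketch below defines hypothetical per-application fields and derives two of the measures PHMSA could not produce: how often the approved classification differed from the lab's recommendation, and average time per review stage. Every field name and record here is invented for illustration; nothing reflects the actual FYI system or PHMSA data.

```python
from dataclasses import dataclass

# Hypothetical record layout; all field names and values are illustrative.
@dataclass
class ApplicationRecord:
    app_id: str
    test_lab: str
    recommended_class: str
    approved_class: str
    days_completeness: int   # project-officer completeness check
    days_technical: int      # technical review(s)
    days_approval: int       # final approval/rejection letter

records = [
    ApplicationRecord("A-001", "Lab1", "1.4", "1.4", 5, 40, 7),
    ApplicationRecord("A-002", "Lab2", "1.4", "1.1", 8, 65, 10),
    ApplicationRecord("A-003", "Lab1", "1.1", "1.1", 4, 30, 6),
]

# How many approved classifications differed from the lab's recommendation?
changed = sum(r.recommended_class != r.approved_class for r in records)

# Average days spent in the technical-review stage.
avg_technical = sum(r.days_technical for r in records) / len(records)

print(changed)        # 1
print(avg_technical)  # 45.0
```

With structured fields like these in place, the same one-line aggregations could be run per test lab or per explosive type, which is precisely the cross-application analysis the current document-based system cannot support.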
PHMSA officials stressed one narrower aim for the new system—to provide greater transparency and predictability in the review process to better meet the needs of explosives manufacturers. For example, PHMSA officials stated that they would like the ability to track and report average processing times for each review process step. PHMSA officials stated that processing time information could also help the agency manage its staffing resources. While PHMSA officials discussed their desire for this added capability, they did not describe in a systematic way and have not documented in a plan how an improved information system could potentially help meet their goals of streamlining the review system and enhancing risk management, or how it could help them meet their stated goals of using data to identify potential risks, address workflow process weaknesses, and use resources more effectively. For example, while PHMSA identified several factors that can lengthen an application review, such as the quality and complexity of the application and the explosive device, it did not discuss or provide documentation on how an improved information system could allow it to better track and analyze these factors. Similarly, while PHMSA officials stated that test labs’ quality and experience could affect the length of its review, PHMSA officials have not developed data fields that could highlight such information in applications and be incorporated into the new system, potentially improving PHMSA’s ability to analyze quality and consistency across both test labs and reviewers. This could help the agency target efforts to streamline its process. Finally, as mentioned previously, PHMSA has recently identified reforms such as a one-level technical review and proposed standardization for certain types of explosives, and has noted that it would like to explore new areas for these opportunities.
Without defining the necessary data elements to capture during the classification application review process, PHMSA may be unable to analyze the effects of previous reform efforts or identify opportunities for further reform. Without a systematically developed plan, PHMSA may miss opportunities to create an information system that allows PHMSA to better meet the expectations of both external stakeholders, such as manufacturers, and internal stakeholders—i.e., PHMSA officials themselves. Although PHMSA has taken steps to strengthen its oversight of the explosives classification process that align with its strategic goals, PHMSA’s ability to be responsive to stakeholder concerns and to overcome challenges may be limited without a more systematic approach to improvements in two areas: guidance to test labs and information systems. Without systematically determining what guidance would most benefit test labs and how to best communicate this guidance to test labs, PHMSA’s efforts to support increased consistency among test labs and respond to test labs’ desire for clearer guidance in certain areas may fall short. Similarly, without developing a data plan that clearly defines what information PHMSA most needs to meet its various objectives, the effectiveness of PHMSA’s data improvement efforts to meet the expectations of internal and external users may be reduced. In contrast, a carefully developed and implemented data plan could potentially help PHMSA respond to manufacturers’ concerns about the timeliness of PHMSA’s review process and mitigate PHMSA’s challenges related to limited staff resources while also helping it meet its goals related to outreach, transparency, and developing a risk-based approach. To improve PHMSA’s oversight of the explosives classification process, the Secretary of Transportation should direct the PHMSA Administrator to take the following two actions: 1. 
Develop and implement a systematic approach for improving the guidance PHMSA provides test labs.
2. Develop a written plan describing information requirements for PHMSA’s new data system. Such a data plan should include information requirements needed to meet PHMSA’s goals and address risks.
We provided a draft of this product to the Department of Transportation (DOT) for comment. In written comments, reproduced in appendix I, DOT concurred with our recommendations. DOT stated that it continues to dedicate resources to improve the safety, oversight, and system efficiency of the explosives classification approval program and described several recent actions taken to enhance its oversight. In addition, DOT provided a technical comment that we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Alwynne Wilbur (Assistant Director), Tim Bober, Tara Carter, David Hooper, Emily Larson, SaraAnn Moessbauer, Malika Rice, Amy Rosewarne, and Kelsey Sagawa made key contributions to this report.
Explosives accounted for 4 million tons of the 2.6 billion tons of hazardous materials transported in the U.S. in 2012. DOT's PHMSA is responsible for regulating the transport of explosives, which includes classifying new explosives prior to transportation. The classification denotes the risk level and requirements, such as which transportation modes can be used to transport the explosive. To be classified, an explosive must be examined by a PHMSA-approved third-party test lab. PHMSA must then approve the test lab's classification recommendation. The Fixing America's Surface Transportation Act includes a provision for GAO to review DOT's oversight of this process. This report addresses: (1) PHMSA's oversight of the classification of new explosives and related stakeholder views and (2) PHMSA's efforts to improve the oversight process and any associated challenges. GAO collected PHMSA data on applications processed (2006-2015) and explosives incidents (2005-2015) and interviewed officials from PHMSA, all six approved test labs, carrier and explosive manufacturer associations, and five explosives manufacturers selected in part to represent a range of industries. The Department of Transportation's (DOT) Pipeline and Hazardous Materials Safety Administration's (PHMSA) oversight of the labs that issue classification recommendations for new explosives is limited by a lack of guidance, and stakeholders have mixed views on PHMSA's oversight. To receive a classification for a new explosive, manufacturers must have an approved test lab examine the explosive and submit an application to PHMSA with the test lab's recommended classification. PHMSA's oversight includes: (1) approving and monitoring the test labs and (2) reviewing applications and classification recommendations.
Although PHMSA has several activities to oversee test labs, its efforts to promote test lab consistency—one objective of its oversight—are hindered by the lack of a systematic approach to developing guidance. GAO has reported that agencies benefit from procedures to improve guidance to respond to regulated entities' concerns. PHMSA officials stated that they grant test labs flexibility on how to apply standards and regulations. However, four of the six test labs said guidance could explain “unwritten rules” such as common testing modifications. Without a systematic approach to determining what guidance is needed, PHMSA's ability to achieve consistency is limited. Stakeholder views on PHMSA's oversight processes, in particular its process for approving classification recommendations, are mixed. PHMSA officials view their role as final approver of classifications as critical. However, some manufacturers stated that PHMSA's review process is time consuming and opaque and questioned whether it adds value. In contrast, other stakeholders such as carrier associations were supportive of PHMSA's oversight role. In 2015, PHMSA began taking steps to improve the transparency of its process by, for example, posting information online on average application processing times. PHMSA has begun oversight improvement efforts that align with its strategic goals to increase outreach, streamline its classification application review process, and enhance risk management, but it faces staffing and data challenges. For example, PHMSA has eliminated one of two technical reviews for certain explosive classification applications. According to PHMSA officials, such streamlining could help PHMSA better manage its limited staff resources—7 PHMSA officials process an average of 1,700 new explosives annually, along with other duties—a workload that creates challenges for application turnaround. 
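The staffing figures above imply a heavy per-examiner load. As a rough illustration (the 250-working-day figure is an assumption, not from the report):

```python
# Rough workload arithmetic based on the figures cited above.
applications_per_year = 1_700   # average new-explosive applications processed annually (from the report)
officials = 7                   # PHMSA officials who handle them (from the report)
working_days = 250              # assumed working days per year (not from the report)

per_official = applications_per_year / officials
print(round(per_official))                    # ~243 applications per official per year
print(round(per_official / working_days, 2))  # ~0.97, i.e., nearly one per working day
```

On top of these officials' other duties, that pace leaves little slack, which is consistent with the application-turnaround challenges the report describes.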
However, PHMSA's data challenges reduce its ability to strategically improve its process, and PHMSA lacks a plan for data system improvements under way. Internal control standards state that management should design the entity's information system to achieve objectives and respond to risks. Currently, PHMSA's data system does not allow the agency to aggregate or analyze most information across applications, limiting PHMSA's ability to analyze its processes or outcomes in order to achieve objectives or respond to risks. PHMSA officials stated that fiscal year 2016 funds have been designated to upgrade its system, but PHMSA does not have a plan documenting what fields or information needs are required for the system in order to reach agency goals. Without a plan to guide the development of the new data system, PHMSA may miss opportunities to use it to better manage staff resources, identify future and evaluate past reforms, and meet agency goals. To improve oversight of the classification of new explosives, PHMSA should (1) develop and implement a systematic approach for improving PHMSA's guidance for test labs; and (2) develop a written plan describing information requirements for its new data system. DOT concurred with the recommendations.
Located organizationally within the Department of the Treasury, the Office of Thrift Supervision through its five regional offices supervises 1,210 federal and state chartered savings institutions—commonly called thrifts—to maintain the safety, soundness, and viability of the industry. Thrifts primarily emphasize residential mortgage lending and are an important source of housing credit. Most of these institutions have assets of under $500 million and are locally owned and managed. Together, they are responsible for about $770 billion in assets. As part of its goal of maintaining safety and soundness, OTS is responsible for examining and monitoring thrifts’ efforts to adequately mitigate the risks associated with the century date change. To ensure consistent and uniform supervision on Year 2000 issues, OTS and the other regulators coordinate their supervisory efforts through FFIEC. For example, the regulators jointly prepared and issued an August 1996 FFIEC letter to banks, thrifts, and credit unions informing them of the Year 2000 problem and its potential adverse impacts. Together, they also developed and issued in May 1997 an FFIEC examination program and guidance on how to use it. More recently, the regulators established an FFIEC working group to develop guidance on mitigating the risks associated with using contractors that provide automated systems services and software to thrifts. According to OTS, virtually every insured financial institution relies on computers—either their own or those of a third-party contractor—to process and update records and to perform a variety of other functions. Because computers are essential to their survival, OTS believes that all its institutions are vulnerable to the problems associated with the year 2000. Failure to address Year 2000 computer issues could lead, for example, to errors in calculating interest and amortization schedules. 
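The mechanics of such an error are simple: a system that stores years as two digits computes a negative elapsed time once a date crosses into 2000. A minimal sketch (hypothetical code, not drawn from any actual thrift system) of how that corrupts a simple-interest calculation:

```python
# Minimal sketch (hypothetical, not any thrift's code) of the two-digit-year
# bug behind the interest-calculation errors described above.

def years_elapsed_two_digit(start_yy: int, end_yy: int) -> int:
    """Legacy-style arithmetic on two-digit year fields: '99' to '00' goes negative."""
    return end_yy - start_yy

def years_elapsed_four_digit(start_yyyy: int, end_yyyy: int) -> int:
    return end_yyyy - start_yyyy

interest_per_year = 7_000  # e.g., 7% on a $100,000 loan, in whole dollars

# A loan issued in 1999 and evaluated in 2000:
legacy_years = years_elapsed_two_digit(99, 0)          # -99, not 1
correct_years = years_elapsed_four_digit(1999, 2000)   # 1

print(interest_per_year * legacy_years)    # -693000: a nonsense charge
print(interest_per_year * correct_years)   # 7000: the intended result
```

Correcting the error means widening the stored year to four digits (or "windowing" two-digit values into the correct century) in every system and interface that performs such date arithmetic, which is why renovation and testing demand so much of the remaining time.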
Moreover, automated teller machines may malfunction, performing erroneous transactions or refusing to process transactions. In addition, errors caused by Year 2000 miscalculations may expose institutions and data centers to financial liability and loss of customer confidence. Other supporting systems critical to the day-to-day business of thrifts may be affected as well. For example, telephone systems, vaults, and security and alarm systems could malfunction. In addressing the Year 2000 problem, thrifts must also consider the computer systems that interface with, or connect to, their own systems. These systems may belong to payment system partners, such as wire transfer systems, automated clearinghouses, check clearing providers, credit card merchant and issuing systems, automated teller machine networks, electronic data interchange systems, and electronic benefits transfer systems. Because these systems are also vulnerable to the Year 2000 problem, they can introduce errors into thrift systems. In addition to these computer system risks, thrifts also face business risks from the year 2000, that is, exposure arising from corporate borrowers’ inability to manage their own Year 2000 compliance efforts successfully. Consequently, in addition to correcting their computer systems, thrifts have to periodically assess the Year 2000 efforts of large corporate customers to determine whether they are sufficient to avoid significant disruptions to operations. OTS and the other regulators established an FFIEC working group to develop guidance on assessing the risk corporate borrowers pose to thrifts. OTS has taken a number of actions to raise the awareness of the Year 2000 issue among thrifts and to assess the Year 2000 impact on the industry. To raise awareness, OTS formally alerted thrifts in August 1996 to the potential dangers of the Year 2000 problem by issuing an awareness letter to thrift chief executive officers.
The letter, which included a statement from the interagency Federal Financial Institutions Examination Council, described the Year 2000 problem and highlighted concerns about the industry’s Year 2000 readiness. It also called on thrifts to perform risk assessments of how their systems are affected and to develop detailed action plans to fix them. In May 1997, OTS, along with the other regulators, issued a more detailed awareness letter that described the five-phase approach to planning and managing an effective Year 2000 program; highlighted external issues requiring management attention, such as reliance on vendors, risks posed by exchanging data with external parties, and the potential effect of Year 2000 noncompliance on corporate borrowers; discussed operational issues that should be considered in Year 2000 planning, such as whether to replace or repair systems; related its plans to facilitate Year 2000 evaluations by using uniform examination guidance and procedures; and directed thrifts to (1) inventory core computer functions and set priorities for Year 2000 goals by September 30, 1997, and (2) complete programming changes and have testing of mission-critical systems underway by December 31, 1998. As of November 30, 1997, OTS had completed its initial assessment of all thrifts for which it has supervisory responsibility.
In conducting this assessment, OTS performed off-site examinations of the thrifts that addressed whether (1) their systems were ready to handle Year 2000 processing, (2) they had established a structured process for correcting Year 2000 problems, (3) they prioritized systems for correction, (4) they had determined the Year 2000 impact on other internal systems important to day-to-day operations, such as vaults, security and alarm systems, elevators, and telephones, (5) they had estimated Year 2000 project costs and targeted sufficient resources, (6) their milestones for renovating and testing mission-critical systems were consistent with those recommended by FFIEC, and (7) they had been closely tracking the progress of service bureau and vendor Year 2000 remediation efforts. Thrifts were also asked to submit Year 2000 assessment reports, action plans, and their most recent progress reports. According to OTS, this assessment showed that the thrift industry was generally aware of and addressing the potential impact of Year 2000. For example, 94 percent of thrifts had assigned Year 2000 oversight duties to a senior officer or committee, and 90 percent were then developing a Year 2000 action plan. However, OTS did find that about 170 thrifts were designated as high risk due to poor performance in conducting awareness and assessment phase activities. OTS is following up on this initial assessment with on-site exams of all thrifts, to be completed by the end of June 1998. To help thrifts prepare for these visits, OTS developed a detailed Year 2000 checklist. It is a self-assessment tool addressing the five phases of the Year 2000 correction process and about 10 other areas, including reliance on vendors and borrowers’ credit risk, that informs thrifts of key activities to be performed and allows them to quantify their progress. OTS also issued additional examination guidance and procedures to supplement those of the FFIEC.
This supplemental guidance, if implemented correctly, will address the FFIEC examination procedure shortcomings (i.e., lack of detailed questions, vague terminology) reported in our previous testimony. To ensure OTS completes the on-site visits by June 1998, each regional office has been given the authority to establish its own plans for assessing institutions. OTS’ national Year 2000 coordinator is currently reviewing regional plans to assess their reasonableness. To make sure regions stay on track, the coordinator is monitoring regional progress in completing the on-site reviews on a biweekly basis and, starting in April, on a weekly basis. More recently, on March 13, 1998, OTS issued a memorandum to the regional offices that, among other things, reiterated its supervisory goal of ensuring that the thrift industry becomes Year 2000 compliant and provided guidance on exam follow-up for thrifts assigned a Year 2000 rating of less than satisfactory. OTS has also been participating with other regulators to conduct on-site Year 2000 assessments of major data processing servicers and software vendors. These servicers and vendors provide support and products to a majority of financial institutions. OTS and the other regulators expect to complete their first round of servicer and vendor assessments in April 1998. OTS is providing the results of the servicer assessments to OTS-supervised thrifts that use these services. Together with the results of on-site assessments conducted at thrifts, OTS expects to have a better idea of where the industry stands, which thrifts need close attention, and thus where to focus its supervisory efforts. As noted in our summary, OTS must successfully address a number of issues to provide adequate assurance that the thrift industry will meet the Year 2000 challenge. As also noted, these issues are for the most part similar to those we found at FDIC and NCUA.
First, like the other regulators, OTS is behind in assessing individual institutions’ readiness. As with NCUA and FDIC, OTS got off to a late start assessing the readiness of the institutions it oversees and, consequently, was late in completing assessment phase activities. For example, it did not complete its initial assessment of all thrifts until November 1997. According to OMB guidance and our Assessment Guide, these activities should have been completed by the summer of 1997. Because OTS is behind the recommended timelines, the time available for assessing institutions’ progress during the renovation, validation, and implementation phases and for taking needed corrective actions is compressed. Second, OTS and the other regulators are still developing key guidance to help institutions complete their Year 2000 efforts. In their May 1997 letter to thrifts, banks, and credit unions, the financial regulators recommended that institutions begin (1) developing contingency plans to mitigate the risk that Year 2000-related problems will disrupt operations and (2) ensuring that their data processing services, software vendors, and large corporate customers are making adequate Year 2000 progress. In recommending these measures, the regulators noted that they had found that some financial institutions were relying heavily on their service providers to solve their Year 2000 problems. They outlined an approach for dealing with vendors that included (1) evaluating and monitoring vendor plans and milestones, (2) determining whether contract terms can be revised to include Year 2000 covenants, and (3) ensuring that vendors have the capacity to complete the projects and are willing to certify Year 2000 compliance. The regulators also noted that all institutions—even those that have Year 2000-compliant systems—could still be at risk if they have significant business relations with corporate customers who, in turn, have not adequately considered Year 2000 issues.
If these customers default or are late in repaying loans, then banks and thrifts could experience financial harm. The regulators recommended that institutions begin developing processes to periodically assess large corporate customer Year 2000 efforts and to consider writing Year 2000 compliance into their loan documentation. The regulators agreed to provide guidance on contingency planning and dealing with vendors and borrowers. The guidance on vendors and borrowers is expected to be issued in mid-March 1998 and the contingency planning guidance by the end of April 1998. As noted in our last testimony, these time lags in providing guidance increase the risk that thrifts have taken little or no action on contingency planning and dealing with vendors and corporate borrowers in anticipation of pending regulator guidance. Moreover, in the absence of guidance, thrifts may have initiated action that does not effectively mitigate risk of Year 2000 failures. Third, although OTS has been working hard to assess industrywide compliance, it has yet to determine the level of technical resources needed to adequately evaluate the Year 2000 conversion efforts of the thrifts and vendors who service them. Instead, OTS is using its existing resources to perform the evaluations. Specifically, OTS is using its 24 information systems examiners to (1) evaluate the progress of the roughly 250 institutions with in-house or complex systems, (2) work with systems examiners from the other regulators to assess the progress of about 260 computer centers of data processing vendors that service thrifts, and (3) assist 84 OTS safety and soundness examiners with their evaluations of the remaining 1,000 institutions that rely heavily or entirely on vendors. As institutions and vendors progress in their Year 2000 efforts, we are concerned that the evaluations of the examiners will increase in length and technical complexity, and put a strain on an already small pool of technical resources. 
Without sufficient resources, OTS could be forced to slip its schedule for completing the current on-site exams or, worse, reduce the scope of its evaluations in order to meet its deadline. In the first case, institutions would be left with less time to remediate any deficiencies. In the second, OTS might overlook issues that could lead to failures. In either case, the risk of noncompliance by thrifts and service bureaus—and the government’s exposure to losses—is significantly increased. OTS officials told us they are in the process of adding four additional systems examiners. They also believe that it is effective to use safety and soundness examiners to perform Year 2000 assessments at the thrifts not visited by the systems examiners. Finally, these officials expressed concern that even if they could hire more technical examiners, it is very hard to find and hire staff with these skills. However, without the requisite analysis, OTS cannot know whether adding four additional examiners will meet its needs. In addition, by using safety and soundness examiners, OTS runs the risk of having examiners make incorrect judgments about the readiness of thrifts. This risk will only increase as we get closer to the millennium because the latter phases of correction—renovation, testing, and implementation—take a higher level of technical knowledge to assess whether these steps are performed correctly. Looking forward, the challenge for OTS—and the other regulators—is to make the best use of limited resources in the time remaining. The challenge is immense: thousands of financial institutions, numerous service providers and vendors, and a finite number of examiners and time to address the problem. By mid-1998, however, OTS and the other regulators should have available a good picture of how their industry stands. The on-site examinations will be complete, as will the assessment of vendors and service providers.
This information should give a clear picture of the size and magnitude of the problem: how many institutions are at high risk of not being ready for the millennium and require immediate attention, and which service providers are likely to be problematic. Further, by carefully analyzing available data, OTS should be able to identify common problems or issues that are generic to thrifts that are of similar size, use specific service providers, and so on. This in turn will allow regulators to develop a much better understanding of which areas require attention and where to focus limited resources. In short, regulators have an opportunity to regroup, develop specific strategies, and gain a more defined sense of where the risks lie and the actions required to mitigate those risks. OTS internal systems are critical to the day-to-day operation of the agency. For example, they facilitate the collection of thrift assessments, monitor the financial condition of thrifts, provide the Congress and the public with information on thrift mortgage activity, schedule and track examinations, and calculate OTS employee payroll benefits. As with the other regulators, the effects of a Year 2000 failure on OTS could range from annoying to catastrophic. OTS system failures could, for example, result in inaccurate or uncollected assessments, inaccurate or unpaid accounts payable, and miscalculated payroll and benefits. Because of the systems’ importance, Treasury hired a contractor to assess OTS’ internal Year 2000 efforts, and the contractor reported its results in October 1997. The contractor reported that OTS had made good progress in completing its assessment phase activities and was well underway in performing renovation and testing for selected systems. We also found that OTS was making substantial progress in remediating its systems. For example, 13 of OTS’ 15 mission-critical systems have already been renovated, tested, and implemented.
The remaining two—the Home Mortgage Disclosure Act system and the Interest Rate Risk system—are expected to be completed by the end of this year. OTS has also inventoried and assessed the nonmission-critical systems that were developed and maintained outside the Information Resources Management office at OTS’ headquarters. In addition, it has assessed other electronic equipment important to day-to-day operations, such as telecommunications equipment, office equipment, security systems, and personal computers and made plans to modify or replace the equipment it identified as being noncompliant. Despite OTS’ good efforts to convert its internal systems, the contractor (1) found that OTS had not prepared contingency plans as part of its assessment phase activities and (2) recommended that it develop such plans. As of the time of our work, OTS had not yet implemented this recommendation. It was still developing these plans to ensure continuity of operations in the event its remediated systems fail or the two systems being renovated are not fixed in time. Our Assessment Guide calls on agencies to initiate contingency plans during the assessment phase so that they have enough time to (1) identify the manual or other fallback procedures, (2) define the specific conditions that will cause the activation of these procedures, and (3) test the procedures. The agency expects to complete these plans by the middle of 1998. Our final concern is that even though OTS has corrected the majority of its mission-critical systems and is making good progress toward remediating other systems and equipment, it does not have a comprehensive Year 2000 program plan. To its credit, the agency has prepared plans for correcting its systems and has been reporting its progress to Treasury on a monthly basis. 
However, OTS did not develop a single plan providing a clear understanding of the interrelationships and dependencies among the automated systems that support its business operations, such as thrift supervision, office equipment, payroll, and facilities. Instead, OTS officials told us they prepared separate plans for (1) systems operated and maintained by the Information Resources Management office, (2) systems operated and maintained by other offices and regions, and (3) office equipment and facilities. Without an integrated plan, OTS cannot provide assurance that all systems and interrelationships have been assessed and corrected. This increases the risk that systems will not operate as intended in the year 2000 and beyond. In conclusion, Mr. Chairman, we believe that OTS has a good appreciation for the Year 2000 problem and has made significant progress, especially with regard to its effort to correct its own systems. However, OTS and the other regulators are facing a finite deadline that offers no flexibility. OTS needs to take several actions to enhance thrifts’ ability to meet the century deadline with minimal problems and to strengthen the agency’s ability to monitor the industry’s efforts and to take appropriate and swift measures against thrifts that are neglecting their Year 2000 responsibilities. We, therefore, recommend that OTS work with the other FFIEC members to complete their guidance to institutions on mitigating the risks associated with corporate customers and reliance on vendors. Further, OTS should work with the other FFIEC members to complete the contingency planning guidance by its April 1998 deadline. Additionally, a combination of factors—including starting the thrift assessment process late and issuing more specific guidance to thrifts at a relatively late date—is hindering OTS’ and the other regulators’ ability to develop more positive assurance that their institutions will be ready for the year 2000.
Accordingly, we recommend that OTS work with the other FFIEC members to develop, in an expeditious manner, more explicit instructions to thrifts for carrying out the latter stages of the Year 2000 process—renovation, validation, and implementation—which are the critical steps to ensuring Year 2000 compliance. Because OTS and the other regulators will have more complete information on the status of institutions, servicers, and vendors by mid-1998, we recommend that OTS work with the other FFIEC members to develop a tactical plan that details the results of its assessments and provides a more explicit road map of the actions it intends to take based on those results. This should include an assessment of the adequacy of OTS’ technical resources to evaluate the Year 2000 efforts of the thrifts and the servicers and vendors that service them. Finally, with regard to OTS’ internal systems, we recommend that the Director instruct the agency to develop (1) contingency plans for each of OTS’ mission-critical systems and core business processes and (2) a comprehensive Year 2000 program plan. Mr. Chairman, that concludes my statement. We welcome any questions that you or Members of the Subcommittee may have.
Pursuant to a congressional request, GAO discussed the progress being made by the Office of Thrift Supervision (OTS) in ensuring that the more than 1,200 thrifts it oversees have adequately mitigated the risks associated with the year 2000 date change. GAO noted that: (1) the year 2000 problem poses a serious dilemma for thrifts due to their heavy reliance on information systems; (2) regulators have a monumental task in making sure that financial institutions have adequate guidance in preparing for the year 2000 and in providing a level of assurance that such guidance is being followed; (3) further, regulators will likely face some tough decisions on the readiness of individual institutions as the millennium approaches; (4) GAO found that OTS is taking the problem very seriously and is devoting considerable effort and resources to ensure the thrifts it oversees mitigate the year 2000 risks; (5) despite aggressive efforts, OTS still faces significant challenges in providing a high level of assurance that individual thrifts will be ready; (6) in fact, the problems GAO found at OTS are generally the same as those found at the other regulators reviewed; (7) OTS was late in addressing the problem and consequently, is behind the year 2000 schedule recommended by both GAO and the Office of Management and Budget; (8) in addition, key guidance--being developed under the auspices of the Federal Financial Institutions Examination Council (FFIEC)--needed by thrifts and other financial institutions to complete their own preparations is also late which, in turn, could potentially hurt individual institutions' abilities to address year 2000 issues; (9) OTS needs to better assess whether it has an adequate level of technical resources to evaluate the industry's year 2000 efforts; (10) these problems hinder the regulators' ability to develop more positive assurance that institutions will be ready for the century date change; (11) consequently, the challenge for them at this point 
is how they can best use their resources from here to the millennium to ensure that thrifts, banks, and credit unions mitigate year 2000 risks; (12) OTS has done much to mitigate the risk to its mission-critical internal systems and has already renovated, tested, and implemented 13 of its 15 mission-critical systems; (13) however, it has not yet completed contingency plans necessary to ensure business continuity in case system renovations or replacements are not completed in time or do not work as intended; (14) compounding this problem is the fact that OTS has not developed a comprehensive year 2000 conversion program plan providing a clear understanding of the interrelationships and dependencies among the automated systems that support, for example, its supervisory functions, office equipment, and facilities; and (15) such a plan provides added assurance that all systems are assessed.
The distribution of food, including processing of food into different products, can be extensive and complex, with school districts receiving products from various sources. Once a food is produced by a particular company, it can travel to distributors, retailers, and/or processors before reaching schools. Sometimes large school food authorities can receive food directly from the originating company, but it is more typical for the food to travel through these middlemen. This complex distribution path can make it difficult to track food from beginning to end, a problem which arises during food recalls when distributors, processors, and retailers must determine and inform states and school districts which products were produced with recalled foods and which were not. Because this identification process does not occur all at once, FNS, states, and school districts sometimes learn about affected products over time (see Fig. 1). One component of the food distribution system that adds to the complexity of tracking individual ingredients is processing, whereby companies turn a food into one or more new foods. For example, according to USDA, Westland/Hallmark sent much of the commodity ground beef that it produced directly to processors and, sometimes, distributors. Distributors sent the beef to school districts, while further processors used the ground beef to create products schools can more readily use, such as meatballs and hamburger patties. Processors then sent these products to school districts, either directly or through distributors. Federally subsidized school meal programs, such as the National School Lunch Program, are administered by USDA’s Food and Nutrition Service (FNS), but several other USDA agencies are involved in procuring foods for the programs. FNS works with states to administer the school meal programs through local school food authorities. FNS subsidizes the school meal programs through donated commodities and cash payments. 
USDA’s Agricultural Marketing Service (AMS) purchases commodities such as beef, poultry, fish, egg products, fruits, and vegetables, while the Farm Service Agency (FSA) purchases commodities such as grains, peanut products, dairy products, and oils for the school meal programs and other commodity distribution programs. FNS officials estimate that almost 50 percent of these commodities are further processed. In some instances, USDA contracts directly with processors, while in other instances, states and school districts contract with processors and USDA diverts commodities to processors on the schools’ behalf to make specific foods, such as sending commodity beef to a processor to be turned into beef for tacos. Schools also use federal cash subsidies and their own operating monies to procure food and processed food products commercially, without involving USDA agencies. Food holds and recalls to protect consumers are governed by various laws, regulations, and policies. A series of events typically precedes a food hold or recall. Federal agencies—FSIS, FDA, or CDC—can become aware of a problem when a company identifies it and independently announces a recall, or through inspections, product testing, or an outbreak of a suspected food-borne illness. CDC works with state health departments to identify the specific food or product involved. Once the product and its source are identified, either FSIS or FDA—whichever has jurisdiction over the product—works with the affected company to conduct a food recall. Neither FSIS nor FDA has “mandatory recall authority”—the ability to force a company to recall a product. However, both FSIS and FDA can request that a company recall a product and, in most cases, the company complies. Either FSIS or FDA then classifies the recall from Class I to Class III: Class I: A recall of food that poses a reasonable probability of causing serious, adverse health consequences or death.
The PCA peanut product recalls were designated Class I because of the presence of Salmonella; the New Era canned vegetables recall was Class I because of the potential for botulism contamination.

Class II: A recall of food that poses a remote probability of adverse health consequences. The Westland/Hallmark beef recall was designated as Class II because of a remote probability of adverse health consequences due to proper inspection procedures not being followed at the meat processing plant.

Class III: A recall of food that will not cause adverse health consequences but does not meet product specifications, for example, a product that contains an undeclared but otherwise safe substance, such as excess water.

When a USDA commodity product is identified in a recall, FSIS or FDA contacts FNS. FNS then works with AMS or FSA to obtain more information on the affected commodity products. FNS then contacts the state agencies to whom it provided the product. The state agencies then notify school districts, which in turn notify the responsible persons at individual schools. Under USDA procedures, FNS is directed to notify states within 24 hours of learning of a recall, and the states are then expected to notify schools within 24 hours of receiving a recall notice from FNS. This process is used only when USDA commodities are involved, which account for 15 percent to 20 percent of the products used in school meals (see Fig. 2). If a state agency has FNS divert bulk commodity products on its behalf to a processor and the commodity is subsequently recalled, the appropriate procurement agency notifies the processor to which the commodity had been diverted. FNS does not alert the states as to which processors were affected.
If a state or school food authority procures food commercially, which accounts for 80 percent to 85 percent of products used in school meals, neither FNS, FSIS, nor FDA is responsible for notifying states and schools; the school food administrator is typically notified directly by the distributor, wholesaler, or whoever sold the school district the food.

Once schools are notified, recalls can expand if investigations reveal problems with products in addition to those initially recalled. For example, FDA or FSIS may discover that problems at a particular manufacturing plant are more longstanding than initially thought. In these instances, the recalling firm could issue additional recalls for other products or time periods. As a result, schools could end up serving affected products between the first and subsequent recalls.

In this report, we address holds and recalls by four companies that affected schools. From January 2009 through March 2009, PCA issued a recall—and expanded the recall on three separate occasions—for products it supplied. The companies that received or used its products also issued recalls, covering almost 4,000 types and brands of peanut-containing products. Recalls were initiated after CDC, FDA, and state investigations of illnesses suspected of being food-borne revealed Salmonella in peanut butter manufactured by PCA. Salmonella is an organism that can cause severe illness, particularly in the elderly, young children, and others with weakened immune systems. Since peanuts are under its purview, FDA posted PCA’s recall notices and monitored the recall as it developed. Schools in four states—Arkansas, California, Idaho, and Minnesota—received recalled commodity peanut products through the school meal programs that had not been further processed. In addition, commodity peanut butter was shipped to a further processor, which then distributed affected processed products to other states.
In January 2008, an animal protection organization released an undercover video of persons trying to force non-ambulatory cows to stand and walk at the Westland/Hallmark meat processing plant in Chino, California. Because of the mistreatment of the cattle, on January 30, 2008, FNS issued a 10-day hold on all commodity ground beef produced by Westland/Hallmark since October 1, 2006. On February 8, 2008, FNS extended the hold for 10 additional days. On February 17, FSIS announced a recall by Westland/Hallmark, designated as a Class II recall, of more than 143 million pounds of beef produced over a two-year period from February 1, 2006, to February 2, 2008, because proper inspection procedures were not followed when cows that had become non-ambulatory were not reinspected before they were slaughtered. FSIS testing found no problems in meat that was delivered for school meal programs, but concerns remained in Congress and elsewhere because non-ambulatory cows may pose an increased risk of bovine spongiform encephalopathy, also known as mad cow disease, which is linked to a rare but fatal degenerative brain disease in humans. FNS estimated that over 7,000 school districts in 46 states and the District of Columbia were involved in the recall of commodity beef products. FNS also estimated that approximately 50 million pounds of suspect Westland/Hallmark commodity ground beef was provided to schools, of which approximately 30 million pounds were served prior to the recall and about 20 million pounds were destroyed as a result of the recall. FSIS and FNS were not aware of any schoolchildren or any other persons getting sick from eating the recalled beef.

The New Era Canning Company issued a recall in December 2007 and expanded this recall on three subsequent occasions in early 2008, covering numerous types of New Era canned vegetables.
These products had been distributed nationwide as part of the USDA commodity program and were sold commercially under 10 different brand names over a five-year period. The products were recalled because the vegetables had not been adequately heated during the canning process and could have contained a bacterial toxin that causes botulism, a potentially life-threatening illness. According to FDA and FNS officials, there were no reported illnesses attributed to recalled products, and FDA reported that no toxins were found in product testing. The multiple recalls were the result of FDA, the Michigan Department of Agriculture, and New Era identifying additional products and time periods that could be affected. FNS officials reported that schools in 37 states received New Era products through the USDA commodity program. Schools received 516,432 cases of the recalled canned beans but had only 13,931 cases remaining at the time of the recall. It is unknown how many cases states and school districts purchased commercially.

On December 3, 2007, FNS issued an administrative hold on Glacier Sales potato rounds because of texture, taste, and odor issues. FNS officials said that Glacier Sales subsequently withdrew the product and worked with school districts to arrange reimbursement and/or replacement. FNS reported that 5 states had schools that were affected and that 6,480 cases of the product were involved during the hold, though additional states were affected once the company issued a withdrawal notice. Subsequent testing of the potato rounds found no health or safety problems.

As a result of a number of factors, FNS did not always ensure in the three recall cases we reviewed that states and schools received timely and complete notification about suspect food products provided to schools through the federal commodity program.
First, USDA has procedures that explicitly allow FSIS to provide FNS with immediate notification of investigations that could involve commodity products, which could allow FNS to issue a precautionary hold on the suspect product, but FDA and FNS do not have similar formal protocols. Second, in two recent recalls we reviewed, FNS followed the lead of FDA and removed foods from school meals when they were officially recalled, but did not work with FDA and the USDA procurement agencies to place a hold on the products when it first became aware of food safety issues at facilities that supplied commodities. Third, in its recall notices, FNS did not provide the complete and accurate available information that schools would need to identify all affected products in their inventory, particularly for processed products. In addition, states did not always provide schools with timely and complete information. FNS tried several mechanisms to provide information directly to schools; however, these did not work as intended, either in content or in timeliness. As a result, in some cases, schools served affected products in school meals. FNS is aware of these factors and is taking a number of steps to improve its processes.

When FSIS learns that a food within its regulatory jurisdiction—such as meat or poultry products—may be adulterated or mislabeled, USDA procedures allow for immediate notification of FNS. FSIS alerts FNS and the procurement agency, such as AMS, that there is a potential recall. In consultation with others, FNS determines whether to put a temporary hold on the product. If FNS decides to issue a hold, it notifies states and schools so they can remove the commodity products from school menus, pending additional testing and data collection. FSIS convenes a committee which, when commodities are involved, includes representatives of FNS and other agencies.
In the case of the Westland/Hallmark beef recall, FNS placed a hold on commodity beef products from the California plant prior to the publicly announced recall; however, in this case, the hold did not result from communication with FSIS. Instead, FNS officials said that following the media coverage of inhumane practices at the plant, they consulted with AMS and initiated a hold on January 30, 2008, for beef products produced at the Westland/Hallmark plant. However, rather than covering all products produced at the plant, the hold covered only products produced after October 1, 2006. FSIS officials said that they did not have an ongoing investigation at the time, but that a USDA investigation was started soon after. According to FNS officials, they were subsequently included in FSIS recall discussions, and on February 17, FSIS announced the recall. The recall covered a longer time frame than the FNS hold, including all beef produced after February 1, 2006. As a result, some schools could have served beef produced between February 1, 2006, and October 1, 2006, during the FNS hold, even though this beef was later recalled.

Although FNS works to help ensure the safety of USDA commodities that may be served in schools, FNS stated that it is not responsible for taking food safety actions for products commercially procured by schools. This distinction led to confusion and a potential risk of consuming affected products when schools purchased Westland/Hallmark beef commercially during the FNS hold on Westland/Hallmark commodity beef. For example, a school district in California told us that during the FNS hold, some of its processors believed that Westland/Hallmark commercial products were safe, claiming that only Westland/Hallmark commodity beef was affected.
School district administrators said they explained to the processors that they did not want to receive any Westland/Hallmark product. Commercial products were subsequently included in the FSIS recall, so had the school district believed its processors, it could have served the suspect meat to schoolchildren.

Unlike FSIS procedures, FDA procedures do not specifically provide for immediate notification of FNS when FDA investigations include commodity products, although agency officials stated that they communicate frequently. FDA is responsible for the safety of virtually all food products, except for meats, poultry, and processed egg products. FDA procedures require that FDA notify USDA agencies, including FNS, “of recalls of FDA-regulated products that have been distributed to any USDA agency that may have involvement with the school lunch program.” However, the procedures give no indication that FNS can be included in the recall deliberations, as it is when an FSIS-regulated food is concerned. According to FDA officials, FNS was included in discussions and email correspondence during the investigation of the Salmonella outbreak that was traced to peanut products, but FDA did not provide us with information about notifications provided to FNS during the investigation of the New Era plant. According to FNS and FDA officials, they are working with AMS and FSA officials on developing a memorandum of understanding that will provide for specific notification to FNS, AMS, and FSA during FDA investigations that may involve commodities intended for school meal programs. However, the agencies have not established a time frame for completing the memorandum of understanding.

FNS and the USDA procurement agencies determined whether commodity products were involved after receiving FDA announcements of recalls of New Era and PCA products.
For the initial PCA recall on January 13, 2009, FNS officials said that FSA, which procures food for USDA commodity programs, checked for commodity peanut butter purchases for school meal programs from PCA’s Blakely, Georgia, plant and found that there were none within the time period identified in the notice, so FNS did not notify states to take any precaution with commodity peanut products. Subsequently, after two FDA announcements of recall expansions, on January 23, 2009, FNS posted to its Web site a statement that none of its commodities were affected by the PCA recall. Five days later, on January 28, 2009, following additional inspection and review at the Blakely plant, FDA announced another PCA recall notice, which expanded the manufacturing dates and products subject to recall. Upon learning of the January 28 expanded recall, FNS worked with FSA to determine if commodities were affected. FNS informed the affected states of the recalled commodity products the following evening.

Similarly, after the first New Era recall announcement in December 2007, FNS officials said that AMS checked for commodity canned bean purchases from New Era and found that it had purchased other products from New Era, but not those that were part of the recall, so FNS did not notify states to take precautions with New Era commodity products. In January 2008, New Era expanded its recall to include additional products. FNS worked again with AMS which, this time, determined that commodity products were affected. The following day, FNS informed affected states of the recall. FNS did not issue administrative product holds after it was notified about the initial recalls of New Era and PCA products.
In both the PCA and New Era situations, the initial recalls did not include commodity products, but in both cases, commodity products were eventually recalled because the recall was expanded either to include products manufactured over a longer time period or to include more products manufactured at the same plant. USDA hold and recall guidance does not indicate what factors and criteria FNS should consider when determining whether to institute an administrative hold. FNS, in consultation with the responsible procurement agency, could have placed a hold on all commodity products produced by these companies when it became aware of a potential food safety issue, regardless of when the products were produced, particularly given the serious health risks of botulism and Salmonella potentially posed by the recalled products. Instead, FNS relied strictly on the recall notices and only notified schools about the potential hazards of commodity products after the firms had expanded their recalls to specifically include products purchased through the commodity program. Because FNS did not immediately place a hold on all PCA peanut products and New Era canned vegetable products at the time of the initial recalls, children may have consumed these products through the school meal programs—products that were later included in the expanded recalls. According to CDC, of the 691 individuals sickened, 226 were school-aged children, of whom 46 were hospitalized due to consuming Salmonella-contaminated peanut products. CDC does not have information on how many of the children may have consumed the products in school.

In the Westland/Hallmark case, FNS officials said that they notified states on the same day they learned of the recall affecting commodity products, but FNS’s initial recall communication did not provide states with the complete and accurate information schools needed to identify all affected products on their shelves.
The initial recall communication issued by FNS informed states that the products that had been subject to the hold were now being recalled, but it did not inform states which specific processed Westland/Hallmark beef commodity products, such as sloppy joe mix, frozen beef patties, and other items offered by FNS to states to order for school meal programs, were also subject to the recall. It was not until February 26, 2008—almost four weeks after the original hold was issued—that FNS notified states that these further-processed products contained recalled beef. The longer recalled products remain unidentified, the greater the risk that these products could be inadvertently consumed.

FNS also did not provide states and schools with information to identify the processors of products containing Westland/Hallmark beef in instances where commodity beef ordered by states in bulk from FNS was provided directly to processors. In addition to allowing states to order processed commodity products from USDA, the Department also allows states to have FNS divert bulk commodities, such as beef, directly to processors of the state’s choosing for further processing. During the Westland/Hallmark hold, FNS notified further processors, providing them information that allowed them to identify affected beef products. FNS also advised states in its recall instructions to contact their processors to determine if their state or schools had received further processed food containing recalled beef. USDA’s procedures do not specify how and when processors are to inform states and schools of recalled products and, as in the Westland/Hallmark recall, FNS officials said that they did not oversee this notification to ensure that further processors promptly informed states and schools.
Moreover, although FNS knew which further processors received affected Westland/Hallmark beef, it did not provide the names of these further processors to states and schools, because FNS considers it the responsibility of the processor to contact consignees, in this case, states and schools. As a result, states and schools had to wait for further processors to identify and inform them of affected products. Some school food administrators told us that they received information from further processors for some products weeks after the initial Westland/Hallmark hold announcements, during which time affected products were served in some school meals. Moreover, in its initial administrative hold notice, FNS did not alert states that further processors often commingle beef from multiple sources to create end products, which means that states and schools could receive affected end products even if the bulk beef they diverted to further processors came from a plant other than Westland/Hallmark. After the announcement of the Westland/Hallmark administrative hold identifying the affected beef, officials in one state said they assumed that none of the state’s further processed beef products were affected, because the state had not had FNS divert Westland/Hallmark beef to processors on its behalf. However, almost three weeks after the hold announcement, the state said it learned from FNS that beef processors often commingle commodity beef and realized some of its further processed products were made, in part, with affected beef from other states. Due to the confusion, schools in the state had likely been serving products in school meals for several weeks that should have been put on hold.

FNS officials told us that they are in the process of rewriting the USDA recall procedures and that the revision will address processors and further processed products; however, FNS officials said that they have not established a time frame for completion.
Although USDA procedures direct states to notify affected schools within 24 hours of receiving a recall notice from FNS, states did not always forward the information within this time frame, and schools sometimes received critical information days later. FNS announced its administrative hold on Westland/Hallmark beef on January 30, 2008, but in one state, a school official told us that she did not hear about the hold from the state’s technical assistance office until five days later, on February 4, 2008. Similarly, after the Sunday, February 17, 2008, USDA announcement of the Westland/Hallmark beef recall, officials in four of the five states we interviewed said they did not notify schools until after the Monday holiday, on Tuesday, February 19, Wednesday, February 20, or Thursday, February 21. Officials in one state in which schools were open on the Monday federal holiday said that they were unable to provide information schools requested because the FNS regional office was closed for the holiday.

For the New Era recall of canned vegetables, officials in another school district told us they found out about the January 18, 2008, recall when FDA investigators showed up at the school five days later, on January 23, 2008, to check the district’s compliance with recall procedures; FDA investigators and school officials did not find any affected product remaining in inventory. Later that same day, school district officials said they received an email from the state informing them they had received a truckload of affected canned green and garbanzo beans several years before. State officials said they did not initially forward information about the recall because they assumed that the product was so old it had likely been consumed.
After receiving information about the Westland/Hallmark hold from FNS indicating that further processors were responsible for notifying states of further processed products containing Westland/Hallmark beef, states gave different instructions to school districts on what to do about the hold, resulting in different responses. For example, at the beginning of the beef hold, one state said that it instructed its school districts to place all processed beef products on hold until processors had time to determine which items were affected and which were not. As a result, this state’s schools had all affected beef on hold. On the other hand, according to a school district in a different state, that state did not instruct its school districts to place all beef products on hold, and state officials did not initially realize that some processed products could also be affected. A few days after the initial hold announcement, state officials determined that processed products from one processor could be affected and sent school districts an email informing them that many additional processed items were subject to the hold. As a result, a school district in this state told us that its schools may have served affected products in the interim.

Some school districts took the initiative to hold suspect products, pending final notification about all products affected by the recalls. In the case of Westland/Hallmark, some schools told us that media and parent inquiries about the safety of the meat served in schools prompted them to remove all beef from their school lunch menus after the initial recall. One school district in California, out of an abundance of caution, did not serve beef for the remainder of the school year. Because they stopped serving any beef products after the recall announcement, these school districts did not risk serving products, including processed products, that were later identified as the recall unfolded and expanded.
Supplemental notification methods offer the potential for FNS to communicate recall information directly to schools more quickly than under the standard notification procedures. The standard USDA procedures allow FNS 24 hours after learning of a recall involving commodities to notify states, and then allow an additional 24 hours for states to notify schools. Under this standard notification process, schools might not learn of a recall until 48 hours after it was announced by FSIS or FDA, during which time schools could unknowingly serve affected products. Although FNS could explore ways to reduce the standard notification time frames, supplemental notification methods providing information directly to schools, such as through email and Web site postings, could potentially provide schools with more timely information.

Because of the breadth of the recall, FNS officials said that they used the U.S. Department of Education’s crisis communication email system to send email alerts directly to all schools about the Westland/Hallmark beef recall, but this additional notification did not seem to improve communication to schools. FNS officials said this was the first time that they had used Education’s crisis system to ensure schools received prompt notification. However, this communication was not sent until February 22, 2008, more than 3 weeks after FNS had placed the commodity beef on hold and 5 days after the recall was publicly announced.

FNS also employed its own newly developed commodity alert system to notify school districts directly about the PCA peanut product recall, but the system did not appear to improve the content or timeliness of communications to schools. FNS’ Commodity Alert System was designed to email “instant notices” on food safety issues to registered subscribers. According to FNS, the system was first used January 30, 2009, to communicate that the PCA peanut product recall included commodity products.
However, the email was not sent until 2 days after FDA publicly announced an expanded recall of products containing suspect peanuts. More importantly, the email to subscribers did not identify the affected commodity products by name or the states or schools receiving them, but simply stated that “a limited number of products were identified as being purchased by USDA.” FNS said it did not include information on the products or states affected because alert emails could not exceed 300 characters of text. FNS subsequently assessed how many subscribers successfully received the January 30, 2009, email alert and found that 37 percent of those who had completed the initial registration, and who could have expected email alerts on important food safety problems, did not receive the email due to problems with their registrations. FNS stated that it would take steps to improve the registration process. In a subsequent alert, sent on March 17, 2009, regarding expanded PCA recalls of commodity peanut butter, FNS stated that two recall notices had been issued 20 days and 14 days earlier because USDA had purchased peanut butter associated with the PCA Plainview, Texas, plant.

FNS has also used its Web site to communicate food safety information to states and schools, but recent postings have not been timely or complete. FNS’ food safety Web site notes, “Here you will find information on food safety and security related to the assistance programs administered by FNS, as well as links to FNS’ food safety partners,” and includes information under the heading “Current Initiatives and Resources.” However, we found only a single posting for the New Era canned vegetables recall, and it addressed only the initial New Era recall and the first recall expansion, not the second and third recall expansions that involved commodities.
According to the Web site, “No USDA-purchased commodities are involved at this time.” The Web site did not inform states, schools, parents, and the public that two subsequent New Era recalls did include USDA-purchased commodities. For the PCA recall, FNS posted a statement on January 30, 2009, 2 days after FDA publicly announced an expansion of the recall, to say that a limited number of recalled products were identified as USDA purchases. However, the announcement did not say whether schools were affected, which states were affected, or what products were affected. Another USDA statement posted to the FNS Web site on March 6, 2009, explained that 10 days earlier, FNS had learned that commodity peanut butter purchased from Sunland Inc. and distributed to schools was made from peanuts roasted at PCA.

FNS provides disposal instructions to states that are specific to each recall; these instructions are then tailored by each state to meet state or local public health procedures. For example, for the Westland/Hallmark beef recall, FNS guidance instructed states and school food authorities with 50 cases or fewer to destroy the product on site and render it unfit for human consumption by following guidance from state or local health authorities. If states or school food authorities had more than 50 cases, FNS guidance said to take the product to a landfill, have it incinerated, or send it for inedible rendering. States often revised the FNS notice before sending it on to school districts by changing the listed contacts or including additional disposal instructions specific to the state. For example, one state allowed its school districts to follow alternate methods of disposal suggested by local health departments.

Although all school districts we interviewed that had recalled products in their inventories reported disposing of them, at least two school districts did not follow all instructions provided by FNS and state officials.
For example, a school district official in one state told us her staff destroyed recalled New Era canned beans that had been opened by pouring the contents down the garbage disposal. FNS’ and FDA’s notices said not to open cans, and the FNS notice further said that if cans were already open, the contents should not be put in a garbage disposal because of the risk of exposure to the toxin that causes botulism. Another district worked with its distributor, which was storing the district’s recalled Westland/Hallmark beef products, to divide the recalled beef among its schools so the district did not exceed 50 cases at any one location. School district officials said this allowed them to dispose of the products on site, rather than make special arrangements with a landfill, as specified in FNS destruction instructions for school districts with more than 50 cases of recalled products.

In some instances, the destruction and disposal of recalled product was delayed as school food administrators searched for a means of disposal, increasing the risk that these products could be inadvertently consumed. Five of the 15 school districts we interviewed that had affected Westland/Hallmark products in stock reported challenges in disposing of affected beef products in landfills. For example, an official from one district found that the district’s trash pickup company would not take 15,000 pounds of affected beef because it did not accept food. There was no local landfill, and a neighboring town’s landfill also refused to take the beef. The food service director told us neither the state nor the city health department was able to help locate a disposal site. Finally, at the suggestion of someone in another state, the food service director arranged disposal at a landfill in another town, but the director had to arrange for delivery of the 15,000 pounds of meat to the landfill. The raw beef was buried, in accordance with state instructions.
A school district in another state told us the city landfill would not accept raw beef, so after making inquiries, the food service director learned he could send the beef to a rendering company, which turns food into other products. For a fee, the rendering company collected the district’s 400 cases of raw beef from the district warehouse. However, the rendering company required that all beef be removed from its packaging, so the food service director and his staff spent a few hours opening the 400 cases and separating the meat from its wrapping. Figure 3 shows a large quantity of beef from one school district at a transfer station, prior to being transported to a landfill for disposal.

Schools that had smaller quantities of recalled beef typically did not report difficulties in disposing of recalled products. FNS destruction instructions allowed school districts with smaller quantities to dispose of the suspect foods in their regular trash, such as by opening packages, dousing the food with bleach, and double-bagging it to prevent consumption before placing it in the trash receptacle. School districts also reported that disposal of commercially purchased foods was simpler, as the processor or distributor typically collected and disposed of the recalled products. Officials at some of the school districts we interviewed told us it was their distributor or processor who informed them of commercial recalls and then collected any affected product and/or stopped delivery. For example, one school district reported that its distributor collected and disposed of commercially purchased Westland/Hallmark beef.

Some school officials told us they were not reimbursed for all costs incurred due to recalls. USDA guidance defines which expenses are reimbursable and which are not. Reimbursable expenses include some transportation costs, as well as storage, destruction, and processing costs.
Schools and school districts are not reimbursed for administrative and personnel costs, including overtime paid to deal with a hold or recall, or for other foods purchased to replace recalled products. However, USDA guidance did not specifically address whether states can be reimbursed for commodities that have been processed with recalled ingredients, leading to inconsistencies in reimbursement in the Westland/Hallmark recall. After the Westland/Hallmark recall, schools were either reimbursed for the recalled beef or received a replacement. However, officials in one state told us its schools were not reimbursed for the cost of other commodities that had also been used in recalled processed beef products, such as commodity tomatoes used to make spaghetti meat sauce. In contrast, a school district in Texas was reimbursed for commodity cheese it had sent to a further processor, along with commodity beef, to make burritos and taco snacks. FNS officials told us that FNS reimbursed states for all commodity products, such as tomatoes and cheese, used in further processed products that were subject to recall. Reimbursement and replacement for recalled commodity products varied by recall. For the Westland/Hallmark recall, school districts provided the states with documentation on the quantity of recalled beef destroyed, and the states served as the intermediary for FNS reimbursement and replacement. For disposal costs related to New Era recalled products, FNS officials said they reimbursed the states, and the states then reimbursed schools. Most school districts did not receive reimbursement or replacement of New Era products because in eight of the nine states that had recalled product, the quantities destroyed were so small that states did not request product replacement or reimbursement. FNS officials said that only one state had a significant amount of the recalled products and that this state requested reimbursement, which FNS provided. 
Some school officials informed us that they found the overall reimbursement process confusing, and three states reported having to submit multiple claims. FNS general procedures and those specific to Westland/Hallmark did not explicitly describe all types of documentation necessary for reimbursement. One school district in Indiana reported that it was unclear what information was required for reimbursement, and staff spent a lot of time removing the code stickers and other identifying labels from recalled products, thinking they would need to submit them to the state. They later learned the code stickers and labels were not required. The district submitted a claim but was later asked by the state to submit additional documentation on disposal costs, such as mileage and labor, so more staff time was spent assembling this information and resubmitting the claim. Some school districts also found the reimbursement process to be lengthy. USDA procedures direct that reimbursement to states occur within 90 days of a recall and that states, in turn, reimburse school districts “in a timely manner.” Districts in several states that were reimbursed for New Era and Westland/Hallmark claims reported that they did not receive payment until many months after the recalls. In at least one state, state officials reported that they received reimbursement more than 90 days after the Westland/Hallmark recall. After receiving reimbursement from FNS, states may also have contributed to delays in providing reimbursement to schools. For example, food service staff in California told us their district filed for reimbursement of about $42,000 in March 2008 for Westland/Hallmark beef but had not been reimbursed by their state as of November 2008, eight months later. California state officials told us that reimbursement was delayed, in part, because the state could not disburse payments until the state budget was passed, which occurred in late September. 
Although both FSIS and FDA have procedures to systematically conduct and document quality checks to determine whether recalls are carried out effectively, the procedures did not ensure these checks were done in schools affected by recent recalls of USDA commodities. These checks, called effectiveness checks by FSIS and audit checks by FDA, involve visiting or contacting a sample of affected consignees—entities that received a recalled product, such as distributors, hospitals, restaurants, and schools—and determining whether they were notified of a recall; all affected product was located; affected product was properly disposed of; and all steps were completed in a timely manner. These checks help ensure that affected products are removed from the market and are not consumed. Both FSIS and FDA conduct quality checks of a sample of consignees; however, their procedures differ, and neither ensures that a sample of schools is included. In an overall review of FSIS and FDA food recalls, we previously reported that the agencies’ procedures for selecting the sample of companies to check did not ensure that all segments of a food distribution chain were included and that the checks were not always timely. FDA procedures do not require it to systematically monitor recalls in schools by explicitly sampling schools for audit checks, grouping consignees into categories, or reviewing audit checks by consignee category, such as schools. Nonetheless, many of the FDA audit checks for the New Era recalls were conducted in schools that may have received the product as a USDA commodity or procured it commercially. FDA officials said that although they are not required to do so, in this case they tried to give schools preference for selection in the sample if a school was identifiable from the available information. The FDA district coordinator told us that of 2,553 completed audit check reports on the New Era recalls, 823 were for schools. 
The district coordinator was able to identify schools for which audit check report forms were completed by the name on the audit form or because the person who completed the form wrote “school” under consignee type. “School” is not listed as one of the nine consignee types on the audit check form, which includes “retailer,” “hospital,” and others. Our review of the audit checks of schools in one of the states we visited indicated some schools were not properly notified or had not followed recall instructions. Also, in the remarks section of some of the FDA audit check forms, the preparer indicated the recall for the school was “ineffective” or “not effective.” The FDA district coordinator for the New Era recalls said the completed audit check forms were grouped by category, including a category for schools, and that any problems that were identified on the forms were addressed. However, FDA did not have documentation of any analysis that was done for the schools as a group to determine whether there were systemic problems, nor did it have documentation of corrective actions taken. FDA officials said that they conducted audit checks for the PCA peanut product recalls and that field staff were instructed to give priority to schools in making their selections for the audit checks, but only schools that procured the products commercially were included because the audit checks specifically excluded schools that received affected peanut products only through the school meals program. FDA officials said that they rely on FNS to conduct its own checks of schools that received affected commodities for school meal programs. FDA instructions for conducting audit checks for the PCA recalls included special provisions for selecting schools and other facilities that served vulnerable populations. 
However, at the time of our contact with FDA officials, they did not know if schools had procured affected peanut products commercially or had been selected for audit checks, and they did not have an assessment of audit check activity to date for schools or other consignees. According to FDA, the analysis of audit checks typically occurs further into the monitoring phase of the recall, closer to the termination phase. FSIS procedures explicitly allow for grouping those to be contacted for effectiveness checks into categories, such as schools, and selecting consignees from each category to create its sample. However, after the Westland/Hallmark recall, FSIS did not create a school category for its effectiveness checks, even though thousands of schools were affected. FSIS did ask FNS to provide names of schools and states affected by the Westland/Hallmark recall of commodity beef and received a list of over 7,000 affected school districts, but FSIS officials did not use this information to include the schools in its effectiveness checks. FSIS effectiveness checks for the Westland/Hallmark beef recall did not include any schools that received the beef through the commodity program. FSIS estimated there were 9,500 consignees who received recalled Westland/Hallmark commercial beef, not including schools and others that had received Westland/Hallmark commodity beef for federally subsidized food programs. FSIS officials said they did not know how many of the 9,500 consignees that had procured beef commercially were schools. FSIS determined its statistical sample would be 200 of the 9,500 consignees, using systematic sampling with a sampling interval of 47. Our review found the names of 2 schools, a preschool, and a school food distributor in the sample; both schools that were selected procured the product commercially. 
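The sampling approach described above can be illustrated with a brief sketch (hypothetical code, not FSIS’s actual tooling): systematic sampling picks a random starting point within the first interval and then takes every 47th consignee from the list, which draws roughly 200 names from a list of 9,500.

```python
import random

def systematic_sample(population, interval):
    """Systematic sampling: choose a random start within the first
    interval, then take every interval-th element thereafter."""
    start = random.randrange(interval)
    return population[start::interval]

# Hypothetical stand-in for the list of 9,500 commercial consignees.
consignees = [f"consignee_{i:04d}" for i in range(9500)]
sample = systematic_sample(consignees, 47)

# 9,500 / 47 is about 202, close to FSIS's stated target of 200.
print(len(sample))
```

Whether a school lands in such a sample depends entirely on its position in the consignee list, which helps explain why, absent a separate school category, only 2 schools appeared among the roughly 200 consignees selected.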
FSIS officials said they did not specifically include schools receiving recalled commodity products in their checks for the Westland/Hallmark recall, and FDA officials said the same of their checks for the PCA recalls, because both said that USDA was responsible for conducting its own checks of schools receiving commodities. Although FSIS and FDA procedures direct them to monitor the effectiveness of recalls, officials told us that they relied on FNS to conduct checks of schools affected by recalls of USDA commodity products; however, FNS does not conduct such effectiveness checks. FNS officials told us it was not their responsibility to check on the effectiveness of any of the three companies’ recent recalls covered in this review and that they relied on their regulatory partners, FSIS and FDA, to conduct these quality checks. FNS has authority to issue holds on USDA commodity products, in conjunction with regulatory and procurement agencies, but does not have procedures in place to conduct a systematic review of schools to determine whether schools received notice of a hold and followed instructions to keep the identified products from being served to students. FNS did not conduct such checks on its hold or hold extensions for Westland/Hallmark beef or Glacier Sales potato rounds. Protecting school children from food-borne illnesses depends on the efforts of many local, state, and federal entities. Agencies within USDA and HHS, including FSIS, FNS, and FDA, have critical roles in identifying food safety issues, disseminating information, providing guidance, and evaluating the effectiveness of food holds and recalls. While these agencies have related policies and procedures in place, recent recalls of products, from raw beef to peanut butter, have highlighted the importance of revisiting these policies and procedures to ensure they accomplish what they intend. 
Nearly 700 people, including over 200 school-aged children, were sickened by Salmonella during the 2009 recall of Peanut Corporation of America products and ingredients. And while it is not known to what extent the source of the bacteria in these cases of illness was a school snack or meal, federal and state agencies must ensure schools receive timely notification, adequate information, and clear instructions on food holds and recalls. Evaluations also must be conducted to determine the effectiveness of those efforts. School children are a vulnerable population, in part because they are more likely to suffer complications from food-borne illnesses, and in part because they may have less knowledge to make informed choices about the foods they consume. As such, USDA and HHS should make the policy and procedure changes necessary to ensure that the food children consume in schools is unadulterated and safe. The speed and complexity with which recalls unfold, often leading to multiple recalls of related products or covering longer manufacturing time frames, create challenges for agencies in their efforts to protect consumers—particularly school children—from potentially harmful foods. Although FNS, in conjunction with the responsible USDA procurement agency, can issue an administrative hold on suspect products prior to a recall—an action taken in the Westland/Hallmark recall—the lack of criteria and guidance on when to issue a hold may have contributed to a conservative response to the New Era and PCA recalls, whereby FNS did not preemptively issue a hold on products that were later recalled. Without such guidance, FNS will continue to face challenges in deciding when to issue administrative holds. The ability to issue holds is a valuable tool that allows FNS to act quickly to protect school children while investigations are ongoing. 
In addition, FNS and FDA officials said they are working on a memorandum of understanding about how the agencies will communicate during FDA food safety investigations. Such a document could provide FNS with important information when it considers administrative holds of suspect commodity products used in school meal programs, but no time frame has been established for completing it. Gaps in the protocols federal agencies follow in communicating with each other, and gaps in states’ subsequent communication with school districts, have led to delays in schools receiving notice of recalls and sufficient information on what actions to take. These delays were, in some instances, exacerbated by difficulties in identifying processed foods that contained recalled ingredients, in part because federal hold and recall guidance does not explicitly address the role of processors or distributors. As a result, some affected commodities were served to school children after holds and recalls were announced. In addition, insufficient guidance on disposal procedures for recalled products increased the risk that they could be inadvertently consumed. FNS officials said they have plans to address the role of processors and update the hold and recall procedures for USDA commodities, but they have not established a time frame for completing the revisions. It is important for FNS to make these changes as soon as feasible to avoid confusion and delays the next time a major recall involving processed products occurs. Given the current technology for almost instant communication, federal regulators could likely disseminate information through states to schools, and directly to schools, more quickly than under the standard procedures, which permit up to 48 hours to elapse by the time FNS communicates with states and states communicate with schools. 
New strategies for federal regulators to communicate directly with schools, such as the FNS Commodity Alert System used for the PCA recalls, are promising but have yet to deliver timely or complete information. Further, although FSIS and FDA perform checks of how effectively recalls are carried out, neither agency systematically monitors or evaluates holds and recalls in schools. While FDA selected some schools for its New Era recall audit checks, it did not document its analysis of audit checks conducted at schools, nor did it track corrective action taken as a result of those checks. Unless FSIS, FDA, and FNS revise their assessment procedures, these agencies will not be able to determine if additional actions are necessary to keep school children safe. We have previously reported that food safety oversight is a complex and fragmented system requiring major improvements. Yet smaller improvements in coordination, notification, and evaluation procedures in the near term could better equip states and schools to protect their students from unsafe foods. To better ensure the safety of foods provided to children through the school meal programs, we recommend that the Secretary of Agriculture and the Secretary of HHS take 12 actions to make improvements in three areas related to recalls affecting schools: interagency coordination; notification and instructions to states and schools; and monitoring effectiveness. We recommend that the Secretary of Agriculture direct FNS and the Secretary of HHS direct FDA to jointly establish a time frame for completing a memorandum of understanding on how FNS and FDA will communicate during FDA investigations and recalls that may involve USDA commodities for the school meal programs; the memorandum should specifically address how FDA will include FNS in its prerecall deliberations. 
We recommend the Secretary of Agriculture direct FNS to: develop guidelines, in consultation with AMS and FSA, to be used for determining whether or not to institute an administrative hold on suspect commodities for school meal programs; work with states to explore ways to speed notification and to improve the timeliness and completeness of direct communication between FNS and schools about holds and recalls, such as through the Commodity Alert System; take the lead among USDA agencies in establishing a time frame in which it will improve the USDA commodity hold and recall procedures to address the role of processors and determine distributors’ involvement with processed products that may contain recalled ingredients, to facilitate providing more timely and complete information to schools; revise its procedures to provide states with more specific instructions for schools on how to dispose of recalled commodities and obtain timely reimbursement; and institute a systematic quality check procedure to ensure that FNS holds on foods and products used by schools are carried out effectively. We recommend the Secretary of Agriculture direct FSIS to revise its procedures to ensure that schools are included in effectiveness checks. We recommend the Secretary of HHS direct FDA to: revise the Recall Audit Check Report form to include a consignee prompt for schools; revise FDA procedures to ensure schools are included in audit checks, either by drawing a separate schools-only sample or providing a selection preference for schools; and revise FDA procedures to ensure analysis of its audit checks is documented and any problems with recalls or audit checks affecting consignees involved with schools are identified and acted upon. We provided a draft of this report to USDA and HHS for review and comment. 
USDA stated that it generally agreed with and supported the recommendations of the report and provided additional information on the roles and responsibilities of all stakeholders involved in assuring the safety of food provided by USDA through its nutrition assistance programs. We have reprinted USDA’s comments in their entirety in appendix I. HHS stated that it agreed with the recommendations of the report and that GAO has raised important issues regarding the safety of foods provided to children through the school meals programs. We have reprinted HHS’s comments in their entirety in appendix II. Both USDA and HHS also provided technical corrections to the report, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will send copies of this report to the Secretary of Agriculture, the Secretary of Health and Human Services, the Secretary of Education, and relevant congressional committees. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. In addition to the contact named above, Kathryn A. Larin, Assistant Director; Sylvia Arbelaez-Ellis; Susan L. Aschoff; Susannah L. Compton; Jean L. Cook; Sarah A. Farkas; Alexander G. Galuten; Nisha R. Hazra; Lise L. Levie; Daniel S. Meyer; and Charles E. Willson made key contributions to this report.
Over the past few years, several food recalls, such as for beef and peanut products, have affected schools. It is especially important that recalls affecting schools be carried out efficiently and effectively because young children have a higher risk of complications from food-borne illnesses. GAO was asked to determine how federal agencies (1) notified states and schools about food recalls, (2) advised states and schools about disposal and reimbursement of recalled food, and (3) ensured that recalls were being carried out effectively. To do this, GAO reviewed and analyzed relevant documents and interviewed federal and state officials, as well as officials from 23 school districts that had experience with at least one of four recent cases involving the safety of food in the school lunch program. Despite its efforts, the U.S. Department of Agriculture's (USDA) Food and Nutrition Service (FNS), which oversees federal school meals programs, did not always ensure that states and schools received timely and complete notification about suspect food products provided to schools through the federal commodity program. The federal commodity program provides food to schools at no cost to the schools, and accounts for 15 to 20 percent of food served in school meals. During 3 recent recalls, FNS notified states, but in only one case did it inform schools to hold and not serve suspect foods prior to an official recall of commodity products. When a videotape aired by the media showed inhumane treatment of cattle at a plant that provided beef to the commodity program, FNS told states to have schools stop serving the company's beef weeks before the official recall of commodity beef was announced. However, when the U.S. 
Department of Health and Human Services' (HHS) Food and Drug Administration (FDA) recalled suspect peanut products and canned vegetables in two other cases, FNS did not inform states and schools to hold and not serve the companies' commodity products until the recalls were expanded to include those products--weeks later. FNS's initial notification to states regarding recalls did not provide complete information on the full range of products affected. Instead, states and schools continued to receive information on multiple other recalled products over time. It sometimes took states and schools a week or more to determine what additional products were subject to a recall, during which time they unknowingly served affected products. FNS provided instructions for disposal and reimbursement of recalled products to states, which, in turn, provided instructions to schools; nonetheless, some schools experienced problems. Some schools reported to GAO problems in finding landfills that would accept large quantities of recalled products. Some schools also reported that reimbursement instructions were not clear, that reimbursement was delayed for months, and that not all of their expenses related to the recalls were reimbursed. Although procedures at both USDA's Food Safety and Inspection Service (FSIS) and FDA direct them to conduct recall quality checks, neither agency included the thousands of schools that had received recalled USDA commodity products in its checks for the beef and peanut recalls because each thought FNS conducted these checks. As a result, they were unable to ensure that the recalls were being carried out effectively by schools. FNS officials said that they did not conduct any kind of systematic quality checks of schools receiving recalled commodities because they relied on FSIS and FDA to conduct such checks. FDA did include schools in its canned vegetable recall audit checks, and some may have received recalled-commodity canned vegetables. 
However, because FDA does not systematically sample for schools or analyze results of the quality checks for the group, the agency cannot be assured that the recall was carried out effectively in schools.
Overall, NTSB has made progress in following leading management practices in the eight areas in which we made recommendations in 2006. Our recommendations are based on leading practices identified through our governmentwide work that are important for managing an agency. Although NTSB is a relatively small agency, such practices remain relevant. Figure 1 provides a summary of NTSB’s progress in implementing our 12 management recommendations. Among the areas in which NTSB has made the most progress is improving communication from staff to management, which should help staff and management build more constructive relationships, identify operational and work-life improvements, and enable management to better understand and respond to issues faced by investigators and other staff. Agency managers have, for example, hosted brown bag lunches with staff to facilitate communication and conducted periodic surveys of employees to determine, among other things, their level of satisfaction and ways to improve communication. In addition, NTSB has made significant progress in improving its strategic planning and human capital management, and progress in developing an information technology (IT) strategic plan. For example, NTSB has revised its strategic plan to follow some performance-based requirements, and it has developed strategic human capital and IT plans. Although these plans still offer room for improvement, they establish a solid foundation for NTSB to move forward, both broadly as an agency and specifically with respect to IT efforts. In addition, NTSB has made significant progress in improving its knowledge management (i.e., a way for it to create, capture, and reuse knowledge to achieve its objectives). 
While the agency has adopted a strategy for knowledge management activities and hired a chief information officer (CIO) to implement policies and procedures on information sharing, until NTSB completes its strategic training plan, which NTSB has told us will include a knowledge management component, the implementation of NTSB’s knowledge management strategy will remain unclear. To its credit, NTSB has taken some steps to improve its training activities, such as hiring a training officer in April 2007 and requiring all staff to complete individual development plans aimed at improving their capabilities in support of the agency’s needs; however, NTSB does not expect to complete a strategic training plan until later this year. In addition, NTSB’s core competencies and associated courses for its investigators lack sufficient information on the knowledge, skills, and abilities for each competency to provide assurance that the agency’s training curriculum supports its mission. NTSB has also improved some aspects of its financial management by correcting a violation of the Anti-Deficiency Act related to purchasing accident insurance for employees on official travel, making progress toward correcting another violation of the Act related to lease payments for its training center, and receiving an unqualified or “clean” opinion from independent auditors on its financial statements for fiscal years ending September 30, 2003, through 2007. However, NTSB has made limited progress in developing a full cost accounting system to track the time employees spend on each investigation and in training. It intends to request funding to begin this effort in fiscal year 2010. Without a full cost accounting system, project managers lack a comprehensive means to understand how staff resources are utilized and to monitor workload. 
Until NTSB improves its financial management and develops a strategic training plan, it will miss the opportunity to better understand how its limited resources are applied to activities that support the agency’s mission, such as accident investigation, as well as individual staff development. In addition, a provision of NTSB’s reauthorization proposal would exempt the agency from the Anti-Deficiency Act and allow it to incur obligations for both the acquisition and lease of real property in advance or in excess of an appropriation. If Congress decides to grant this exemption, we suggest narrower authority that addresses NTSB’s particular need to obtain a new lease for its headquarters when the current lease expires in 2010. For example, authority to enter into leases for up to a specified number of years using annual funds over the term of the lease would be a more appropriate option. Typically, federal agencies do not require such an exemption because they rent real property through the General Services Administration (GSA), which has realty specialists, staff knowledgeable about the leasing market, and experience in lease administration. As part of the fee that GSA charges agencies (7 percent for NTSB), agencies have the ability to walk away from a lease with 120 days’ notice. If NTSB does not lease through GSA and instead is granted delegation authority to deal directly with lessors, it might not have the 120-day agreement and would be responsible for all aspects of negotiating and administering its leases. NTSB has improved the efficiency of activities related to investigating accidents, such as selecting accidents to investigate and tracking the status of recommendations, but it has not increased its use of safety studies (see fig. 2). Since 1997, NTSB has issued about 2,400 recommendations. 
The agency has closed about 1,500 (63 percent) of those recommendations; of those it closed, 88 percent were closed after acceptable action was taken, while 12 percent were closed with an “unacceptable” status. NTSB is required by statute to investigate all civil aviation accidents and selected accidents in other modes—highway, marine, railroad, pipeline, and hazardous materials. NTSB has improved its process for selecting accidents to investigate by developing transparent, risk-based criteria for selecting which rail, pipeline, and hazardous materials accidents to investigate and which aviation accidents to investigate at the scene, or remotely, in a limited manner. The completion of its effort to develop similar criteria for marine accidents will help provide assurance and transparency that the agency is managing investigative resources in a manner that ensures a maximum safety benefit. NTSB has also made significant progress in improving its recommendation close-out process by working to automate this process by the end of this fiscal year. Completion of the automation should help speed the process and aid the expedient delivery of information about recommendation status to affected agencies. In addition, NTSB has begun to identify and share best practices for accident investigations among investigators in all transportation modes. These activities, when fully implemented, will help to ensure the effective and efficient use of agency resources. In contrast, NTSB has not increased its use of safety studies, which provide analyses of multiple accidents and usually result in safety recommendations. NTSB officials told us that the agency does not have enough staff to increase the number of safety studies and, therefore, they hope to identify more cost-effective ways to conduct the studies. We believe that greater progress in this area, which could result in more safety recommendations, would improve NTSB’s impact on safety. 
NTSB’s reauthorization proposal seeks to make several changes to the agency’s accident investigation process that have the potential to expand the scope of the agency’s authority. For example, the proposal would expand the definition of accidents to include events that affect transportation safety but do not involve destruction or damage. It is unclear whether this new authority would expand NTSB’s workload, since “events” are not defined in the proposal, unlike “accidents” and “incidents,” which NTSB already investigates and which are defined in regulation. In addition, NTSB has not explained its criteria for identifying events to investigate. Without explicit criteria, the agency cannot be assured that it is making the most effective use of its resources. While NTSB has taken steps to increase utilization of the training center and to decrease the center’s overall deficit, the classroom space remains significantly underutilized. The agency increased utilization of classroom space in the training center from 10 percent in fiscal year 2006 to 13 percent in fiscal year 2007. In addition, NTSB is finalizing a sublease agreement with the Department of Homeland Security to rent approximately one-third of the classroom space beginning July 1, 2008, which would help increase utilization of classroom space to 24 percent in fiscal year 2008. Further, in 2008, NTSB expects to deliver 14 core investigator courses at the training center. While we do not expect any classroom space ever to be 100 percent utilized, we believe a 60 percent utilization rate for training center classrooms would be reasonable, based on our knowledge of similar facilities. The agency’s actions to increase utilization also helped increase training center revenues from about $630,000 in fiscal year 2005 to about $820,000 in fiscal year 2007.
By simultaneously reducing the center’s expenses—for example, by reducing the number of staff working at the center—NTSB reduced the training center’s annual deficit from about $3.9 million to about $2.3 million over the same period. We believe these actions to increase utilization and their impact on the financial position of the training center are positive steps and represent some progress toward addressing our recommendations (see fig. 4). In addition, NTSB’s March 2008 business plan for the training center estimates that in fiscal year 2008 revenues will increase by about $570,000 to about $1.4 million and expenses will be $2.6 million, leaving a deficit of about $1.2 million. The increase in revenues is due primarily to subleasing all available office space at the training center to the Federal Air Marshals, starting in September 2007, for $479,000 annually. According to agency officials, the projected deficit is no more than the agency would pay to provide training and store accident wreckage somewhere else, but as discussed in detail in appendix I, we do not believe that the plan provides enough information to support this conclusion. Going forward, however, the agency’s business plan for the training center lacks specific strategies explaining how further increases in utilization and revenues can be achieved. According to agency officials, they do not believe further decreases in the deficit are possible. However, without strategies to guide its efforts to market its classes and the unused classrooms, NTSB may be missing further opportunities to improve the cost-effectiveness of the center. Overall, NTSB has made progress in resolving or addressing weaknesses identified in an independent external audit of NTSB’s information security program, as required by the Federal Information Security Management Act of 2002 (FISMA).
This evaluation, which was performed for fiscal year 2007, made eight recommendations to NTSB to improve compliance with FISMA, strengthen system access controls, and take steps to meet the requirements of the Privacy Act and related guidance from the Office of Management and Budget (OMB). Regarding FISMA compliance, NTSB made important progress by, among other things, hiring a contractor to perform security testing and evaluation of its general support system—an interconnected set of information resources that supports the agency’s two major applications. The contractor identified 113 vulnerabilities that collectively place information at risk; NTSB has documented these vulnerabilities in a plan of action and milestones. NTSB officials stated that they have resolved many of the vulnerabilities and have actions under way to address the rest. Figure 5 shows NTSB’s progress on each of the recommendations made in the independent evaluation. In addition to the weaknesses addressed in these recommendations, our limited review of NTSB’s information security controls identified two new weaknesses: unencrypted laptop computers and excessive access privileges on users’ workstations. Federal policy requires agencies to encrypt, using only National Institute of Standards and Technology (NIST) certified cryptographic modules, all data on mobile computers and devices that contain agency data unless the data are determined not to be sensitive by the agency’s Deputy Secretary or his/her designate. However, according to NTSB officials, the agency has not encrypted data on 184 of its 383 laptop computers. As a result, agency data on these laptops are at increased risk of unauthorized access and disclosure. According to NTSB officials, the hardware on these laptops is not compatible with NTSB’s encryption product.
To help mitigate the risk, NTSB officials stated that employees in the agency’s telework program use encrypted laptops and that unencrypted laptops are to remain in the headquarters building. NTSB officials stated that they have ongoing efforts to identify and test compatible encryption software for these laptop computers. Until NTSB encrypts data on its laptops, agency data will remain at increased risk of unauthorized access and disclosure. With regard to access, NTSB has inappropriately granted excessive access privileges to users. Users with local administrator privileges on their workstations have complete control over all local resources, including accounts and files. They can load software with known vulnerabilities, either unintentionally or intentionally, and can modify or reconfigure their computers in ways that could negate network security policies and provide an attack vector into the internal network. Accordingly, industry best practices provide that membership in local administrators’ groups be limited to only those accounts that require this level of access. However, NTSB configures all users’ workstations with these privileges to allow investigators to load specialized software needed to accomplish their mission. As a result, increased risk exists that these users could compromise NTSB’s computers and internal network. NTSB officials stated that they plan to deploy standard desktop configurations, which they believe should address this vulnerability; however, the agency has not yet provided a time frame for completing this work. In the meantime, the agency asserts that it continuously monitors and scans workstations for vulnerabilities and centrally enforces the deployment and use of local firewall applications.
Until NTSB takes action to remove or limit users’ ability to load software and modify configurations on their workstations, the agency is at increased risk that its computers and network may be compromised. We believe that by fully resolving the weaknesses described in the 2007 FISMA evaluation and addressing the newly identified weaknesses, NTSB can decrease risks to the confidentiality, integrity, and availability of its information and information systems. While NTSB has made progress in improving its management processes and procedures, the full implementation of effective management practices is critical to NTSB’s being able to carry out its accident investigation mission and maintain its preeminent reputation in this area. Further, until NTSB protects agency data and limits users’ access to its systems, its information and information systems are at increased risk of unauthorized access and disclosure. For continuing congressional oversight, it is important that Congress have updated information on the challenges that the agency faces in improving its management. While NTSB is required to submit an annual report on information security, there is no similar reporting requirement for the other management challenges. To assist NTSB in continuing to strengthen its overall management as well as its information security, we are making three recommendations to the Chairman of the National Transportation Safety Board. To ensure that Congress is kept informed of progress in improving the management of the agency, we recommend that the Chairman (1) report on the status of GAO recommendations concerning management practices in the agency’s annual performance and accountability report or another congressionally approved reporting mechanism.
We also recommend that the Chairman direct NTSB’s Chief Information Officer to (2) encrypt information and data on all laptops and mobile devices unless the data are determined to be non-sensitive by the agency’s deputy director or his/her designate and (3) remove users’ local administrative privileges from all workstations except administrators’ workstations, where applicable, and document any exceptions granted by the Chief Information Officer. We provided NTSB a draft of this statement to review. NTSB agreed with our recommendations and provided technical clarifications and corrections, which we incorporated as appropriate. To determine the extent to which NTSB has implemented the recommendations we issued in 2006, we reviewed NTSB’s strategic plan, IT strategic plan, draft human capital strategic plan, training center business plan, and office operating plans. To obtain additional information about these documents and other efforts to address our recommendations, we interviewed NTSB’s Chief Information Officer, Chief Financial Officer, General Counsel, and other agency officials, as well as representatives from NTSB’s employees union. To determine the extent to which NTSB has implemented other auditors’ recommendations related to information security, we reviewed work performed in support of the fiscal year 2007 FISMA independent evaluation, as well as FISMA independent evaluations performed by the Department of Transportation’s Office of Inspector General in 2005 and 2006. We obtained evidence concerning the qualifications and independence of the auditors who performed the 2007 FISMA review and determined that the scope, quality, and timing of the audit work performed in that evaluation supported our audit objectives. In addition, we reviewed agency documents and interviewed agency officials, including information security officials.
We compared evaluations presented in audit documentation with applicable OMB and NIST guidance and with the Federal Information Security Management Act. We also conducted a limited review of security controls on NTSB’s information systems. We considered NTSB to have made limited progress in implementing a recommendation when the agency was in the early planning stages and documents or milestones for actions did not exist or did not follow leading practices. Recognizing that many recommendations may take considerable time and effort to fully implement, we considered NTSB to have made significant progress in implementing a recommendation if the agency had taken steps beyond the early planning stages toward addressing the concerns. In this case, documents or policies had been developed that, for the most part, followed leading practices. We considered NTSB to have fully implemented a recommendation when the agency had fully implemented plans or processes that followed leading practices. This work was conducted in accordance with generally accepted government auditing standards between October 2007 and April 2008. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. For further information on this testimony, please contact Dr. Gerald Dillingham at (202) 512-2834 or by e-mail at dillinghamg@gao.gov or Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov.
Individuals making key contributions to this testimony include Teresa Spisak, Assistant Director; Don Adams; Lauren Calhoun; Elizabeth Curda; Jay Cherlow; Peter Del Toro; William Doherty; Fred Evans; Colin Fallon; Nancy Glover; David Goldstein; Brandon Haller; Emily Hanawalt; Chris Hinnant; Dave Hooper; Hannah Laufe; Hal Lewis; Steven Lozano; Mary Marshall; Mark Ryan; Glenn Spiegel; Eugene Stevens; Kiki Theodoropoulos; Pamela Vines; Jack Warner; and Jenniffer Wilson. In 2006, we found that NTSB had taken positive steps to improve communications from senior management to staff, such as periodically sending e-mails to all staff to share information on new developments and policies. However, the agency lacked upward communications mechanisms—such as town hall meetings, regular staff meetings, and confidential employee surveys—which are central to forming effective partnerships within the organization. To improve agency communications, we recommended that NTSB develop mechanisms that will facilitate communication from staff-level employees to senior management, including consideration of contracting out a confidential employee survey to obtain feedback on management initiatives.

Our Assessment of NTSB’s Progress

NTSB has fully implemented this recommendation. NTSB management officials have put in place processes to improve communication within the agency, and NTSB union officials told us that they believe upward communication has improved as a result. For example, managers and Board members hold periodic meetings with staff, such as brown bag lunches; conduct outreach visits to regional offices; hold town hall meetings in which NTSB employees ask questions of the managing director; and conduct meetings with union leadership to provide information on upcoming actions by the agency and to allow union leaders the opportunity to pose questions to management.
In addition, the agency has formed two bodies, comprising representatives from management and staff, that are intended to enhance internal communication, including upward communication. One body is composed of employees from NTSB’s administrative offices, and the other of employees from NTSB’s program offices. In addition, NTSB has begun conducting several periodic surveys of employees, including (1) a survey to measure staff satisfaction with internal communications; (2) a survey to obtain employees’ views on the mission statement and goals that NTSB proposed for its revised strategic plan; (3) four separate surveys to measure employee satisfaction with services provided by NTSB’s administrative, human resources, and acquisition divisions and NTSB’s health and safety program; and (4) a biennial survey to obtain employee feedback on NTSB’s human resources efforts. The latter survey supplements—by being conducted during alternating years—the Office of Personnel Management’s biennial survey of federal employees, which measures employees’ perceptions of the extent to which conditions characterizing successful organizations are present in their agencies. NTSB officials told us that because the communications survey indicated a need for NTSB’s individual offices to hold more frequent staff meetings, the agency has established a goal for fiscal year 2008 that, in each of its offices, 75 percent of staff be either satisfied or very satisfied with their office staff meetings. In 2006, we found that NTSB’s strategic plan, issued in December 2005 for fiscal years 2006 through 2010, generally did not follow the performance-based strategic planning requirements in the Government Performance and Results Act of 1993 (GPRA) and related guidance in the Office of Management and Budget’s Circular A-11. As required by GPRA, the plan had a mission statement and four general goals with related objectives, and it mentioned key factors that could affect the agency’s ability to achieve those goals.
However, the goals and objectives in the plan were not specific enough to determine whether they had been achieved, and the plan lacked specific strategies for achieving those goals, including a description of the operational processes, skills and technology, and resources required to meet the goals and objectives, as mandated by GPRA. Without a more comprehensive strategic plan, NTSB could not align staffing, training, or other human resource management with its strategic goals or align its organizational structure and layers of management with the plan. To improve agency performance in the key functional management area of strategic planning, we recommended that NTSB develop a revised strategic plan that follows performance-based practices.

Our Assessment of NTSB’s Progress

NTSB has made significant progress in implementing this recommendation. NTSB issued a revised strategic plan in February 2007 for fiscal years 2007 through 2012. The revised plan more closely follows GPRA’s performance-based requirements than did the previous plan, but it still does not fully follow several important requirements. (See table 1.)
The revised plan improves upon the previous plan by
- expressing most goals with sufficient specificity to enable a future assessment of whether they were achieved;
- including strategies for achieving 15 of 17 goals and objectives (NTSB describes strategies for achieving the other two goals in its annual operating plans);
- indicating that agency offices will establish annual performance goals designed to measure progress in achieving the general goals of the revised plan;
- detailing the use of program evaluations to establish or revise goals and objectives; and
- incorporating input that NTSB solicited from internal stakeholders (agency management and employees).
The revised plan does not fully follow two other GPRA requirements. First, the plan does not incorporate two of the five agency mission areas in its goals and objectives. NTSB officials told us that the agency chose to cover these two mission areas in the annual operating plans of the responsible offices because the areas are not the primary activity of the agency. Nevertheless, GPRA requires strategic plans to cover all mission areas. Second, although NTSB officials told us that the agency addressed concerns from Congress in its revised plan, the agency did not obtain comments on a draft of the plan from Congress. Nor did NTSB consult with other external stakeholders, such as the federal and state transportation agencies to which it addresses many of its recommendations. NTSB officials told us that they do not believe it would be appropriate to consult with these agencies, which sometimes prefer not to implement NTSB’s recommendations.
Nevertheless, GPRA requires agencies, when developing a strategic plan, to “solicit and consider the views and suggestions of those entities potentially affected by or interested in the plan.” In 2006, we found that NTSB was minimally following leading information technology (IT) management practices. NTSB did not have a strategic plan for IT, and it had not developed an enterprise architecture for modernizing its IT systems. It also lacked an investment management process to control and evaluate the agency’s IT investment portfolio. In addition, NTSB did not have IT acquisition policies, such as policies for project planning, budgeting and scheduling, requirements management, and risk management. These shortcomings suggested that NTSB was not ensuring that its management of information technology was aligned to fully and effectively support its mission. To improve agency performance in IT management, we recommended that NTSB develop plans or policies for IT, and that the IT plan include a strategy to guide IT acquisitions.

Our Assessment of NTSB’s Progress

NTSB has made progress in implementing this recommendation. In August 2007, NTSB issued an IT strategic plan that takes the following steps to address the concerns that led to the recommendation:
- It establishes goals and milestones for developing an enterprise architecture by 2012. (In November 2007, NTSB hired an enterprise architect to lead this effort.)
- It includes a draft investment management process.
- It establishes goals for implementing key aspects of the investment management process by 2008 and the full process by 2012.
- It establishes the goal of reaching Capability Maturity Model Integration level 2 (the level at which IT acquisitions and development can be said to be “managed” rather than “chaotic”) by 2012.
To fully implement our recommendation, NTSB needs to improve one important aspect of its IT strategic plan.
Although other GAO work and NTSB’s IT strategic plan stress the importance of aligning IT with agency strategic goals, the IT strategic plan is not well aligned with the agency’s strategic plan. Specifically, the IT plan does not address NTSB’s two top strategic priorities, namely (1) accomplishing objective investigations of transportation accidents to identify issues and actions that improve transportation safety and (2) increasing the agency’s impact on the safety of the transportation system. NTSB officials told us that the agency is improving its IT in ways that support these goals. For example, they said that efforts to develop a project tracking system and upgrade the agency’s investigation docket system support the first goal and that the redesign of NTSB’s Web site and improvements to its Freedom of Information Act information system support the second goal. In 2006, we found that NTSB was minimally following leading knowledge management practices. NTSB did not have a knowledge management initiative or program and lacked a chief information officer to implement policies and procedures on information sharing. To improve agency performance in knowledge management, we recommended that NTSB develop plans or policies for knowledge management.

Our Assessment of NTSB’s Progress

NTSB has made significant progress in implementing this recommendation. NTSB has taken the following steps to improve its knowledge management:
- It has issued an agency strategic plan and an IT strategic plan, as well as other plans and policies that include knowledge management activities.
- It has made the deputy managing director responsible for knowledge management activities within the agency.
- It has hired a chief information officer to implement policies and procedures on IT and information sharing.
NTSB still needs to take the following steps to improve its knowledge management:
- It needs to revise its strategic plan and IT strategic plan to clearly identify which agency plans, activities, and goals pertain to the management of agency knowledge.
- It needs to develop its strategic training plan, which NTSB officials told us will include a knowledge management component.
Until NTSB develops this plan and revises the other two plans, its knowledge management activities pertaining to training will remain unclear. In 2006, we found that NTSB had developed a draft agencywide staffing plan in December 2005 that followed several leading practices in workforce planning but lacked others, such as a workforce deployment strategy that considers the organizational structure and its balance of supervisory and nonsupervisory positions. In addition, while managers were involved in the workforce planning process, employees were not. Employee input provides greater assurance that new policies are accepted and implemented because employees have a stake in their development. To avoid excess organizational layers and to properly balance supervisory and nonsupervisory positions, we recommended that NTSB align its organizational structure to implement its strategic plan. In addition, we recommended that NTSB eliminate any unnecessary management layers.

Our Assessment of NTSB’s Progress

NTSB has fully implemented our recommendation to align its organizational structure to implement NTSB’s revised strategic plan. NTSB’s office operating plans describe how each office serves NTSB’s mission as defined in its mission statement. Further, the plans align each office’s performance objectives, and the actions addressing those objectives, with strategic goals in NTSB’s revised strategic plan. NTSB has made significant progress in implementing our recommendation to eliminate unnecessary management layers.
For example, to streamline the management structure in the Office of Aviation Safety, NTSB realigned the operations of 10 regional offices into four regions. This action simplified the reporting structure and made available a larger pool of accident investigators per region. NTSB union officials told us that the union has been involved in planning this consolidation. NTSB officials told us that the agency is not likely to consolidate any of its other modal offices because doing so would not allow the agency to eliminate supervisory positions, since the supervisors in these offices spend a large portion of their time performing investigative duties. In 2006, we found that NTSB partially followed leading human capital practices in workforce planning; performance management; and recruiting, hiring, and retention, and minimally followed leading practices in training and diversity management. In December 2005, NTSB developed a draft agencywide staffing plan that followed several leading practices but lacked a workforce deployment strategy that considered the agency’s organizational structure, its balance of supervisory and nonsupervisory positions, and succession plans to anticipate upcoming employee retirements and workforce shifts. NTSB had issued performance plans for its senior managers and overall workforce. However, the goals in NTSB’s strategic plan were not sufficiently specific for staff to know whether their performance was contributing to meeting those goals. NTSB had implemented several flexibilities to assist with recruiting and retention; however, it had neither a strategic recruitment and retention policy nor any succession plans. Further, NTSB did not follow the leading practices of integrating diversity management into its strategic plan and having a formal mentoring program and advisory groups to foster employee involvement in diversity management.
To ensure that NTSB’s human capital management is aligned to fully and effectively support its mission, we recommended that the agency develop a strategic human capital plan that is linked to its overall strategic plan. The human capital plan should include strategies on staffing, recruitment and retention, training, and diversity management.

Our Assessment of NTSB’s Progress

NTSB has made significant progress in implementing this recommendation. In April 2008, NTSB provided us its draft human capital plan, which includes strategies for addressing eight human capital objectives included in NTSB’s revised strategic plan. However, these strategies do not always have clear linkages to the strategic plan. For example, the draft human capital plan’s objective and strategies for attracting well-qualified applicants to critical occupations clearly align with the revised strategic plan’s objective of maintaining a competent and effective investigative workforce. However, the draft human capital plan’s objective and strategies for monitoring execution of human capital strategic objectives do not align with the revised strategic plan’s objective of project planning; while the strategies lay out the provision of annual updates regarding the human capital plan, they do not specifically address the development of a project plan or its evaluation. The draft human capital plan incorporates several strategies for enhancing the recruitment process for critical occupations and addresses succession management through several courses of action, such as implementing operations plans on executive leadership and management development. While the plan cites recruiting and retaining a diverse workforce, its strategies address recruitment but not other leading practices of diversity management that could contribute to retaining a diverse workforce, such as mentoring, employee involvement in diversity management, or succession planning.
For example, one strategy involves the use of the NTSB diversity resource guide, which narrowly focuses on the recruitment of underrepresented groups and does not address other leading practices of diversity management. Another diversity-related strategy involves incorporating diversity objectives into NTSB’s office operating plans, and those objectives also focus on recruitment. NTSB officials told us that the agency’s diversity management efforts focus on recruiting because NTSB needs to attract a more diverse workforce. The officials also told us that because the agency has a low attrition rate, it does not put as much emphasis on retention of a diverse workforce. We agree that it is important to attract a diverse workforce; however, a low attrition rate does not ensure a work environment that retains and promotes a diverse workforce. In 2006, we found that NTSB was minimally following leading practices in training, a key area of human capital management. In particular, NTSB had neither developed a strategic training plan nor identified the core competencies needed to support its mission and a curriculum to develop those competencies. Although NTSB staff annually identified the training they needed to improve their individual performance, without a core curriculum linked to core competencies and the agency’s mission, NTSB lacked assurance that the courses taken by agency staff provided the necessary technical knowledge and skills. To improve agency performance in the key functional management areas of strategic and human capital planning, we recommended that NTSB develop a strategic training plan that is aligned with the revised strategic plan, identifies skill gaps that pose obstacles to meeting the agency’s strategic goals, and establishes a curriculum that would eliminate those gaps. In addition, we recommended that NTSB develop a core investigator curriculum for each mode.
Our Assessment of NTSB’s Progress

NTSB has made limited progress in implementing our first recommendation. NTSB officials told us that later in 2008, the agency intends to complete a strategic training plan that is linked to the agency’s strategic goals. To help develop the plan, NTSB plans to survey staff about their skill gaps and to develop a curriculum to eliminate those gaps. In fiscal year 2008, NTSB began requiring all staff to complete individual development plans aimed at improving their capabilities in support of organizational needs. NTSB also plans to use information gleaned from these plans in developing its strategic training plan. Once NTSB has completed the training plan and the curriculum, we will be able to assess the extent to which they address our recommendation. NTSB has also made limited progress in implementing our second recommendation. Although NTSB has developed a list of core competencies and associated courses for investigators, the agency has not described the knowledge, skills, and abilities for each competency. We have previously reported that well-designed training and development programs are linked to, among other things, the individual competencies staff need for the agency to perform effectively. Without such descriptions, NTSB does not have assurance that its core curriculum supports its mission. In addition, NTSB has not described the specialized competencies for its investigators in the various modes. However, the marine office plans to develop specialized core competencies and a curriculum for its investigators in 2008, and NTSB’s other modal offices plan to do so later, after evaluating their investigators’ individual development plans. Because these curricula are important to help NTSB effectively meet its mission, we believe that NTSB’s senior managers and training managers should participate in the development and review of the curricula and the underlying competencies.
To its credit, NTSB has taken, or plans to take, the following additional steps to improve its training:
In April 2007, the agency hired a training officer, who is responsible for helping to identify training needs, developing related curricula, and evaluating training courses.
In fiscal year 2007, it began to encourage senior investigators to increase their participation in non-traditional training opportunities, such as spending time aboard oil tankers and in flight simulators to learn about marine and aviation operations, respectively.
In fiscal year 2008, it began requiring all staff to complete at least 24 hours of training per year.
In fiscal year 2008, it plans to evaluate the extent to which individual training courses resulted in desired changes in on-the-job behaviors for each of the 27 courses it plans to offer at the training center.
In 2006, we found that NTSB had violated the Anti-Deficiency Act because it did not obtain budget authority for the net present value of its entire 20-year training center lease obligation at the time the lease agreement was signed in 2001. This violation occurred because NTSB classified the lease as an operating lease rather than a capital lease. NTSB realized the error in 2003 and reported its noncompliance to Congress and the President. NTSB had proposed in the President’s fiscal year 2007 budget to remedy this violation by inserting an amendment in its fiscal year 2007 appropriation that would allow NTSB to fund this obligation from its salaries and expense account through fiscal year 2020. However, this proposal was removed once the budget went to the House and Senate Appropriations Committees, leaving the violation uncorrected. In 2007, NTSB believed it had violated the Anti-Deficiency Act on a separate matter, namely the improper use of its appropriated funds to purchase accident insurance for its employees on official travel, and it asked GAO for an opinion on the matter. 
We determined that this was a violation because NTSB did not have an appropriation specifically available for such a purpose, and the payments could not be justified as a necessary expense. We recommended that NTSB identify and implement actions to correct its violation of the Anti-Deficiency Act related to its lease of the training center. These actions could include obtaining a deficiency appropriation for the full costs of the lease, renegotiating or terminating the training center lease so that it complies with the Anti-Deficiency Act, or obtaining authority to obligate lease payments using annual funds over the term of the lease. We did not make a recommendation regarding NTSB’s other violation of the act because we reported that violation in a Comptroller General’s decision, and such decisions do not include recommendations. Nevertheless, a Comptroller General’s decision that an agency has violated the Anti-Deficiency Act, in and of itself, suggests that the agency should correct the deficiency. Our Assessment of NTSB’s Progress NTSB has made significant progress in addressing its violation of the Anti-Deficiency Act related to the lease payments for its training center. NTSB officials told us that because congressional appropriators do not want to appropriate funds for the remaining lease payments in a single appropriation law, NTSB worked with Congress to obtain authority to use its appropriations for fiscal years 2007 and 2008 to make its lease payments during those periods. To avoid future violations, NTSB will need to continue to work with Congress to obtain similar authority in its future annual appropriations. In addition, NTSB officials told us that the agency has asked Congress to ratify the lease payments it made from 2001 through 2006. NTSB has fully addressed its violation related to purchasing accident insurance for employees on official travel. 
In September 2007, NTSB reported the violation to Congress and the President, as required by the act. NTSB also successfully worked with Congress to remedy the violation through a fiscal year 2008 appropriation. NTSB cancelled the insurance policy, and NTSB officials told us that the agency has worked with Congress to obtain authority for future purchases of accident insurance. A bill to reauthorize the Federal Aviation Administration would provide NTSB with such authority. In 2006, we found that NTSB had made significant progress in improving its financial management by hiring a Chief Financial Officer and putting controls on its purchasing activities. As a result of actions taken by NTSB, the agency received an unqualified or “clean” opinion from independent auditors on its financial statements for the fiscal years ending September 30, 2003, 2004, and 2005. The audit report concluded that NTSB’s financial statements presented fairly, in all material respects, the financial position, net cost, changes in net position, budgetary resources, and financing in conformity with generally accepted accounting principles for the three years. However, without a full cost accounting system capable of tracking hours that staff spent on individual investigations, in training, or at conferences, NTSB lacked sufficient information to plan the allocation of staff time or to effectively manage staff workloads. To improve agency performance in the key functional management area of financial management, we recommended that NTSB develop a full cost accounting system that would track the amount of time employees spend on each investigation and in training. Our Assessment of NTSB’s Progress NTSB has made limited progress in implementing this recommendation. 
Although NTSB routinely assigns a project code to many non-payroll costs, its time and attendance system still does not allow the agency to routinely and reliably track the time that employees spend on each investigation or on other activities, such as training. However, NTSB officials told us that the agency wants to add the ability to charge costs to projects (i.e., activities) and that it has discussed this addition with the provider of most of NTSB’s financial system needs—the Department of the Interior’s (DOI) National Business Center. According to NTSB officials, this modification would enable direct recording by activity of hours worked and of corresponding payroll costs. NTSB officials also said that because the agency has not had sufficient funding to make this modification, it intends to request specific funding for this effort as part of its budget appropriation for fiscal year 2010. NTSB said that in the meantime, it will continue discussions with DOI and that it has begun to benchmark the planned modification against systems of agencies of comparable size. It anticipates that, once the effort is underway, DOI would work with NTSB to manage the implementation. In 2006, we found that for some transportation modes, NTSB had detailed, risk-based criteria for selecting which accidents to investigate, while for others it did not. For example, NTSB had criteria to select highway accidents for investigation based on the severity of the accident and the amount of property damage. In contrast, NTSB did not have a documented policy with criteria for selecting rail, pipeline, and hazardous materials accidents. Instead, the decisions to investigate accidents were made by the office directors based on their judgment. As a result, for these modes, the agency lacked assurance, and could not demonstrate transparently, that it was managing resources in a manner that ensured a maximum safety benefit. Such criteria were also important because NTSB did not have enough resources to investigate all accidents. 
To make the most effective use of its investigation resources and increase transparency, we recommended that NTSB develop orders for all transportation modes that articulate risk-based criteria for determining which accidents would provide the greatest safety benefit to investigate or, in the case of aviation accidents, explain which accidents are investigated at the scene, or remotely, in a limited manner. Our Assessment of NTSB’s Progress NTSB has made significant progress in implementing this recommendation. NTSB developed a transparent policy containing risk-based criteria for selecting which rail, pipeline, and hazardous materials accidents to investigate. This policy assigns priority to investigating accidents based on whether the accident involved a collision or derailment and whether it involved fatalities or injuries, among other factors. For marine accidents, NTSB has a memorandum of understanding with the U.S. Coast Guard that includes criteria for selecting which accidents to investigate. To enhance the memorandum of understanding, NTSB plans to consult with stakeholders and develop an internal policy on selecting marine accidents in 2008 once certain legal issues are resolved. In addition, NTSB has developed a transparent, risk-based policy explaining which aviation accidents are investigated at the scene, or remotely, in a limited manner, depending on whether they involve a fatality and the type of aircraft. In 2006, we found that NTSB’s process for changing the status of recommendations was paper-based and used sequential reviews, which slowed the process and prevented expedient delivery of information about recommendation status to affected agencies. We recommended that NTSB improve the efficiency of its process for changing the status of recommendations by computerizing the documentation and implementing concurrent reviews. Our Assessment of NTSB’s Progress NTSB has made significant progress in implementing this recommendation. 
NTSB recently completed a pilot program that involved electronic distribution of documents related to recommendation status. The results of that test are helping to guide development of an information system intended to help the agency manage its process for changing the status of recommendations. NTSB aims to fully implement the system by the end of fiscal year 2008. NTSB said that the system is being developed to support concurrent reviews. When fully implemented, this system should serve to close our recommendation. NTSB faced challenges in developing its reports efficiently; partly as a result, its investigations of major accidents routinely took longer than 2 years to complete. These challenges included multiple revisions of draft investigation reports at different levels in the organization, excessive workloads for writer/editors, and too few final layout and typesetting staff. NTSB had taken several actions aimed at shortening report development time, such as reemphasizing its policy on holding report development meetings to obtain early buy-in on report messages and holding modal directors accountable for specific issuance dates. We also identified practices in certain offices, such as the use of a project manager or deputy investigator-in-charge to handle report production, that had the potential to improve the efficiency of the agency’s report development process if used by all modal offices. To enhance the efficiency of its report development process, we recommended that NTSB identify better practices in the agency and apply them to all modes. NTSB should consider such things as using project managers or deputy investigators-in-charge in all modes, using incentives to encourage performance in report development, and examining the layers of review to find ways to streamline the process, such as eliminating some levels of review and using concurrent reviews as appropriate. 
Our Assessment of NTSB’s Progress NTSB has made significant progress in implementing this recommendation. NTSB examined its report development process and made several improvements. For example, NTSB directed its office of safety recommendations and advocacy to provide comments on draft reports at the same time as other offices, instead of beforehand. NTSB estimates that this has reduced the time it takes to develop a report by 2 weeks. NTSB officials also told us that the agency established and filled a permanent position with primary responsibility for quality assurance in the report development process. In addition, NTSB officials told us that the agency held a comprehensive training program in February 2008 for investigators-in-charge to learn about and share best practices across NTSB’s modal offices related to investigations and report development. NTSB also took or is taking the following steps to improve the efficiency with which Board members are able to review and approve draft reports:
It is relying more on electronic rather than paper distribution of draft reports.
It reduced the time allotted to Board members to concur or non-concur with staff responses to a Board member’s proposed revisions from up to 20 days to up to 10 days.
It is developing an information system to manage the process, which it aims to fully implement by the end of fiscal year 2008.
Aside from its highway office, which was already doing so, NTSB’s modal offices decided not to use project managers or deputy investigators-in-charge to lead report development because the offices did not believe that doing so would appropriately address their report development issues; NTSB did not provide any further explanation of the basis for this decision. 
NTSB officials told us that its office of marine safety has improved the efficiency and effectiveness of its report development process by shifting responsibility for writing reports from three writer/editors to investigators-in-charge; the office’s one remaining writer/editor now focuses on editing. Finally, in December 2007, NTSB’s office of railroad, pipeline, and hazardous materials safety hired a deputy chief in the railroad division who will be responsible for streamlining the division’s report development process. In 2006, we found that in addition to its accident investigations, NTSB conducts studies on issues that may be relevant to more than one accident. These safety studies, which usually result in recommendations, are intended to improve transportation safety by effecting changes to policies, programs, and activities of agencies that regulate transportation safety. From 2000 to 2005, NTSB completed only four safety studies; NTSB officials told us that the number of safety studies it conducts is resource-driven. Industry stakeholders stated they would like NTSB to conduct more safety studies because the studies address NTSB’s mission in a proactive way, allowing for trend analysis and preventative actions. NTSB officials recognized the importance of safety studies, and they said that they would like to find ways to reduce the time and resources required to complete the studies. We concluded that NTSB’s limited use of safety studies to proactively examine and highlight safety issues may limit the effectiveness of its efforts to improve transportation safety. To be more proactive in identifying and correcting safety problems before accidents occur, we recommended that NTSB increase its utilization of safety studies. Our Assessment of NTSB’s Progress NTSB has made limited progress in implementing this recommendation. NTSB has not completed any safety studies since we made our recommendation and has only one study in progress. 
Although it has established a goal of developing two safety study proposals per year and submitting them to NTSB’s Board for approval, it does not have a goal related to completing safety studies. NTSB officials told us that the agency still does not have enough staff to increase its output of safety studies on its own. NTSB told us that it has therefore begun to place more emphasis on a number of alternative products to safety studies that address important safety issues but are not as resource intensive. In addition, NTSB is examining the potential of using contractors to perform certain aspects of safety studies, such as data collection, and of conducting some studies in collaboration with other entities, such as the National Aeronautics and Space Administration, the Federal Aviation Administration, a national laboratory, and foreign accident investigation organizations. In 2006, we found that the training center was underutilized, with less than 10 percent of the available classroom capacity being used during fiscal years 2005 and 2006. This contributed to the training center not being cost-effective, as the combination of the training center’s revenues and the external training costs avoided by NTSB staff’s use of the facility did not cover the center’s costs. We recommended that NTSB maximize the delivery of core investigator curriculum at its training center. Our Assessment of NTSB’s Progress NTSB has made significant progress in implementing this recommendation by scheduling 14 core investigator courses at its training center in fiscal year 2008. In addition, NTSB started a new workforce development curriculum intended to address competencies not directly related to investigative activity, such as information security and written communications. NTSB officials told us that since it began this curriculum, the frequency and attendance of classes have increased significantly, but we could not verify this statement. 
In 2006, we found that NTSB’s training center was not cost-effective, as the combination of the training center’s revenues and external training costs avoided by NTSB staff’s use of the facility did not cover the center’s costs. As a result, those portions of the training center’s costs that were not covered by the revenues from tuition and other sources—approximately $6.3 million in fiscal year 2004 and $3.9 million in fiscal year 2005—were offset by general appropriations to the agency. While NTSB was generating revenues from other sources, such as renting training center space for conferences and securing contracts that allowed federal agencies to use training center space for continuity of operations in emergency situations, the training center was underutilized, with less than 10 percent of the available classroom capacity being used during fiscal years 2005 and 2006. NTSB lacked a comprehensive strategy for addressing this issue. We recommended that NTSB develop a business plan and a marketing plan to increase utilization of the training center or vacate its training center. NTSB should determine the costs and feasibility of alternative actions such as adding more courses for NTSB staff, moving headquarters staff to the center, subleasing space to other entities, or buying out the lease. Our Assessment of NTSB’s Progress NTSB has made significant progress in implementing this recommendation. For example, according to NTSB, it assessed the advantages and disadvantages of moving headquarters staff and functions to the training center but determined it was not cost effective. NTSB also told us that it determined that buying out the training center lease was not an available option. NTSB completed a draft business plan in March 2007 and a revised business plan in March 2008. 
We reviewed the 2007 draft plan and concluded that the overall strategy presented in the business plan to hire a vendor to manage and operate the training center was reasonable, but the plan provided too little rationale for its marketing and financial assumptions for us to assess the validity of this strategy. In July 2007, NTSB abandoned the strategy laid out in its business plan because it could not find a suitable vendor. While certain aspects of the revised business plan have been improved over the previous plan, overall, the revised plan lacks key financial and marketing information that is essential to a business plan. For example, NTSB’s revised business plan does not contain historical financial information or forecast financial information beyond fiscal year 2008. Further, the plan does not describe assumptions included in the plan, such as the inclusion of imputed fees for NTSB students in NTSB’s tuition revenues. In addition, although the revised business plan contains some goals, such as subleasing space to other federal entities and obtaining an additional continuity of operations agreement, the plan does not contain strategies for achieving these goals. Further, NTSB’s revised business plan indicates that the training center is cost-effective if cost savings—such as avoided costs of renting outside space for one regional office and storage of the reconstructed wreckage of TWA flight 800—are accounted for. However, the plan does not provide enough information to support this conclusion. While we believe that NTSB is justified in offsetting expenses that the agency would incur in the absence of the training center, the plan does not explain how NTSB estimated the values of these offsets. 
The plan does not include a rationale for assuming that NTSB would have to maintain all 30,000 square feet of warehouse space in the absence of the training center, or that space for both its regional aviation investigation office and the warehouse would cost NTSB $35 per square foot if rented elsewhere. In addition, it is not clear why certain items, such as the warehouse space rental, are included as offsets, while other items, such as savings for necessary accident investigation and family assistance training space needs, are not. Finally, the plan lacks discussion of cost-saving alternatives, such as using space already available at NTSB headquarters for certain offset activities, including select training courses. When asked about these shortfalls in the business plan, agency officials indicated that there was no flexibility in changing the configuration of the warehouse space, requiring the warehouse space to be considered an offset. In contrast, office and training space is included in the financial analysis due to its flexibility for expanded utilization. The agency did not respond to our other comments about the business plan. NTSB has taken steps to increase utilization of the training center and to decrease the center’s overall deficit, including the following:
NTSB subleased all available office space at its training center to the Federal Air Marshal program at an annual amount of $479,000.
NTSB increased utilization of the training center’s classroom space and the associated revenues from course fees and from renting classroom and conference space. From fiscal year 2006 to fiscal year 2007, NTSB increased utilization of classroom space from 10 to 13 percent and increased revenues by over $160,000. NTSB officials expressed concerns with our calculation of utilization rates because they assumed that holiday weeks and other scheduling difficulties were not considered in the calculation. 
However, our analysis excluded holidays and Christmas week from the calculation.
NTSB is finalizing a sublease agreement with the Department of Homeland Security (DHS), which is expected to rent approximately one-third of the classroom space beginning July 1, 2008. We estimate that this would help increase utilization of classroom space in fiscal year 2008 to 24 percent.
NTSB is undertaking efforts to increase utilization of the training center’s large area that houses wreckage used for instructional purposes, including the reconstructed wreckage of TWA flight 800, by seeking to acquire additional wreckage.
NTSB considered moving certain staff from headquarters to the training center, but halted these considerations upon subleasing all of the training center’s available office space.
NTSB decreased personnel expenses related to the training center from about $980,000 in fiscal year 2005 to $470,000 in fiscal year 2007 by reducing the center’s full-time equivalents from 8.5 to 3 over the same period.
As a result of these efforts, from fiscal year 2005 to 2007, training center revenues increased 29 percent while the center’s overall deficit decreased by 41 percent. (Table 2 shows direct expenses and revenues for the training center in fiscal years 2004 through 2007.) In fiscal year 2007, training center revenues nearly covered the center’s operating expenses, not including lease costs. However, the salaries and other personnel-related expenses associated with NTSB investigators and managers teaching at the training center, which it would be appropriate to include in training center costs, are not included. NTSB officials told us that they believe the investigators and managers teaching at the training center would be teaching at another location even if the training center did not exist. In 2006, we recommended that NTSB develop a full cost accounting system that would allow it to calculate these expenses. 
However, even at the 24-percent utilization rate for fiscal year 2008 that we estimate would result from the DHS sublease, the training center classroom space would still be underutilized. If NTSB does not finalize this agreement, we estimate that only 15 percent of classroom space would be utilized during the fiscal year. While we do not expect any classroom space ever to be 100 percent utilized, we believe a 60 percent utilization rate for training center classrooms would be reasonable, based on our knowledge of similar facilities. Without a functional business plan, NTSB lacks a comprehensive strategy to address these challenges. Compliance with the Federal Information Security Management Act (FISMA) What an Independent Auditor Found In June 2007, NTSB reported that its information security program was a prior year material weakness that had not yet been corrected. An independent FISMA evaluation completed in September 2007 assessed NTSB’s actions to address recommendations in prior year FISMA reports. The independent auditors reported that while NTSB continues to be in material noncompliance with FISMA, it had taken substantive corrective actions to address the material information security weaknesses identified in prior FISMA reports issued by the Department of Transportation, Office of Inspector General. Overall, the independent auditor reported that the corrective actions it observed, whether underway or planned, would further strengthen NTSB’s information security program if implemented in a timely and effective manner. The assessment completed in September 2007 found that NTSB met two requirements of FISMA: 1) having in place policies and procedures to reduce risks to an acceptable level and 2) ensuring that the agency has adequately trained its personnel in IT security practices. 
However, NTSB partially met or did not meet FISMA and NIST requirements in the following six areas: 1) providing periodic assessments of risk, 2) documenting policies and procedures based on risk assessments, 3) developing and maintaining an IT security program, 4) periodically testing security controls, 5) carrying out remedial actions, and 6) having in place plans and procedures for continuity of operations. What an Independent Auditor Recommended Assure that the Chief Information Officer monitors all key corrective actions and provides necessary funding and human resources to accomplish these actions so that no further delays occur. Our Assessment of NTSB’s Progress The agency has made progress in implementing this recommendation. For example, the Chief Information Officer has documented prior recommendations and newly identified vulnerabilities in a plan of action and milestones and is monitoring corrective actions to implement the recommendations and mitigate the vulnerabilities. Nevertheless, NTSB needs to take further actions to meet FISMA, OMB, and NIST guidance in the following four areas to help ensure an effective information security program: Risk assessments: Agencies are required to periodically assess the harm that could result if their information and information systems suffered unauthorized access, use, disclosure, disruption, modification, or destruction. NTSB completed a risk assessment of its general support system in February 2008. The general support system is an interconnected set of information resources, and it supports the agency’s two major applications. In addition, a contract has been awarded to complete the risk assessments for the two major applications (the Accident Investigation System and the Lab Environment System); the agency plans to complete both assessments by the end of September 2008. 
Until it assesses the risks associated with these two applications, NTSB cannot determine that the controls it has implemented for them cost-effectively reduce risk to an acceptable level. Information security planning: To ensure effective security protection of information resources, agencies must develop plans describing how they will provide security for their systems, networks, and facilities. According to NIST, the security plan is to provide, among other things, an overview of the security requirements of the system and describe the controls that are in place or planned for meeting those requirements. NTSB has completed the security plan for the general support system, but security plans for its two major applications are not scheduled to be developed until after April 2008. Until these plans are completed, NTSB will not have in place a documented, structured process for adequate, cost-effective security protection for these systems. Periodic testing: Information security policies, procedures, practices, and controls should be tested periodically to ensure their effectiveness. These tests and evaluations should be conducted at least annually and include testing of the management, operational, and technical controls of every system identified in the systems inventory. In 2007, NTSB hired a contractor to perform a security test and evaluation of its general support system. The contractor identified 113 information security vulnerabilities, which collectively increased the risk of unauthorized disclosure and modification of agency information. NTSB has documented these vulnerabilities in a plan of action and milestones. According to NTSB officials, they have resolved many of the vulnerabilities and are currently addressing the remaining ones. 
Because NTSB has not finished addressing the vulnerabilities identified in the security test and evaluation of its general support system, the agency cannot ensure that the controls it has in place are commensurate with an acceptable level of risk. Continuity of operations plan: To ensure that, in the event of an emergency, interim measures are available to restore critical systems, including arrangements for alternative processing facilities in case the usual facilities are significantly damaged or cannot be accessed, agencies must develop, document, and test contingency plans and procedures. Testing the continuity plan is essential to determining whether plans will function as intended in an emergency. A contingency plan for the general support system is under review by agency officials; and, according to these officials, this contingency plan also supports its two major applications and is part of the overall agency continuity of operations plan. However, the plan has not yet been approved or tested. Without an approved plan that has been tested, NTSB has limited assurance that it will be able to protect its information and information systems and resume operations promptly when unexpected events or unplanned interruptions occur. What an Independent Auditor Found The independent auditor identified several weaknesses in NTSB’s access controls. Specifically, NTSB did not promptly remove system access privileges for 28 individuals who had left the agency, was unable to provide documentation to support the original access granted to employees in most instances, did not have a process to determine the specific access authorities assigned to users for the general support system, had not performed the required annual review of users’ access authorities for the general support system, and did not implement a control to require the system to automatically disable inactive accounts after a period of non-use. 
The independent auditor noted that as a result of these weaknesses, the agency did not effectively implement the control processes required in its policies and in NIST guidance. What an Independent Auditor Recommended The independent auditor made five recommendations to improve access controls at NTSB.
1. Take immediate action to remove the access authorities from all NTSB systems for the 28 personnel who are no longer employed by or working for NTSB. Strengthen procedures for removing users' access for interns, contractors, and executive training personnel who leave the agency.
2. Maintain documentation supporting the initial access granted to a user.
3. Develop a process to identify the specific systems, and within these systems, the specific access authorities granted to each general support system user, to enable users' supervisors and system owners to properly analyze and complete the annual recertification of users' access authorities.
4. Develop a more detailed operational procedure to guide system security officers and system owners in the process of recertification of users. This should include: (1) specific dates for the review, (2) requirements that documentation be retained to show the recertification by the users' supervisors, and (3) actions that system security officers should take to remove or modify a user's access to the system, based on the review.
5. Implement a control to automatically suspend an account after a period of nonuse, as required.
Our Assessment of NTSB's Progress NTSB has taken important steps to improve the controls that safeguard access to its systems, but has not completed actions on all related recommendations. Specifically, NTSB removed the accounts of 28 personnel who left the agency. The agency has procured and in some cases begun to implement automated software tools to help implement recommendations related to granting, removing, and recertifying users' access permissions.
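The inactive-account control described in the fifth recommendation can be illustrated with a minimal sketch. The 90-day threshold, user names, and account records below are invented for illustration and are not NTSB policy values or systems.

```python
from datetime import date, timedelta

# Hypothetical inactivity threshold; an agency would set this by policy.
INACTIVITY_LIMIT = timedelta(days=90)

# Invented account records for demonstration.
accounts = [
    {"user": "asmith", "last_login": date(2008, 1, 5)},
    {"user": "bjones", "last_login": date(2007, 6, 12)},
    {"user": "cdoe",   "last_login": date(2007, 12, 30)},
]

def accounts_to_suspend(accounts, today):
    """Return users whose last login is older than the inactivity limit."""
    return [a["user"] for a in accounts
            if today - a["last_login"] > INACTIVITY_LIMIT]

print(accounts_to_suspend(accounts, today=date(2008, 3, 1)))  # ['bjones']
```

In practice such a check would run automatically against the authentication system's logs rather than a static list, and flagged accounts would be suspended pending review.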
However, agency officials do not expect these tools to be fully implemented until fiscal year 2008. Furthermore, NTSB has not yet completed identifying, for each system, the specific access permissions for each user and has not yet completed implementing a control to automatically suspend an account after a period of nonuse. What an Independent Auditor Found The independent auditor determined that NTSB did not comply with OMB requirements for implementing provisions of the Privacy Act. OMB Memorandum M-03-22 requires an agency to conduct privacy impact assessments for electronic information systems and collections and to make these assessments available to the public. The review found that NTSB had not issued sufficient written guidance in this area and had not conducted a privacy impact assessment of its information systems. In addition, the agency is required to report annually to OMB on compliance with sections 207 and 208 of the E-Government Act. NTSB did not have any guidance available in this area and had not issued the required annual reports. Furthermore, NTSB did not conduct an OMB-required review of its privacy policies and processes to ensure it has adequate controls to prevent the intentional or negligent misuse of or unauthorized access to personally identifiable information. What an Independent Auditor Recommended Assure actions are taken to meet the requirements of the Privacy Act and the requirements contained in related OMB memoranda and to update the plan of action and milestones to reflect the current status of NTSB actions in these areas. Our Assessment of NTSB's Progress The agency has updated its plan of action and milestones to reflect the status of its corrective actions to implement the requirements of the Privacy Act. In addition, agency officials have recently taken action to develop a formal privacy program; however, work remains before it is fully compliant with the requirements of the Privacy Act.
For example, NTSB completed privacy impact assessments on two of its public-facing applications and stated that it plans to complete assessments for other applications and systems, such as the accident investigation system. Furthermore, the agency is currently drafting a Systems of Records Notice, as required by OMB, which will, among other things, inform the public of the existence of records containing personal information and give individuals access to those records. The agency expects to have the Systems of Records Notice finalized in June 2008. Moreover, NTSB recently awarded a contract to a vendor to develop specific training for its employees on Privacy Act requirements. The agency expects this training to be available in June 2008.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Transportation Safety Board (NTSB) plays a vital role in advancing transportation safety by investigating accidents, determining their causes, issuing safety recommendations, and conducting safety studies. To support its mission, NTSB's training center provides training to NTSB investigators and others. It is important that NTSB use its resources efficiently to carry out its mission. In 2006, GAO made recommendations to NTSB in most of these areas. In 2007, an independent auditor made information security recommendations. This testimony addresses NTSB's progress in following leading practices in selected management areas, increasing the efficiency of aspects of investigating accidents and conducting safety studies, increasing the utilization of its training center, and improving information security. This testimony is based on GAO's assessment of agency plans and procedures developed to address these recommendations. NTSB has made progress in following leading management practices in the eight areas in which GAO made prior recommendations. For example, the agency has improved communication from staff to management by conducting periodic employee surveys, which should help build more constructive relationships within NTSB. Similarly, the agency has made significant progress in improving strategic planning, human capital management, and IT management. It has issued new strategic plans in each area. Although the plans still leave room for improvement, they establish a solid foundation for NTSB to move forward. However, until the agency has developed a full cost accounting system and a strategic training plan, it will miss other opportunities to strengthen the management of the agency. NTSB has improved the efficiency of activities related to investigating accidents and tracking the status of recommendations. 
For example, it has developed transparent, risk-based criteria for selecting which rail, pipeline, hazardous materials, and aviation accidents to investigate at the scene. The completion of similar criteria for marine accidents will help provide assurance that NTSB is managing its resources in a manner to ensure a maximum safety benefit. Also, it is in the process of automating its lengthy, paper-based process for closing out recommendations. Although NTSB has increased the utilization of its training center--from 10 percent in fiscal year 2006 to a projected 24 percent in fiscal year 2008--the classroom space remains significantly underutilized. The increased utilization has helped increase revenues and reduce the center's overall deficit, which declined from about $3.9 million in fiscal year 2005 to about $2.3 million in fiscal year 2007. For fiscal year 2008, NTSB expects the deficit to decline further to about $1.2 million due, in part, to increased revenues from subleasing some classrooms starting in July 2008. However, the agency's business plan for the training center lacks specific strategies to achieve further increases in utilization and revenue. NTSB has made progress toward correcting previously reported information security weaknesses. For example, in an effort to implement an effective information security program, the agency's Chief Information Officer is monitoring corrective actions and has procured and, in some cases, begun to implement automated processes and tools to help strengthen its information security controls. While improvements have been made, work remains before the agency is fully compliant with federal policies, requirements, and standards pertaining to information security, access controls, and data privacy. In addition, GAO identified new weaknesses related to unencrypted laptops and excessive user access privileges. Agency officials attributed these weaknesses to incompatible encryption software and a mission need for certain users.
Until the agency addresses these weaknesses, the confidentiality, integrity, and availability of NTSB's information and information systems continue to be at risk.
DCIA was enacted by the Congress, in part, to collect nontax debts delinquent more than 180 days by referring such debts to Treasury or a Treasury-approved debt collection center for cross-servicing. FMS is the only Treasury-approved governmentwide debt collection center. After receiving a debt from a referring federal agency, FMS generally keeps the debt for 30 days at its Debt Management Operations Center. During this time, FMS is to send a letter demanding payment to the debtor. An in-house FMS collector may attempt to contact the debtor to obtain payment in full or secure payment through other options, including compromise. If the debt has not been collected 20 days after the date of the demand letter, FMS is to report the debt to TOP if the referring agency has not already done so. If the referred debt remains uncollected after it has been at FMS for 30 days, FMS typically sends the debt to one of its five PCA contractors. The PCA contractor that receives the debt initially—the primary PCA contractor—is generally given 270 days from the date it receives the debt from FMS to collect or resolve the debt. If the primary PCA contractor is unable to collect or resolve the debt, it sends the debt back to FMS. FMS then typically sends the debt to another PCA contractor, the secondary PCA contractor for the debt. The secondary PCA contractor is also given 270 days from the date it receives the debt from FMS to collect or resolve the debt. FMS requires its PCA contractors to attempt to locate debtors, send demand letters, and attempt to obtain full payment before compromising any debt. FMS may refer debts to DOJ for litigation and enforced collection at any time.
Debts that are returned uncollected to FMS from its secondary PCA contractors are to be either retained by FMS for additional collection action or returned to the referring agencies. According to the Federal Claims Collection Standards, federal agencies must terminate all collection action before closing out a delinquent nontax debt and must report certain closed-out debts to IRS. Federal agencies are required to report annually in their TRORs on the status of their nontax debts. TRORs are FMS's only comprehensive means of collecting information on the federal government's nontax debt portfolio, including debts written off, closed out, and reported to IRS. TRORs are also used to collect information on nontax debts delinquent more than 180 days to help FMS monitor federal agencies' implementation of DCIA. FMS summarizes the information in the federal agencies' TRORs annually in its Report to the Congress on U.S. Government Receivables and Debt Collection Activities of Federal Agencies. OMB assists the President by developing governmentwide policies for the effective and efficient operation of the executive branch. As such, OMB establishes credit management policy for the federal government, including setting standards for extending credit, managing lenders participating in guaranteed loan programs, servicing nontax receivables, and collecting delinquent nontax debts. In addition, OMB is responsible for reviewing federal agencies' policies and procedures related to credit programs and debt collection activities. To address our objectives, we interviewed FMS officials and reviewed pertinent FMS documents and reports to obtain an understanding of FMS's policies and procedures for nontax debts that are returned uncollected to FMS by its PCA contractors and for closing out uncollectible nontax debts and reporting such debts to IRS as income to the debtor.
We also reviewed applicable federal regulations and guidance for federal nontax debt collection, close-out, and IRS reporting, including the Federal Claims Collection Standards, OMB Circular A-129, and IRS instructions for reporting closed-out debts. In addition, we obtained and analyzed FMS's cross-servicing database for the period from inception of the cross-servicing program in fiscal year 1996 through February 28, 2003, to determine what collection actions in-house FMS collectors performed on debts that had been returned uncollected from its PCA contractors and whether the in-house FMS collectors properly identified all uncollected debts that could be reported to IRS, including amounts that had been forgiven through compromise. A scope limitation prevented us from using statistical sampling techniques to determine whether compromises made by in-house FMS collectors were justified, supported, and reported to IRS. FMS's cross-servicing database did not identify all forgiven amounts resulting from compromise agreements made by in-house FMS collectors; the database identified forgiven amounts only for in-house FMS agreements if the compromised amount had been paid in full and the debt settled. The database did not include the forgiven amounts for in-house compromise agreements that were active but had not yet been settled. We did use statistical sampling techniques to select from FMS's PCA cross-servicing database 54 debts that had been compromised by FMS's PCA contractors from October 1, 2002, through February 28, 2003. Using electronic and hard-copy information provided by FMS for the selected compromised debts, we determined whether the compromises were justified, supported, and reported to IRS. We projected the results from our sample of compromises to the population from which the sampled items were drawn. (App. I contains additional information on the sampling method.)
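The simple random sampling approach described above can be sketched in outline. The population size, random seed, and review result below are invented illustration values; the actual sampling plan is described in appendix I of the report.

```python
import random

# Hypothetical population of compromised-debt records (IDs 1..500).
population = list(range(1, 501))
rng = random.Random(42)              # fixed seed so the sketch is reproducible
sample = rng.sample(population, 54)  # draw 54 debts without replacement

# Suppose a file review finds some sampled compromises lack support;
# the sample rate is then projected to the population (value assumed here).
unsupported_in_sample = 6
projected_rate = unsupported_in_sample / len(sample)
print(len(sample), round(projected_rate, 3))
```

A real projection would also carry a confidence interval around the estimated rate, since a 54-item sample leaves sampling error that point estimates alone do not convey.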
In addition, we interviewed FMS and OMB officials about the extent to which their respective agencies monitor and report on federal agencies governmentwide regarding identification and reporting of closed-out debts to IRS. We also obtained and analyzed TRORs for all 24 CFO Act agencies to determine the nontax debt close-out and IRS reporting information for calendar year 2002. To determine whether information in FMS’s cross-servicing database was reliable, we reviewed documentation provided by FMS supporting reliability testing performed by FMS and its contractors on the database. In addition, we performed electronic testing of specific data elements in the database that we used to perform our work. Based on our review of FMS’s documents and our own testing, we concluded that the data elements used for this report are sufficiently reliable for the purpose of the report. We performed our work from October 2002 through August 2003 in accordance with U.S. generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of the Treasury and the Director of OMB or their designees. The Commissioner of FMS provided Treasury’s comments, which are reprinted in appendix II. On October 21, 2003, staff from OMB provided us with OMB’s oral comments on the draft. Treasury’s and OMB’s comments are discussed in the Agency Comments and Our Evaluation section of this report and are incorporated in the report as applicable. As of February 28, 2003, FMS had approximately $6.6 billion of debts in cross-servicing. More than half of these debts had been returned uncollected by FMS’s secondary PCA contractors and were being kept in TOP for passive collection. Passive collection entailed no further collection action other than minimal efforts through offsets, and certain debts in passive collection were not eligible for such offsets. 
In addition, FMS did not review certain uncollected debts that FMS returned to the referring agencies to determine whether all collection activity had been performed on the debts, including whether FMS should close out and report the debts to IRS on behalf of the agencies. Further, certain debts that were not in passive collection or returned to referring agencies were kept in inactive status where no collection activities, including referral to TOP, were performed. Consequently, opportunities for maximizing collections or other recoveries were lost. When debts were returned from secondary PCA contractors, FMS simply kept most of them in TOP, where they largely lay dormant without any review to determine the next best course of action to improve collections. For fiscal years 2000, 2001, and 2002, FMS kept about $2.6 billion of uncollected nontax debts returned from its secondary PCA contractors in TOP for passive collection. As of February 28, 2003, debts retained in TOP for passive collection totaled about $3.7 billion and, as shown in figure 1, represented 56 percent of the approximately $6.6 billion of debts that were at FMS for cross-servicing at that time. Through February 28, 2003, FMS had collected only about $9 million on debts in passive collection through offsets, which was the only collection tool being used to collect these returned debts. FMS did not have written procedures for reviewing debts kept in TOP for passive collection. It is important to note that FMS officials stated that because of system limitations, FMS did not identify specific debts that were in TOP for passive collection. However, we were able to identify debts in TOP for passive collection using off-the-shelf database analysis software. 
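Identifying passive-collection debts with off-the-shelf database software, as described above, amounts to filtering the debt records on a few status fields. The table layout, column names, and sample rows below are assumptions for illustration only; FMS's actual cross-servicing schema is not described in this report.

```python
import sqlite3

# Build a tiny in-memory stand-in for a cross-servicing table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE debts (
    debt_id INTEGER, balance REAL, in_top INTEGER,
    returned_by_secondary_pca INTEGER, active_collection INTEGER)""")
conn.executemany("INSERT INTO debts VALUES (?, ?, ?, ?, ?)", [
    (1, 12000.0, 1, 1, 0),  # in TOP, returned by secondary PCA, no active work
    (2,  5000.0, 1, 0, 1),  # in TOP but being actively worked
    (3,  8000.0, 0, 1, 0),  # returned but not in TOP (inactive)
])

# "Passive collection": held in TOP after return from a secondary PCA
# contractor, with no other collection activity under way.
rows = conn.execute("""
    SELECT debt_id, balance FROM debts
    WHERE in_top = 1 AND returned_by_secondary_pca = 1
      AND active_collection = 0""").fetchall()
print(rows)  # [(1, 12000.0)]
```

The point of the sketch is that no special-purpose system feature is needed: once the status fields exist, a single query isolates the passive-collection segment of the portfolio.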
Certain nontax debts kept in TOP for passive collection warrant additional review to determine the next best course of action to maximize collections or other recoveries, such as those possible through administrative wage garnishment (AWG) or reporting closed-out debts to IRS. For example, DCIA authorized federal agencies to use AWG to collect delinquent nontax debts. FMS can perform AWG on behalf of other federal agencies as part of cross-servicing, although only on behalf of agencies that have authorized FMS to do so. FMS began using AWG with the assistance of its PCA contractors during fiscal year 2002. Because most of the debts in TOP for passive collection were returned to FMS from PCA contractors before any agencies had authorized FMS to use AWG on their behalf, most debts in TOP for passive collection have not yet been assessed for AWG collection opportunities. Further, as of our fieldwork completion date, only four federal agencies had authorized FMS to perform AWG on their behalf. However, FMS expects additional agencies to provide such authorization in the future. In addition, about $449 million of nontax debts in TOP for passive collection as of February 28, 2003, will not be collected through offset because the statutory and regulatory 10-year limitations for offsets have expired for those debts. According to FMS officials, FMS's cross-servicing system did not remove debts from TOP when the debts reached the 10-year limitation, so such debts were not evaluated for possible close-out and reporting to IRS. Certain other debts in TOP for passive collection are also unlikely to yield any collections through offsets—those for which we determined the debtors' Taxpayer Identification Numbers (TINs) were invalid or belonged to deceased individuals or cases in which the debtors were bankrupt. Specifically, we identified about $24 million of delinquent nontax debts for which the debtors' TINs were invalid.
In addition, using the Social Security Administration’s (SSA) Death Master File, we identified over 2,500 nontax debts totaling about $18 million with TINs that belonged to reportedly deceased debtors, including one with a referred balance of approximately $4 million. This debt had been in TOP since November 2001 with no collections through offsets. We also identified 69 delinquent Medicare debts belonging to the Department of Health and Human Services (HHS) totaling about $12 million that were being held in TOP after return from secondary PCA contractors for which FMS’s cross-servicing database indicated that the debtors were in bankruptcy. According to FMS officials, when a bankruptcy is recorded in the cross-servicing database for a particular debt, the cross-servicing system marks the debtor as bankrupt for all debts associated with that debtor but does not remove them from TOP. In-house FMS collectors typically removed from TOP only the specific debt that they were working even though others had been flagged as belonging to the same bankrupt debtor. As a result of our analyses and inquiries, FMS has initiated a review of debts in TOP to identify those beyond the statutory and regulatory 10-year limitations for offsets. As of April 2003, FMS had identified over 7,300 such debts, totaling about $463 million (an increase of $14 million over the $449 million of debts we identified as of February 28, 2003). An FMS official stated that these debts would be removed from TOP and evaluated for possible close-out and reporting to IRS as income to the debtors. The official also stated that FMS would develop a process for routinely identifying such debts. In addition, FMS officials stated that FMS will revise its policies and procedures so that collectors will be instructed to review the debtor and all associated nontax debts whenever a bankruptcy is discovered for a debt and determine debts that should be removed from TOP. 
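The Death Master File analysis described above is, at bottom, a match of debtor TINs against a set of TINs belonging to reportedly deceased individuals. The records, TIN values, and balances below are invented for illustration; they are not drawn from FMS or SSA data.

```python
# Hypothetical debtor records and a stand-in for SSA's Death Master File.
debts = [
    {"debt_id": 101, "tin": "111-22-3333", "balance": 4_000_000},
    {"debt_id": 102, "tin": "444-55-6666", "balance": 2_500},
    {"debt_id": 103, "tin": "777-88-9999", "balance": 18_000},
]
death_master_tins = {"111-22-3333", "777-88-9999"}

# Debts whose debtor TIN appears in the death file are poor candidates
# for offset and should be reviewed for close-out.
deceased_debts = [d for d in debts if d["tin"] in death_master_tins]
total = sum(d["balance"] for d in deceased_debts)
print(len(deceased_debts), total)  # 2 4018000
```

Using a set for the death-file TINs keeps each lookup constant-time, which matters when the comparison involves millions of records rather than three.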
Finally, FMS officials stated that FMS is in the process of developing a new automated cross-servicing system, called FedDebt. According to FMS officials, once FedDebt is implemented in January 2005, FMS will be able to routinely identify individual debts that are in passive collection. Through February 28, 2003, FMS returned to referring agencies about $1.1 billion of delinquent nontax debts that had been returned uncollected to FMS by secondary PCA contractors during fiscal years 2000, 2001, and 2002. FMS's cross-servicing procedures require that in-house FMS collectors, prior to returning debts to referring agencies, review the collection activity on the debts to determine whether they are eligible to be returned to the referring federal agencies. As part of this review, the cross-servicing procedures require collectors to determine whether the debts should be closed out and reported to IRS by FMS. We found, however, that FMS had summarily returned about 40 percent of the $1.1 billion to referring agencies without first ensuring that the required collection activities had been performed. According to information in FMS's cross-servicing database, in April 2002 FMS had a substantial backlog of debts that had been returned to FMS by secondary PCA contractors over the past several years that were primarily in inactive status, meaning that no collection action was taking place. To eliminate this backlog, FMS used its automated system to summarily return about 41,000 debts totaling approximately $446 million to the referring agencies in April 2002. According to agency procedures and as confirmed by an FMS official, prior to the April 2002 return of the debts to the referring agencies, FMS should have first evaluated these debts to determine whether close-out was appropriate and whether the debts should be reported to IRS.
Our analysis showed that about $97 million of these returned debts met two criteria for being reported by FMS to IRS as income to the debtor: (1) the debts had TINs and (2) the referring agencies had granted FMS authority to report the debts to IRS. Our review of the cross-servicing database showed that FMS continues to face challenges in reviewing uncollected debts returned from secondary PCA contractors. Specifically, as of February 28, 2003, FMS had approximately $80 million of debts in inactive status even though its PCA contractors returned these uncollected debts to FMS during fiscal year 2002. According to an FMS official, the backlog occurred because the automated cross-servicing system did not always identify debts returned to FMS by secondary PCA contractors that required further collection review by in-house FMS collectors. The FMS official stated that FedDebt, when implemented in January 2005, would correct this problem. DCIA gives OMB responsibility for annual reporting to the Congress on any problems regarding federal agency progress in improving policies and standards for closing out debts, and FMS is responsible for the form and content of the TROR, which FMS uses to monitor federal agencies' implementation of DCIA. Neither OMB nor FMS monitored or reported on the extent to which agencies governmentwide closed out debts and reported them to IRS. The TRORs for 24 CFO Act agencies showed that the agencies reported that about $1 billion of the approximately $3.2 billion of nontax debts that were reported closed out by those agencies were reported to IRS as income to the debtors for calendar year 2002. Additionally, the TRORs that the agencies used to report did not disclose why closed-out debts were not reported to IRS and did not include closed-out debts that had been previously classified as currently not collectible (CNC).
These are significant reporting deficiencies because without such information, the TRORs cannot be used to determine the extent to which all eligible debts are closed out and reported to IRS. As a result of inadequate monitoring and reporting of closed-out debts to IRS, opportunities for recovery by reporting closed-out debts to IRS as income to debtors may have been lost. Neither OMB nor FMS officials could specifically explain why certain agencies had reported different amounts for debts closed out and debts reported to IRS. According to an OMB official, OMB does not have a formal process in place to review federal agencies' standards and policies regarding debt collection, including reporting closed-out debts to IRS, and does not monitor the extent to which agencies close out debts and report them to IRS. The OMB official stated that OMB examiners, at their own discretion, might look at how federal agencies are closing out debts as part of the examiners' overall evaluation of the agencies' implementation of the President's Management Agenda. According to the official, OMB has not submitted any reports to the Congress regarding problems with agencies' standards and policies for closing out debts and reporting them to IRS. FMS officials stated that the large difference on the agencies' TRORs between closed-out debts and debts reported to IRS may be attributable to situations involving debts that are not required to be reported to IRS. However, FMS does not require federal agencies to disclose such information in their TRORs. Without such disclosures in the TRORs, it is not possible for FMS, OMB, or any other interested party to determine whether federal agencies are reporting their closed-out debts to IRS accurately and completely. Moreover, the agency TRORs understated the amount of debt closed out during calendar year 2002.
Specifically, we determined and FMS officials acknowledged that the $3.2 billion of debts that were reported closed out by the 24 CFO Act agencies did not include debts previously classified as CNC that were subsequently closed out. This is a significant deficiency in the TROR because CNC debts that are eventually closed out can be substantial. For example, the 24 CFO Act agencies reported about $10.1 billion of CNC debts at the end of calendar year 2002. Without information on whether CNC debts are closed out, the TRORs cannot be used to fully determine the extent to which all debts are closed out and reported to IRS. In spite of these reporting deficiencies, FMS officials stated that FMS does not have any plans to revise the TROR. In addition to taking little action to improve collections for debts that were returned uncollected by PCA contractors, FMS missed certain opportunities to improve overall cross-servicing collections. FMS did not establish effective processes or procedures for identifying debts to forward to DOJ. As a result, FMS had relatively few debts (about $30 million as of February 28, 2003) at DOJ for enforced collection action even though DOJ has been successful in collecting debts through civil litigation in the past. In addition, FMS did not report all eligible debts that had been referred for cross-servicing to TOP, as required by its cross-servicing procedures, and did not report secondary debtors, such as cosigners, to TOP. DOJ serves as the federal government’s “collector of last resort.” When a federal agency, including FMS, cannot collect certain debts administratively, DOJ can litigate the claims and, with judicial oversight, enforce collections by seizing bank, stock, and similar accounts from debtors; seizing and selling debtor-owned real estate and other property; and garnishing a higher percentage of debtors’ wages than AWG under DCIA allows. The benefits of enforced collection are reflected in past DOJ recoveries. 
In its fiscal year 2002 report to the Congress, FMS noted that DOJ collected about $10.9 billion in cash recoveries through civil litigation from fiscal year 1998 through fiscal year 2002. The Federal Claims Collection Standards require federal agencies to promptly refer debts that have a principal balance of at least $2,500 to DOJ when the debts cannot be collected through either compromise or aggressive collection action and do not meet criteria for suspending or terminating collection action. Accordingly, OMB Circular A-129 requires federal agencies, including FMS as the federal government's central debt collection agency, to refer delinquent debts to DOJ as soon as there is sufficient reason to conclude that full or partial recovery of the debts can best be achieved through litigation. FMS acknowledges that DOJ referrals are an important part of cross-servicing. In its annual report to the Congress on federal agencies' debt collection activities, FMS reported that referrals to DOJ for civil litigation governmentwide decreased significantly over the last 3 fiscal years, from 50,572 debts in fiscal year 2000 to 8,443 debts in fiscal year 2002. As federal agencies continue to implement DCIA and make progress in promptly referring eligible debts that are over 180 days delinquent to FMS for collection action in accordance with the act's requirements, reported decreases in federal agency referrals to DOJ for enforced collection can be expected, as can increases in FMS referrals, because of the shift in collection responsibilities from the agencies to FMS. Generally, a determination that a debt should be referred to DOJ cannot reasonably be made until appropriate cross-servicing collection action has taken place. In working with federal agencies to facilitate implementation of DCIA, FMS emphasizes that referral of a debt to DOJ for enforced collection is a key cross-servicing tool.
FMS makes clear to agencies that it will (1) prepare the forms necessary for referring debts to DOJ, (2) work with DOJ to obtain necessary information from the agencies to litigate the claims, (3) monitor the debts while they are at DOJ, and (4) apply DOJ collections to the debts. FMS, based on consultations with DOJ, established the following conditions for its referral of agency debts to DOJ: (1) the federal creditor agency has authorized FMS to refer its debts to DOJ, (2) the principal amount of the debt is $25,000 or more, (3) there is at least 1 year before the statute of limitations expires, (4) FMS has a debtor address (or other debtor contact information for service-of-process purposes), and (5) FMS has evidence that the debtor has assets or a source of income. As appropriate, FMS also expects to refer debts to DOJ when some, but not all, of the criteria are met. For example, FMS might refer debts less than $25,000 when bank accounts have been identified. In spite of FMS's key role in determining whether debts referred for cross-servicing should be referred to DOJ for enforced collection, only a nominal amount of cross-serviced debt was at DOJ. Specifically, as of February 28, 2003, only about $30 million of the approximately $6.6 billion of debts with FMS for cross-servicing were at DOJ. Moreover, as shown in figure 2, all but about $4 million of the debts FMS had referred to DOJ were referred prior to fiscal year 2000, suggesting that FMS had not emphasized litigation as a collection tool. According to an FMS official, prior to fiscal year 2002, FMS had no specific process to evaluate cross-serviced debts to determine whether recovery could best be achieved by DOJ. Rather, the FMS official stated, FMS relied on the referring agencies to identify delinquent debts to refer to DOJ. In addition, FMS's in-house collectors, using their own discretion during the normal course of their collection activities, could identify specific debts for referral to DOJ.
In fiscal year 2002, FMS, in an effort to increase referrals to DOJ, began performing quarterly queries of its cross-servicing database to identify uncollected debts for referral to DOJ. The queries, while conceptually good, did not cover most of FMS’s cross-servicing portfolio. Rather, they were limited to debts with principal balances of $25,000 or more that were classified as inactive or “special handling.” As of February 28, 2003, FMS had identified nine debts totaling about $4 million for DOJ referral using this smaller segment of its cross-servicing database. Reviewing only debts classified as inactive or “special handling” with principal balances over $25,000 is unlikely to result in many candidates for FMS referral to DOJ because of the nature of these debts and the amounts covered. Specifically, for many of the debts in inactive status, FMS does not have TINs, which are required for DOJ referral, or the debtors are in bankruptcy. Debts classified as “special handling” are debts that collectors have identified as needing special processing because they want to keep the cases at the debt collection center. For example, a collector may place a debt in “special handling” if the collector is in negotiations with the debtor over a payment plan. We applied FMS’s database query method to debts classified as inactive and “special handling.” Our query identified about $198 million of uncollected debts, which represented about 3 percent of the amount in cross-servicing. We determined that the majority of these debts were not good candidates for DOJ referral. Specifically, about $106 million of such debts either (1) lacked agency authorization for referral to DOJ, (2) were involved in bankruptcy proceedings, (3) were beyond the general 6-year statute of limitations for litigation of nonjudgment debts, or (4) lacked TINs. We would consider it reasonable for FMS to query a larger segment of its cross-servicing database. 
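The screening logic described above can be sketched as two passes over the portfolio: FMS's narrow query, followed by the exclusion screens that disqualify most of what the query returns. Field names and status codes here are hypothetical stand-ins for FMS's database.

```python
from datetime import date

def doj_referral_candidates(portfolio, today):
    """Sketch of FMS's quarterly query plus the exclusion screens noted
    in the report. Hypothetical fields; FMS's actual schema differs."""
    # Pass 1: FMS's query -- inactive or "special handling" debts of $25,000 or more.
    queried = [d for d in portfolio
               if d["status"] in ("inactive", "special_handling")
               and d["principal"] >= 25_000]
    # Pass 2: screen out debts with barriers to litigation.
    return [d for d in queried
            if d["agency_authorized"]                           # agency authorized DOJ referral
            and not d["in_bankruptcy"]                          # not in bankruptcy proceedings
            and (today - d["delinquency_date"]).days < 6 * 365  # within 6-year SOL
            and d["tin"] is not None]                           # TIN on file
```

Because inactive and "special handling" debts so often fail the second pass, widening the first pass (for example, to debts held in TOP for passive collection) is what would produce more viable candidates.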
In particular, debts held in TOP for passive collection would seem to be better candidates for DOJ referral because they should have valid TINs and are not supposed to be in bankruptcy. This segment of the cross-servicing debt portfolio is rather large. We determined that FMS had approximately $2.2 billion of debts in TOP with principal balances of at least $2,500 that had been returned from its secondary PCA contractors and that were within the 6-year statute of limitations for litigating nonjudgment debt. Unless FMS starts expanding the scope of its reviews for potential referrals to DOJ, the statute of limitations for these debts will likely expire without any opportunity for enforced collection action. Our assessment of FMS’s database as of February 28, 2003, showed that about $449 million of debts with principal balances of at least $2,500 likely had their statute of limitations expire while they were held in TOP for passive collection. We determined that all of these debts would have been possible candidates for referral to DOJ, since they had been returned from FMS’s secondary PCA contractors with at least 1 year remaining before the statute of limitations expired. FMS also did not routinely consider or act on advice from its PCA contractors regarding referrals to DOJ. Because PCA contractors’ responsibilities include locating debtors and determining whether they have incomes or assets to repay delinquent debts, the PCA contractors would have a reasonable basis for identifying uncooperative debtors who could repay their debts but had refused. 
FMS’s PCA Operations and Procedures Manual requires FMS’s PCA contractors to provide recommendations to FMS on the next collection actions that should be taken on individual debts, such as referral to DOJ for litigation. According to the manual, litigation should be recommended when the PCA contractor believes that the debtor has sufficient assets for debt repayment and that no less costly method of collection would be effective. Our analysis showed that FMS was holding debts totaling about $47 million in TOP for passive collection that had principal balances over $2,500 for which PCA contractors had recommended litigation. We noted that FMS’s cross-servicing database showed that these debts were within the general 6-year statute of limitations for litigating nonjudgment debts and had no apparent barriers to litigation, such as debtor bankruptcy or a deceased debtor. FMS officials stated that FMS does not routinely review recommendations made by its PCA contractors because FMS does not believe such recommendations are reliable. In this regard, we noted that FMS’s PCA Operations and Procedures Manual does not set forth the specific FMS criteria for selecting debts for DOJ referral. In addition, FMS does not tell PCA contractors which creditor agencies have authorized FMS to refer debts to DOJ on the agency’s behalf. It is important to note that only about $3 million, or less than one-tenth of 1 percent, of the approximately $3.9 billion of uncollected debts that were returned to FMS from its secondary PCA contractors during fiscal years 2000, 2001, and 2002 were at DOJ. Moreover, while FMS had referred only limited amounts of cross-serviced debt to DOJ for litigation, FMS lacked a history of its prior referral activity and knowledge of the results of such referrals. 
FMS officials stated that FMS does not use the cross-servicing database to track DOJ referrals; however, we found that the database has status and collection activity codes capable of being used for such tracking. FMS officials acknowledged the need to track all DOJ referrals and stated that FMS will ensure that FedDebt will be able to track all debts that FMS has referred to DOJ.

FMS Did Not Fully Use TOP

FMS’s policies and procedures require in-house FMS collectors to report all eligible debts to TOP early in the cross-servicing process, before sending them to FMS’s PCA contractors. In fiscal year 2000, we reported that FMS did not promptly report eligible debts to TOP as its procedures required. Computer interface problems and errors by in-house FMS collectors were cited as reasons for not promptly reporting all eligible debts to TOP. Problems regarding TOP referrals continue, as FMS’s cross-servicing database as of February 28, 2003, showed that about 1,800 debts that were eligible for TOP, with referred balances totaling about $356 million, were at PCA contractors but had never been put into TOP by FMS’s collectors. We did not identify any apparent factors that would have precluded FMS’s collectors from reporting these debts to TOP. The database showed that the debts were eligible for TOP in that the referring agencies had authorized FMS to report the debts to TOP, the debtors had TINs, the debtors were not in bankruptcy or deceased, and the debts were not over 10 years delinquent. The delays in reporting these debts to TOP were extensive. As of February 28, 2003, about $215 million of these debts with an average of approximately 320 days in cross-servicing were at the secondary PCA contractor without having been sent to TOP. One of the more egregious delays involved a debt referred by an agency in October 2001 for about $43 million that had been in cross-servicing for over 500 days without ever having been reported to TOP. 
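The TOP eligibility screen applied in our analysis, as described above, can be sketched as a single predicate. Again, the field names are hypothetical, not FMS's actual schema.

```python
from datetime import date

def eligible_for_top(debt, today):
    """Illustrative check of the TOP eligibility conditions described
    in the report. Hypothetical field names."""
    return (
        debt["agency_authorized_top"]                            # agency authorized TOP reporting
        and debt["tin"] is not None                              # debtor has a TIN
        and not debt["in_bankruptcy"]                            # debtor not in bankruptcy
        and not debt["deceased"]                                 # debtor not deceased
        and (today - debt["delinquency_date"]).days <= 10 * 365  # not over 10 years delinquent
    )
```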
FMS officials stated that they are aware that eligible debts are not always being reported to TOP. They told us that debts might not be reported to TOP because the cross-servicing automated system does not always identify debts that should be reported. For example, FMS officials stated that if the system failed during its nightly batch processing, the debts that would otherwise have been flagged for reporting to TOP would be missed. FMS officials stated that the cross-servicing system could not go back and routinely identify debts that were missed. Thus, as acknowledged by FMS officials, FMS would have to perform a periodic sweep of the entire database to identify eligible debts that were missed for reporting to TOP. In response to our inquiries, FMS officials stated that FMS will take action to ensure that FedDebt includes features to correct this problem when it is implemented in January 2005. FMS is also not seizing the opportunity to report secondary debtors to TOP. Our analysis of FMS’s cross-servicing database as of February 28, 2003, showed that about $144 million of the approximately $5 billion of cross-serviced debts in TOP had secondary debtors with TINs. According to FMS officials, both the TOP and cross-servicing automated systems are debt-based, rather than based on both debt and debtor. As such, TOP cannot be used to identify all debtors associated with a debt, a problem we noted to FMS about 5 years ago. Even if TOP would accept these data, the cross-servicing system cannot provide them, since it is now capable of sending only one debtor per debt to TOP. FMS officials stated that FMS is in the process of enhancing TOP to accept multiple debtors for a single debt and that the TOP enhancement should be implemented in fiscal year 2004. The officials also stated that FMS will ensure that FedDebt will be capable of referring multiple debtors to TOP when it is implemented in January 2005. 
FMS did not sufficiently ensure that nontax debts forgiven through compromises with debtors by its in-house collectors or its PCA contractors were compromised in an operationally sound manner. FMS’s cross-servicing database as of February 28, 2003, showed that FMS and its PCA contractors forgave a total of at least $51 million of delinquent nontax debts through compromises with debtors during fiscal years 2000, 2001, and 2002. For FMS in-house compromises, this included only those compromise agreements that had been settled and paid in full. The cross-servicing database did not identify forgiven amounts for agreements that were still active or defaulted. In addition, it is unclear whether certain forgiven amounts should have been forgiven or by how much, since FMS’s PCA contractors often did not document why they compromised debts and often did not obtain sufficient support and justification for the compromises. Further, FMS overstated federal agencies’ progress in referring eligible nontax debts for cross-servicing. Specifically, FMS incorrectly reported that agencies had referred 96 percent of their eligible debts for cross-servicing for fiscal year 2002, rather than the actual rate of 79 percent based on our analysis of information provided by FMS. This discrepancy occurred because FMS did not include any debts that were reported as having become eligible for referral for cross-servicing during fiscal year 2002 and did not deduct the amounts of certain debts that it returned to referring agencies during fiscal year 2002. The soundness of FMS’s cross-servicing program can be undermined if certain debtors receive more generous treatment as a result of compromise agreements than other similarly situated debtors. While the amount of debt forgiven as noted above was not substantial, the consistency with which delinquent debts are forgiven and the extent to which federal requirements are adhered to in arriving at such decisions are vital. 
Therefore, it is critically important for FMS to (1) accurately track debt amounts forgiven, (2) obtain documented support for the compromise agreements, and (3) obtain TINs for the debtors. In August 2000, as part of our overall report on FMS’s cross-servicing program, we reported that the majority of FMS compromise agreements we reviewed, including those made by PCA contractors, did not include support for the forgiven amounts. In following up on FMS’s compromise activity, we found that FMS’s cross-servicing system did not track the forgiven amounts for all debts that had been compromised during fiscal years 2000, 2001, and 2002. In addition, FMS’s PCA contractors often did not document why they compromised debts and often did not obtain sufficient support for the compromise agreements, including debtors’ TINs, which are needed to report the forgiven amounts to IRS. The Federal Claims Collection Standards state that federal agencies may compromise debts if (1) the debtor is unable to pay the full amount in a reasonable time, as verified through credit reports or other financial information; (2) collection in full cannot be achieved within a reasonable time by enforced collection proceedings; (3) the cost of collection does not justify the enforced collection of the full amount; or (4) there is significant doubt concerning the government’s ability to prove its case in court. According to the standards, in determining the debtor’s inability to pay, agencies should consider a number of factors as verified by the debtor’s credit report and other financial information, including financial statements that show the debtor’s assets, liabilities, income, and expenses. In addition, FMS’s PCA contract requires its PCA contractors to document their attempts to collect the full amount of delinquent debts and provide justification for compromises. 
In the absence of adequate documentation supporting the PCA contractor’s determination to compromise a debt for a specific amount, FMS cannot determine whether the compromise is reasonable under the Federal Claims Collection Standards. Thus, FMS has no basis to determine whether the government suffered a loss that should not have been incurred as a result of such a compromise. We also determined that the PCA contract does not establish liquidated damages or penalties for a PCA contractor’s failure to document a compromise. As part of our review, we attempted to obtain the forgiven amount for each compromise agreement established by in-house FMS collectors during fiscal years 2000, 2001, and 2002, to determine whether the bases for the forgiven amounts had been supported and documented by FMS’s in-house collectors. However, FMS could not provide us the forgiven amount for each compromise agreement because the cross-servicing system only identifies the forgiven amount for compromise agreements that have been settled in full. Thus, FMS could not provide us the forgiven amounts for compromise agreements that were active or in default. Absent information on forgiven amounts for all compromise agreements, FMS cannot track the extent to which its collectors are compromising agency-referred debts and the bases for the compromises. According to an FMS official, FMS acknowledges that such information is critical to sound cross-servicing operations and, as a result of our inquiries, plans to incorporate the ability to identify and track all forgiven amounts in the FedDebt system. According to FMS officials, in fiscal year 2002, FMS began to review repayment and compromise agreements made by its PCA contractors as part of its annual PCA contractor compliance reviews. 
During these reviews, FMS generally found that all PCA contractors failed to consistently document in their respective debt collection systems the justification for accepting installment payments and compromise agreements. As a result of FMS’s findings, each PCA contractor agreed to conduct training sessions for its collectors or take other corrective actions to help ensure that the collectors properly obtain and document support for forgiven amounts. In spite of FMS’s reviews of the compromise activity of its PCA contractors and related findings pertaining to the lack of documented support for the compromises, we found that PCA contractors were still not providing sufficient support for compromises during the first 5 months of fiscal year 2003. Specifically, we found that 22 percent of the sampled compromised debts had no evidence that the PCA contractor had attempted to obtain a lump sum payment in full or a repayment agreement for the full amount prior to compromising the debt. For example, one debt involved a debtor who offered to pay the full debt balance of approximately $14,000 in installments. However, without explanation, the PCA contractor offered to compromise the debt by 20 percent if the debtor would pay right away. The debtor accepted the compromise offer. Moreover, this PCA contractor encouraged compromise activity prior to exhausting attempts to collect debts in full by sending out pro forma letters to debtors stating that the contractor may be authorized to compromise a portion of their debt should the debtor be in a position to pay the remaining balance. In addition, 72 percent of the compromised debts in our sample did not have supporting documentation indicating why the PCA contractors compromised the debts or the bases used to determine how these debts met Federal Claims Collection Standards criteria for compromise. 
For 81 percent of the compromised debts in our sample, PCA contractors did not have complete financial statements, and for 30 percent of the compromised debts, PCA contractors did not have credit bureau reports to support the compromises. It should be noted that a PCA contractor is required to submit to FMS the debtor’s financial statement and credit bureau report for review only if the compromise percentage of the debt exceeds the compromise percentage that is authorized by FMS or the referring agency. We found that for 36 of the 54 compromised debts in our sample, the PCA contractors compromised up to the amount that was allowed by FMS or the referring agencies. For example, one PCA contractor allowed a debtor to pay approximately $46,000 to settle a debt that had an outstanding balance of about $58,000. The forgiven amount fell within the compromise parameter that had been established by the referring agency. However, the PCA contractor did not (1) attempt to collect payment in full, (2) provide any explanation to justify the compromise, or (3) obtain the debtor’s complete financial statement and credit report. Because the PCA contractor did not exceed the compromise parameter established by the referring agency, it was able to compromise the debt without submitting the debtor’s financial statements and credit report to FMS for review. FMS officials stated that PCA contractors are required to document their attempts to obtain payment in full and justification for offering or accepting a compromise even when the compromise is within agency parameters. According to FMS officials, FMS discussed this issue with its PCA contractors in October 2002 and reiterated the importance of documenting the justification for compromising debts and obtaining financial statements and credit bureau reports to support the compromises. 
FMS officials stated that FMS would continue to look at compromise agreements in future PCA compliance reviews to help ensure that PCA contractors are providing justification and obtaining the financial statements and credit bureau reports necessary for entering into a compromise agreement. Moreover, FMS’s PCA contractors did not always attempt to obtain or report to FMS the TINs of debtors who were granted compromises. Specifically, we found that 17 percent of the compromised debts in our sample did not have TINs because the PCA contractors either did not request the TINs from the debtors or did not report the TINs to FMS. Without TINs for debtors, neither FMS nor the referring agencies were able to report the forgiven amounts of the compromised debts to IRS as income to the debtors. In addition, without a TIN, if the debtor defaults on the compromise agreement, the debt cannot be reported to TOP. According to FMS officials, FMS is continuing to monitor the compromise agreements made by its PCA contractors to help ensure that the contractors obtain and report TINs to FMS. In addition, as a result of our inquiries, FMS plans to issue a technical bulletin to its PCA contractors to remind them of the need to obtain and report TINs. DCIA requires Treasury to report to the Congress each year on the debt collection activities of federal agencies, including FMS as the government’s central debt collection agency. A key performance measure that FMS reports each year is the percentage of debt eligible for cross-servicing that has been referred by federal agencies. In fiscal year 2000, we reported that FMS did not properly calculate this key performance measure because the reported amount of debt referred for cross-servicing was not comparable to the reported amount of eligible debt. Specifically, FMS overstated the debt referral amount by accumulating the referred amount for about 3 and a half years. 
We recommended that FMS revise its reporting of debt amounts referred for cross-servicing to reflect the extent to which eligible debts reported by agencies as of a specific date have been referred to FMS. In its fiscal year 2002 report to the Congress, FMS reported that $7.9 billion, or 96 percent, of the $8.2 billion of eligible debt had been referred for cross-servicing as of fiscal year-end and cited the high referral rate as a notable accomplishment. However, FMS’s reports continue to overstate the progress made in this highly touted cross-servicing performance measure. Specifically, FMS understated debts that were eligible for cross-servicing and overstated debts that had been referred for cross-servicing, which significantly overstated the reported extent to which agencies had referred eligible debts for cross-servicing. As shown in table 1, the governmentwide cross-servicing referral rate at the end of fiscal year 2002 was about 79 percent, rather than 96 percent as reported by FMS. This is a significant difference given that FMS officials consider the cross-servicing program to be fully mature and federal agencies should be referring eligible debts when they are over 180 days delinquent. According to the TRORs for the fourth quarter of fiscal year 2002, federal agencies governmentwide had about $8.5 billion, not $8.2 billion, of debt eligible for referral at the end of the fiscal year. In determining the amount of eligible debt for referral for cross-servicing, FMS inappropriately used the amount of debt eligible for cross-servicing referral at the end of fiscal year 2001. As such, FMS did not include any of the approximately $300 million of debts that were reported as having become eligible for referral for cross-servicing during fiscal year 2002. Thus, FMS understated the amount of eligible debt for fiscal year 2002 by about $300 million. 
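The overstatement comes down to simple arithmetic. A minimal sketch, using the dollar amounts cited in this report (FMS's reported $7.9 billion referred against $8.2 billion eligible, versus the $6.7 billion actually held in cross-servicing at fiscal year-end per FMS's own database against the $8.5 billion eligible per the TRORs):

```python
def referral_rate(referred_billions, eligible_billions):
    """Cross-servicing referral rate as a whole percentage."""
    return round(referred_billions / eligible_billions * 100)

# FMS's reported calculation: $7.9B referred / $8.2B eligible.
print(referral_rate(7.9, 8.2))   # 96
# Corrected: $6.7B held at fiscal year-end / $8.5B eligible per the TRORs.
print(referral_rate(6.7, 8.5))   # 79
```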
In addition, FMS noted in its fiscal year 2002 report to the Congress that the debts reported as referred for cross-servicing did not include those that were no longer being actively collected by FMS. However, FMS generally did not deduct from its reported referral amounts debts that were returned to the referring agencies during fiscal year 2002. According to FMS officials, FMS calculated the referral amount by adding debts that agencies referred to FMS during fiscal year 2002 to the amount of referred debt that FMS held for cross-servicing at the end of fiscal year 2001. FMS officials stated that they typically only reduced the referred debt amount when a debt was returned to the referring agency in the same month that the agency referred the debt to FMS. However, by not deducting the amount for all referred debts that were returned to agencies, the referred debt amount did not reflect the amount of debt that had been referred by agencies and was held by FMS for cross-servicing at fiscal year-end. According to FMS’s cross-servicing database, at the end of fiscal year 2002, FMS held about $6.7 billion of debts that had been referred by federal agencies for cross-servicing. In contrast, FMS reported $7.9 billion of debts referred for cross-servicing in its report to the Congress, an overstatement of about $1.2 billion. FMS continues to have opportunities for enhancing the effectiveness of its cross-servicing of delinquent nontax debt. Efficient and effective processes are needed for timely determining the next appropriate steps for debts that are not collected by FMS’s PCA contractors. As noted in our report, lack of adequate processes and systems weaknesses led to missed opportunities to refer cases to DOJ for enforced collection, failure to use payment offset tools for a large block of debt, and delays in decisions to stop collection efforts on old debt and report it to IRS as income for those who had not paid outstanding amounts. 
In addition, due to the lack of monitoring by FMS and OMB, there is no assurance that all eligible closed-out nontax debt is reported to IRS. These lapses in oversight and systematic administration of unpaid debts, combined with continuing problems in FMS’s PCA contractors’ administration of offers to forgive a portion of outstanding amounts as inducements to pay the remainder, perpetuate our concerns about FMS’s efforts to pursue and collect unpaid nontax debts. To help ensure that all appropriate collection action is taken on debts returned from FMS’s PCA contractors, we recommend that the Secretary of the Treasury direct the Commissioner of FMS to take the following actions:

- Identify debts kept in TOP for passive collection through the implementation of FedDebt and, in the interim, utilize appropriate analytical database software to identify such debts.
- Establish and implement procedures to periodically review debts that are kept in TOP for passive collection to determine the next best course of action for the debts to maximize collections or other recoveries. After all collection activities have been exercised, determine whether debts should be closed out and reported to IRS by FMS, and, if not, promptly return them to the referring agencies.
- Establish and implement procedures to periodically review debts that are kept in TOP for passive collection to determine whether the statute of limitations has expired or any other conditions, such as bankruptcy, exist that would prevent offset of the debts in TOP. Remove debts from TOP that are not eligible for offset and determine whether the debts should be closed out and reported to IRS or returned to the referring agency.
- Establish and implement procedures to periodically monitor debts that are held in inactive status to avoid debt backlogs and to help ensure that all debts are promptly reviewed to determine whether additional collection action or close-out and reporting to IRS is warranted.
Monthly may be a reasonable interval for performing such monitoring. To help ensure that all federal agencies are appropriately reporting closed-out debts to IRS, we recommend that the Secretary of the Treasury direct the Commissioner of FMS to take the following actions:

- Require all federal agencies to disclose in their TRORs any significant differences between the amount of debt reported as closed out and the amount of debt reported to IRS and the reasons for those differences.
- Revise information requirements for the TROR to include the amount of CNC debts that are closed out.

We also recommend that the Director of OMB direct the Controller of OMB’s Office of Federal Financial Management to remind agencies of their obligation to comply with the standards and policies of individual agencies for writing off and closing out debts, as required by the DCIA and OMB Circular A-129; require agencies to initiate actions to review and correct any deficiencies they find during their review; require agencies to report to OMB on their policies, deficiencies, and corrective actions, if any; and report annually to the Congress on the deficiencies, if any, found at the agencies and the progress in resolving any deficiencies found. To increase opportunities for collecting debts, we recommend that the Secretary of the Treasury direct the Commissioner of FMS to take the following actions:

- Revise the database query methodology FMS uses to identify cross-serviced debts for DOJ referral. The methodology should include debts kept in TOP for passive collection and should also incorporate information from FMS’s PCA contractors.
- Incorporate FMS’s criteria for selecting debts for DOJ referral in FMS’s PCA Operations and Procedures Manual.
- Remind PCA contractors of the importance of enforced collection and that their recommendation for next collection action, including litigation, is a critical part of their responsibilities, and inform the PCA contractors of the agencies that have authorized FMS to refer debts to DOJ on the agencies’ behalf.
- Establish and implement procedures to track all debts FMS has referred to DOJ and ensure that the FedDebt system is capable of tracking all debts that FMS refers to DOJ.
- Establish and implement procedures to monitor all debts in cross-servicing to help ensure that debts are promptly reported to TOP, including periodically sweeping the portfolio to send debts to TOP.
- Implement enhancements to the TOP system so that it can accept multiple debtors for a single debt, and ensure that the FedDebt system will be capable of being used to report secondary debtors to TOP.

To help maximize the soundness of the cross-servicing program, we recommend that the Secretary of the Treasury direct the Commissioner of FMS to take the following actions:

- Establish procedures to monitor and track all debt amounts forgiven by in-house FMS collectors and ensure that the FedDebt system identifies the forgiven amounts for all compromise agreements established by in-house FMS collectors.
- Reinforce PCA contractors’ adherence to the compromise requirements set forth in the PCA contract for documenting the attempt to collect the full amount of a debt prior to its compromise.
- Reinforce PCA contractors’ adherence to the compromise requirements set forth in the Federal Claims Collection Standards for obtaining a debtor’s financial information, such as credit reports and complete financial statements, to determine the debtor’s inability to pay the full amount of the debt.
- Reinforce PCA contractors’ adherence to the compromise requirements set forth in the PCA contract for documenting the justification for the compromise of a debt.
- Incorporate liquidated damages or a penalty provision in the next PCA contract for failure of PCA contractors to document a compromise in accordance with contract requirements.
- Remind PCA contractors, through a technical bulletin or other means, of the importance of obtaining debtors’ TINs when compromising debts.
- Fully implement our recommendation made in fiscal year 2000 to revise FMS’s key performance measure on cross-servicing referrals so that the extent to which federal agencies have referred debts to cross-servicing directly corresponds to the eligible debts as of fiscal year-end. Specifically, the debt-eligible amount should reflect the amount reported by federal agencies as of fiscal year-end, and the debt-referred amount should reflect the amount in cross-servicing as of fiscal year-end.

In written comments on a draft of this report, reprinted in appendix II, Treasury’s FMS said that it concurred with most of the findings and that many of the findings and recommendations had already been addressed. FMS stated that enhancements to the systems that serve cross-servicing and PCA functions have resolved a number of issues and that the advent of FedDebt will further improve cross-servicing operations. However, FMS raised a number of points regarding certain of our findings and recommendations that missed the central concerns conveyed in our report and tended to downplay the significance of these concerns. The following discussion highlights and responds to the points FMS raised. FMS stated that the findings in the report did not reflect critical operational issues and only affected a very small percentage of its cross-servicing portfolio. FMS expressed concern that we greatly expanded the scope of our work beyond the parameters that we originally set and focused on a range of opportunities to improve the cross-servicing program that had little or no relation to the reporting of uncollectible debt. We disagree. 
Specifically, referral of debts to DOJ for litigation and TOP for offset, monitoring of the compromise of debts by FMS and its PCA contractors, and identification and reporting of uncollectible debt amounts to IRS are all critical operational issues. Moreover, as discussed in the report, we found several problems related to FMS’s identification and monitoring of debts held in TOP for passive collection, which represented over half the debts in FMS’s $6.6 billion cross-servicing portfolio as of February 28, 2003. These issues, when considered in conjunction with issues we have cited in previous reports, such as limited implementation of administrative wage garnishment (AWG) and lack of independent verification of the accuracy, completeness, and validity of debts reported by agencies as eligible for or excluded from DCIA cross-servicing provisions, raise serious concerns about FMS’s progress in addressing the challenges it faces in implementing the cross-servicing program. We also disagree with FMS’s assertion that we expanded the scope of our review beyond what we conveyed to Treasury at the beginning of the assignment. In our August 2002 letter to the Secretary of the Treasury and our subsequent entrance conference with FMS officials in October 2002, we stated that our objectives were to evaluate (1) actions taken by FMS on uncollected nontax debts returned from its PCA contractors; (2) FMS’s efforts to ensure that eligible uncollectible nontax debts, which federal agencies rely on FMS to report on their behalf to IRS as income to the debtors, are promptly identified and accurately reported; and (3) actions taken, if any, by FMS to ensure that federal agencies are reporting their eligible uncollectible nontax debts to IRS as income to the debtors. As stated in our report, our review addressed these objectives. 
In addition, in performing our work to address these objectives, we identified opportunities for FMS to improve collection of nontax debts through cross-servicing and enhance the soundness of certain operational and reporting facets of its cross-servicing program. In meeting our audit responsibilities, we must inform management of any significant issues identified during our work. FMS suggested that our report unfairly characterizes FMS’s efforts to collect debts through offset as “minimal” and that it criticizes FMS for collection activities that agencies have not delegated to it. FMS stated that TOP is its most effective collection tool, many agencies rely on TOP for the bulk of their collections, and significant collection opportunities could be lost if debts were removed from TOP prematurely. FMS stated that since the cost to collect through TOP is low, it is generally in the best interest of the government to attempt offset for as long as statutorily authorized before terminating collections and discharging the debt. FMS said that it is at creditor agencies’ discretion to leave debts returned from PCA contractors in TOP for passive collection. We agree that for certain debts, TOP can be an effective mechanism for collection, especially when used in conjunction with other debt collection activities. However, passive collection does not entail any collection action other than minimal efforts through TOP. As stated in the report, for debts held in passive collection, TOP is the only collection tool in use. Therefore, collection opportunities from the use of other collection tools, such as litigation and AWG, are lost for these debts. As we state in this report, FMS had collected only about $9 million, or about two-tenths of 1 percent, of the $3.7 billion of debts held in TOP for passive collection as of February 28, 2003.
To increase the opportunities to collect these debts, we recommended that FMS periodically review debts kept in TOP for passive collection to determine the next best course of action for the debts, such as AWG or litigation, to maximize collections or other recoveries. Moreover, we did not recommend in our report that FMS remove debts from TOP prematurely. Rather, we stated that many of the debts kept in TOP for passive collection were unlikely to yield any collections through offsets because they were beyond the 10-year statutory and regulatory limitations applicable to offset or had other barriers, such as bankruptcy, that would prevent offset of the debts. Thus, we recommended that FMS establish and implement procedures to periodically review debts that are kept in TOP for passive collection to determine whether the statute of limitations has expired or any other conditions exist that would prevent offset of the debts, remove debts from TOP that are not eligible for offset, and determine whether the debts should be closed out and reported to IRS or returned to the referring agency. We also disagree with FMS’s implication that we unfairly criticized FMS for not undertaking Form 1099-C reporting activities that agencies have not delegated to it. Our review indicated that it would be highly unlikely for creditor agencies to be able to identify specific debts in cross-servicing that are kept in TOP for passive collection. FMS advised us that because of system limitations, it could not identify specific debts that are merely being held in passive collection after being returned from PCA contractors. However, we were able to readily identify debts in TOP for passive collection through use of off-the-shelf database analysis software.
Without the ability to identify specific debts for which passive collection is the only current ongoing effort, creditor agencies that have not delegated authority to FMS to report uncollectible debts to IRS on their behalf cannot fulfill their responsibility to determine whether a debt should be closed out and reported to IRS or whether other collection action should be taken on it. We consider this to also be the responsibility of FMS. This view is embodied in our recommendations that FMS establish and implement procedures to periodically review debts that are kept in TOP for passive collection to determine the next best course of action and after all collection activities have been exercised, determine whether debts should be closed out and reported to IRS by FMS, and, if not, promptly return them to the referring agencies. In particular and as noted in our report, we would like to reemphasize that our analysis considered only those debts for which federal agencies had given FMS the authority to report uncollectible debt amounts to IRS on the agency’s behalf. For such debts, FMS procedures require its collectors to evaluate them to determine whether close-out would be appropriate and whether the debt amounts should be reported to IRS. FMS agreed with our finding that it had referred only a small amount of debt to DOJ. FMS stated that because of workload constraints, it has attempted to focus its DOJ referral efforts on cases most likely to be successfully collected through litigation. As stated in our report, in an effort to increase referrals to DOJ, FMS did begin to perform quarterly queries of its cross-servicing database to identify uncollected debts for referral to DOJ. However, we found that many of the debts identified through these queries would not be good candidates for referral to DOJ because, among other things, they lacked TINs and were involved in bankruptcy proceedings. 
In addition, these queries did not cover most debts in cross-servicing, including those held in TOP for passive collection that would seem to be better candidates for DOJ referral because they should have valid TINs and are not supposed to be in bankruptcy. FMS also did not routinely consider or act on advice from its PCA contractors regarding referrals to DOJ. Because PCA contractors’ responsibilities include locating debtors and determining whether they have incomes or assets to repay delinquent debts, the PCA contractors would have a reasonable basis for identifying uncooperative debtors who could repay their debts but had refused. FMS did not agree with our recommendation to incorporate liquidated damages in the next PCA contract for failure of PCA contractors to document compromises in accordance with contract requirements. FMS stated that there is no incentive for a PCA contractor to accept a compromise agreement when the debtor has the capability to pay the full amount of the debt. We disagree with FMS’s contention that a PCA contractor would not accept a compromise agreement when the debtor has the capability to pay the full amount of the debt. For example, as stated in our report, we noted that one debtor offered to pay the full debt balance of approximately $14,000 in installments. However, without explanation, the PCA contractor offered to compromise the debt by 20 percent if the debtor would pay right away. Moreover, this PCA contractor encouraged compromise activity prior to exhausting attempts to collect debts in full by sending out pro forma letters to debtors stating that the contractor may be authorized to compromise a portion of their debt should the debtor be in a position to pay the remaining balance. Further, FMS stated that it is questionable whether liquidated damages or a penalty provision in the contract would be legally enforceable.
For many of the debts that we reviewed, we found that the PCA contractors often did not have documentation to justify their rationale for concluding that debtors could not pay the full debt amount or to support the amounts forgiven. In the absence of adequate documentation supporting the PCA contractor’s determination to compromise a debt for a specific amount, FMS cannot determine whether the compromise is reasonable under the Federal Claims Collection Standards. Thus, FMS has no basis to determine whether the government suffered a loss that should not have been incurred as a result of such a compromise. To encourage PCA contractors to obtain adequate documentation supporting their compromises, we continue to believe that FMS should incorporate liquidated damages or a penalty provision in the next PCA contract for failure of PCA contractors to document compromises in accordance with contract requirements. FMS did not offer any legal analysis to support its assertion that a liquidated damage or penalty provision, presumably properly drafted and applied, may not be legally enforceable. Of course, the enforceability of liquidated damages or a penalty provision (e.g., reduction in the number of cases or amount of debt referred to the PCA contractor) would depend on the nature of the provision and the facts of the individual cases. FMS did not agree with our finding related to the cross-servicing referral performance measure. FMS stated that it considered many approaches for reporting agency performance and believed that the method it chose is fair and equitable. FMS said that using only the active balance on a given date (e.g., the end of the fiscal year) would not recognize debts that are paid off, administratively resolved, or determined to be uncollectible and closed out. 
FMS further stated that because CFO Act agencies were required to update their TRORs on a quarterly basis beginning in fiscal year 2003, eligible amounts of debt for calculating the percentages referred are now updated every quarter. This performance indicator is a snapshot of the percentage of debt eligible for referral to cross-servicing that has been referred at a given point in time, such as at year-end. In calculating its debt referral measure for fiscal year 2002, FMS made an unreasonable determination even though it had all the information needed to compute this key performance measure properly. A fundamental premise in calculating this performance indicator is that debts that are paid off, administratively resolved, or determined to be uncollectible and closed out are no longer eligible for referral for cross-servicing and are not subject to further federal collection efforts. As such, FMS should not include these debts in the amount referred for cross-servicing in its annual fiscal year report to the Congress. In addition, as stated in the report, in its fiscal year 2002 report to the Congress, FMS inappropriately used the amount of debt eligible for cross-servicing referral at the end of fiscal year 2001 instead of the end of fiscal year 2002. The net effect of these errors on the calculation was to overstate the amount referred (the numerator of the fraction) by $1.2 billion and to understate the amount available for referral (the denominator of the fraction) by approximately $300 million. Both of these errors had the effect of overstating federal agencies’ progress in referring eligible nontax debts for cross-servicing. In its oral comments, OMB agreed with the report’s findings. In drafting the recommendation, we proposed that OMB review the standards and policies of individual agencies for writing off and closing out debts.
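The combined effect of the two referral-measure errors described above can be sketched with a short calculation. In the sketch below, the fiscal year-end baseline dollar amounts are hypothetical assumptions; only the $1.2 billion numerator overstatement and the roughly $300 million denominator understatement come from the report, so the sketch illustrates the direction of the distortion, not its actual size.

```python
# Sketch of the cross-servicing referral performance measure: a snapshot of
# the percentage of eligible debt that has been referred as of a point in
# time (e.g., fiscal year-end). Baseline amounts below are hypothetical.

def referral_rate(referred, eligible):
    """Percent of debt eligible for cross-servicing referral that has been
    referred, with both amounts measured as of the same point in time."""
    return 100.0 * referred / eligible

# Hypothetical fiscal year 2002 year-end amounts, in billions of dollars.
referred_fye = 5.0   # assumed: debt actually in cross-servicing at year-end
eligible_fye = 10.0  # assumed: debt eligible for referral at year-end

correct = referral_rate(referred_fye, eligible_fye)

# The two errors described in the report: paid-off and closed-out debts left
# in the numerator (+$1.2 billion) and the smaller prior-year eligible amount
# used as the denominator (-$0.3 billion).
flawed = referral_rate(referred_fye + 1.2, eligible_fye - 0.3)

print(f"correct: {correct:.1f}%  as reported: {flawed:.1f}%")
```

Because both errors push the ratio the same way, any choice of baseline amounts yields a reported percentage higher than the correct one, overstating agencies' referral progress.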
In its oral response, OMB was concerned that it did not have the resources to review all federal agencies’ policies and procedures. As such, OMB suggested that we modify our proposed recommendation to instead require OMB to have individual federal agencies review their own policies and procedures for writing off and closing out debts and report to OMB on their policies, deficiencies, and corrective actions, if any, based on such reviews. OMB stated that it will then use these reports from the individual agencies to report to the Congress on the deficiencies, if any, found at the agencies and the progress in resolving such deficiencies. OMB’s suggested approach in resolving this finding is reasonable and fully meets the intent of our proposed recommendation. As such, we have modified our recommendation to OMB accordingly. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform within 60 days of the date of this report. You must also send a written statement to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made over 60 days after the date of this report. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs; the Subcommittee on Financial Management, the Budget and International Security, Senate Committee on Governmental Affairs; the House Committee on Government Reform; the Subcommittee on Government Efficiency and Financial Management, House Committee on Government Reform; and the Commissioner of FMS. Copies will be made available to others upon request. The report is also available at no charge on GAO’s Web site, at http://www.gao.gov. 
If you have any questions regarding this report, please contact me at (202) 512-3406 or Kenneth Rupar, Assistant Director, at (214) 777-5714. Other key contributors to this report are listed in appendix III. To test debts compromised by the Financial Management Service’s (FMS) private collection agency (PCA) contractors from October 1, 2002, to February 28, 2003, we selected a stratified random sample of 54 debts that the PCA contractors compromised from a population of 358 debts in the cross-servicing database with forgiven dollar amounts of at least $2,000 but less than $100,000. We did not review debts with forgiven dollar amounts under $2,000 because they were deemed immaterial. (See table 2.) The following are GAO’s comments on the Department of the Treasury’s letter dated October 20, 2003. 1. In conformity with generally accepted government auditing standards, we provide responsible agency officials and other directly affected parties with an opportunity to review and provide comments on a draft report before it is issued. The language referred to by FMS concerning the report’s status as a draft has been the standard language included on the cover page of GAO reports when they are sent for agency comment. After receiving agency comments, we consider their substance, revise the draft report as appropriate, state in the report whether the agency agreed or disagreed with our findings, conclusions, and recommendations, and issue the report. 2. See our discussion in the Agency Comments and Our Evaluation section. 3. See comment 2. 4. See comment 2. 5. See comment 2. 6. See comment 2. 7. The scope of our work did not include determining whether FMS’s TOP system has sufficient edits and safeguards in place to ensure that no offset is taken for debts over 10 years delinquent. 8. See comment 2. 9.
As stated in our report, a scope limitation prevented us from using statistical sampling techniques to determine whether compromises made by in-house FMS collectors were justified, supported, and reported to IRS. As such, we cannot comment on whether FMS collectors have implemented compromise documentation procedures in accordance with previous GAO recommendations. 10. See comment 2. 11. See comment 2. Other key contributors to this assignment were Richard Cambosos, Matthew Valenta, Ronald Haun, Michelle Philpott, Evan Gilman, and Cathy Hurley.
GAO has previously reviewed facets of Treasury's Financial Management Service's (FMS) cross-servicing efforts. These reviews did not include FMS's handling of nontax debts that were returned to FMS uncollected by its private collection agency (PCA) contractors because FMS officials did not consider the cross-servicing program to be fully mature. During fiscal years 2000, 2001, and 2002, FMS's PCA contractors returned about $3.9 billion of uncollected debts to FMS. This report focuses primarily on (1) actions taken by FMS on uncollected nontax debts returned from its PCA contractors and (2) actions taken, if any, by FMS and the Office of Management and Budget (OMB) to ensure that federal agencies are reporting their eligible uncollectible nontax debts to IRS as income to debtors. Although FMS has made progress in implementing its cross-servicing program and considers it to be fully mature, opportunities exist to improve the program. FMS had not reviewed most of the debts returned to it by its PCA contractors to determine whether any opportunities for collection or other recoveries remained, including those possible from reporting closed-out debts to IRS as income to debtors. For example, about $3.7 billion of the $6.6 billion of debts that were at FMS for cross-servicing as of February 28, 2003, were being kept in the Treasury Offset Program (TOP) for passive collection after they had been returned uncollected to FMS by PCA contractors. Passive collection entailed no further collection action on the part of FMS other than minimal efforts through offset, and collections on debts in passive collection through offset totaled only about $9 million through February 28, 2003. Various problems hindered collections through offset, including the fact that many of the debts were beyond the 10-year statutory and regulatory limitations for offset. 
GAO's analysis also showed that relatively few debts in cross-servicing were being referred to the Department of Justice for more aggressive enforced collection action. This analysis further showed that FMS continues to have problems with debt compromises and the reporting of a key cross-servicing performance measure. Finally, neither FMS nor OMB monitored or reported the extent to which federal agencies governmentwide were closing out all eligible uncollectible debts and reporting those amounts to IRS as income to debtors.
The implications of the current lack of clarity with regard to the term “significant impact” and the discretion that agencies have to define it were clearly illustrated in a report that we prepared for the Senate Committee on Small Business 2 years ago. One part of our report focused on a proposed rule that EPA published in August 1999 that would, upon implementation, lower certain reporting thresholds for lead and lead compounds under the Toxics Release Inventory program from as high as 25,000 pounds to 10 pounds. At the time, EPA said that the total cost of the rule in the first year of implementation would be about $116 million. The agency estimated that approximately 5,600 small businesses would be affected by the rule, and that the first-year costs of the rule for each of these small businesses would be from $5,200 to $7,500. However, EPA certified that the rule would not have a significant impact and therefore would not trigger certain analytical and procedural requirements in the RFA. EPA’s determination that the proposed lead rule would not have a significant impact on small entities was not unique. Its four major program offices certified about 78 percent of the substantive proposed rules that they published in the 2 ½ years before SBREFA took effect in 1996, but certified 96 percent of the proposed rules published in the 2 ½ years after the act’s implementation. In fact, two of the program offices—the Office of Prevention, Pesticides and Toxic Substances and the Office of Solid Waste—certified all 47 of their proposed rules in this post-SBREFA period as not having a significant impact. The Office of Air and Radiation certified 97 percent of its proposed rules during this period, and the Office of Water certified 88 percent. EPA officials told us that the increased rate of certification after SBREFA’s implementation was caused by a change in the agency’s RFA guidance on what constituted a significant impact.
Prior to SBREFA, EPA’s policy was to prepare a regulatory flexibility analysis for any rule that the agency expected to have any impact on any small entities. The officials said that this guidance was changed because the SBREFA requirement to convene an advocacy review panel for any proposed rule that was not certified made the continuation of the agency’s more inclusive RFA policy too costly and impractical. In other words, EPA indicated that SBREFA—the statute that Congress enacted to strengthen the RFA— caused the agency to use the discretion permitted in the RFA and conduct fewer regulatory flexibility analyses. EPA’s current guidance on how the RFA should be implemented includes numerical guidelines that establish what appears to be a high threshold for what constitutes a significant impact. Under those guidelines, an EPA rule could theoretically impose $10,000 in compliance costs on 10,000 small businesses, but the guidelines indicate that the agency can presume that the rule does not trigger the requirements of the RFA as long as those costs do not represent at least 1 percent of the affected businesses’ annual revenues. The guidance does not take into account the profit margins of the businesses involved or the cumulative impact of the agency’s rules on small businesses—even within a particular subject area like the Toxics Release Inventory. We have issued several other reports in recent years on the implementation of the RFA and SBREFA that, in combination, illustrate both the promise and the problems associated with the statutes. For example, in 1991, we examined the implementation of the RFA with regard to small governments and concluded that each of the four federal agencies that we reviewed had a different interpretation of key RFA provisions. 
We said that the act allowed agencies to interpret when they believed their proposed regulations affected small governments, and recommended that Congress consider amending the RFA to require the Small Business Administration (SBA) to develop criteria regarding whether and how to conduct the required analyses. In 1994, we examined 12 years of annual reports prepared by the SBA Chief Counsel for Advocacy and said the reports indicated variable compliance with the RFA—a conclusion that the Office of Advocacy also reached in its 20-year report on the RFA. SBA repeatedly characterized some agencies as satisfying the act’s requirements, but other agencies were consistently viewed as recalcitrant. Other agencies’ performance reportedly varied over time or varied by subagency. We said that one reason for agencies’ lack of compliance with the RFA’s requirements was that the act did not expressly authorize SBA to interpret key provisions in the statute and did not require SBA to develop criteria for agencies to follow in reviewing their rules. We said that if Congress wanted to strengthen the implementation of the RFA, it should consider amending the act to (1) provide SBA with authority and responsibility to interpret the RFA’s provisions and (2) require SBA, in consultation with the Office of Management and Budget (OMB), to develop criteria as to whether and how federal agencies should conduct RFA analyses. In our 1998 report on the implementation of the small business advocacy review panel requirements in SBREFA, we said that the lack of clarity regarding whether EPA should have convened panels for two of its proposed rules was traceable to the lack of agreed-upon governmentwide criteria as to whether a rule has a significant impact. Nevertheless, we said that the panels that had been convened were generally well received by both the agencies and the small business representatives.
We also said that if Congress wished to clarify and strengthen the implementation of the RFA and SBREFA, it should consider (1) providing SBA or another entity with clearer authority and responsibility to interpret the RFA’s provisions and (2) requiring SBA or some other entity to develop criteria defining a “significant economic impact on a substantial number of small entities.” In 1999, we noted a similar lack of clarity regarding the RFA’s requirement that agencies review their existing rules that have a significant impact within 10 years of their promulgation. We said that if Congress is concerned that this section of the RFA has been subject to varying interpretations, it may wish to clarify those provisions. We also recommended that OMB take certain actions to improve the administration of these review requirements, some of which have been implemented. Last year we issued two reports on the implementation of SBREFA. One report examined section 223 of the act, which required federal agencies to establish a policy for the reduction and/or waiver of civil penalties on small entities. All of the agencies’ penalty relief policies that we reviewed were within the discretion that Congress provided, but the policies varied considerably. Some of the policies covered only a portion of the agencies’ civil penalty enforcement actions, and some provided small entities with no greater penalty relief than large entities. The agencies also varied in how key terms such as “small entities” and “penalty reduction” were defined. We said that if Congress wanted to strengthen section 223 of SBREFA, it should amend the act to require that agencies’ policies cover all of the agencies’ civil penalty enforcement actions and provide small entities with more penalty relief than other similarly situated entities. Also, to facilitate congressional oversight, we suggested that Congress require agencies to maintain data on their civil penalty relief efforts.
The other report that we issued on SBREFA last year examined the requirement in section 212 that agencies publish small entity compliance guides for any rule that requires a final regulatory flexibility analysis under the RFA. We concluded that section 212 did not have much of an impact on the agencies that we examined, and its implementation also varied across and sometimes within the agencies. Some of the section’s ineffectiveness and inconsistency is traceable to the definitional problems in the RFA that I discussed previously. Therefore, if an agency concluded that a rule imposing thousands of dollars of costs on thousands of small entities did not trigger the requirements of the RFA, section 212 did not require the agency to prepare a compliance guide. Other problems were traceable to the discretion provided in section 212 itself. Under the statute, agencies can designate a previously published document as their small entity compliance guide, or develop and publish a guide with no input from small entities years after the rule takes effect. We again recommended that Congress take action to clarify what constitutes a “significant economic impact” and a “substantial number of small entities,” and also suggested changes to section 212 to make its implementation more consistent and effective. Two years ago we convened a meeting at GAO on the rule review provision of the RFA, focusing on why the required reviews were not being conducted. Attending that meeting were representatives from 12 agencies that appeared to issue rules with an impact on small entities, representatives from relevant oversight organizations (e.g., OMB and SBA’s Office of Advocacy), and congressional staff from the House and Senate committees on small business. The meeting revealed significant differences of opinion regarding key terms in the statute.
For example, some agencies did not consider their rules to have a significant impact because they believed the underlying statutes, not the agency-developed regulations, caused the effect on small entities. There was also confusion regarding whether the agencies were supposed to review rules that had a significant impact on small entities at the time the rules were first published in the Federal Register or those that currently have such an impact. It was not even clear what should be considered a “rule” under the RFA’s rule review requirements—the entire section of the Code of Federal Regulations that was affected by the rule, or just the part of the existing rule that was being amended. By the end of the meeting it was clear that, as one congressional staff member said, “determining compliance with (the RFA) is less obvious than we believed before.” Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions.
The Regulatory Flexibility Act of 1980 (RFA) requires agencies to prepare an initial and a final regulatory flexibility analysis. The Small Business Regulatory Enforcement Fairness Act of 1996 (SBREFA) seeks to strengthen RFA protections for small entities, and some of the act’s requirements turn on whether a rule has a “significant impact.” GAO has reviewed the implementation of the RFA and SBREFA several times in recent years, with topics ranging from specific provisions in each statute to the overall implementation of the RFA. Although both of these reforms have clearly affected how federal agencies regulate, GAO believes that their full promise has not been realized, and key questions about the RFA remain unanswered. These questions lie at the heart of the RFA and SBREFA, and their answers can have a substantive effect on the amount of regulatory relief provided through those statutes. Because Congress did not answer these questions when the statutes were enacted, agencies have had to develop their own answers, and those answers differ.
The Coast Guard is responsible for 11 statutory missions that are divided into non-homeland security and homeland security missions, as shown in table 1. The Homeland Security Act of 2002 requires that the authorities, functions, and capabilities of the Coast Guard to perform all of its missions be maintained intact and without significant reduction, except as specified in subsequent acts. It also prohibits the Secretary of Homeland Security from reducing “substantially or significantly…the missions of the Coast Guard or the Coast Guard’s capability to perform those missions.” The Coast Guard utilizes aircraft and vessels to conduct its 11 missions. The Coast Guard operates two types of aircraft—fixed-wing (airplanes) and rotary-wing (helicopters), including its new C-27J aircraft–and two types of vessels–cutters and boats. A cutter is any vessel 65 feet in length or greater, having adequate accommodations for crew to live on board. Larger cutters (major cutters), over 179 feet in length, include the National Security Cutter and the High and Medium Endurance Cutters. Cutters from 65 to 175 feet in length include Patrol Cutters such as the Fast Response Cutter and the 110-foot Patrol Boat, among others. In contrast, all vessels less than 65 feet in length are classified as boats and usually operate closer to shore and on inland waterways. As of the end of fiscal year 2015, Coast Guard assets included 61 fixed-wing aircraft, 142 rotary-wing aircraft, 40 major cutters, 205 cutters, and 1,750 boats. Figure 1 shows three of the Coast Guard’s newest assets. The Coast Guard began a 30-year recapitalization effort in the late 1990s to modernize its aircraft and vessel fleets by rebuilding or replacing assets. Figure 2 provides a timeline of key events and related acquisition studies and reports in this recapitalization program, which was formerly known as the Deepwater Program. 
As part of its recapitalization effort, in 1998, the Coast Guard created the Deepwater Program baseline to reflect asset performance levels at that time and to serve as a basis for developing performance goals for the acquisition of new assets that were to replace certain legacy assets. However, a performance gap analysis conducted in 2002 determined that the revised asset mix, as designed by the recapitalization program, would have significant capability gaps in meeting emerging mission requirements following the September 11, 2001, terrorist attacks. As a result, the Coast Guard completed a Mission Needs Statement in 2005 to incorporate the additional capabilities and subsequently updated the annual resource hours needed to meet its increased mission demands. In 2007, based on the 2005 Mission Needs Statement, DHS approved a program of record for all of the Coast Guard’s major acquisition programs at an estimated cost of $24.2 billion. This program of record delineated the specific number of aircraft and vessels the Coast Guard planned to acquire to meet the annual resource hours outlined by the 2005 Mission Needs Statement baseline. Further, as part of its recapitalization efforts, the Coast Guard submits an annual 5-year Capital Investment Plan Report to Congress that includes, among other things, projected funding for capital assets in such areas as acquisition, construction, and improvements. In 2016, the Coast Guard again revised its Mission Needs Statement in response to statutory requirements and committee report language, but this revision states that it was not intended to provide details on the specific assets the Coast Guard needs to meet its mission requirements. 
Further, according to the Coast Guard, the 2016 update to the Mission Needs Statement is to provide a foundation for long-term investment planning that is to culminate with detailed modeling scenarios to evaluate the effectiveness of various fleet mixes, and inform the Coast Guard’s Capital Investment Plan. Since the 2016 revision does not identify specific assets or resource hours necessary to meet the Coast Guard’s mission requirements, the 2005 Mission Needs Statement remains the baseline document outlining the Coast Guard’s mission needs and the resource hours per asset necessary to achieve them. Since fiscal year 2008, the Coast Guard has used the Standard Operational Planning Process for annually developing and communicating strategic commitments and allocating resource hours, by asset type (i.e., aircraft, cutters, and boats), throughout its chain of command for meeting mission responsibilities. As part of the Standard Operational Planning Process, Coast Guard headquarters annually issues a Strategic Planning Direction, which is to be the primary mechanism for allocating asset resource hours and providing strategic direction to field commands. Resource hours are subsequently allocated by asset type at the Area, District, and Sector levels for meeting strategic commitments and executing the 11 statutory missions. After assets are deployed, field unit personnel are to record resource hours used by Coast Guard assets to accomplish missions, such as domestic ice breaking or marine environmental protection operations. These asset resource hours are input into one of two operational reporting databases–the Asset Logistics Maintenance Information System (ALMIS) or the Abstract of Operations System (AOPS). After the data have been entered, the Coast Guard Business Intelligence system is used to extract and combine asset resource hour and performance data each quarter to create Operational Performance Assessment Reports. 
The historical and current-year data on asset operational hours used, by mission, from these reports, as well as Planning Assessments, are to be communicated back to Coast Guard headquarters and incorporated into the Standard Operational Planning Process to inform asset hour allocations in the Strategic Planning Direction for the following year. Since the Coast Guard developed acquisition plans for its Deepwater recapitalization program, many of the assumptions that initially informed these plans, including the 2005 Mission Needs Statement baseline for those assets, have changed and are no longer accurate, as we reported in June 2014 and May 2015. While the Coast Guard is continuing to acquire and deploy new assets each year, the Coast Guard operated assets in fiscal year 2015 below the baseline level of resource hours outlined for these assets in the 2005 Mission Needs Statement. For example, in fiscal year 2015, a mix of new and legacy Patrol Cutters, including new Fast Response Cutters, used 82,233 resource hours of the 174,000 resource hours specified in the 2005 baseline—a 52 percent difference. The asset resource hours used in fiscal year 2015 were below the 2005 baseline level, in part, because not all of the new assets planned as part of the 2005 baseline were deployed and fully operational by fiscal year 2015. In addition, as we have previously reported, the Coast Guard continues to operate many of its legacy assets, which do not always achieve their expected operational capacities. Specifically, some legacy cutters are up to 50 years old and are expected to be in operation for several more years until the replacement cutters can be deployed. We have also reported that the Coast Guard has experienced delays in acquiring some of its planned assets and some of the Coast Guard’s new assets that have been deployed have faced operational challenges. 
Moreover, because of changes in the assumptions underlying the 2005 Mission Needs Statement baseline, it may not accurately reflect the Coast Guard’s current needs; specifically, (1) the planned fleet mix of aircraft and vessels has changed, and (2) the planned operational capacities of these new assets have, in some cases, been revised downward. See appendix I for more information on the Coast Guard asset baselines and actual resource hours used in fiscal year 2015, as well as changes to its planned fleet mix and operational capacities over time. The Coast Guard’s planned aircraft and vessel fleet mix has changed since the 2005 Mission Needs Statement baseline was developed. For example, in 2005, the Coast Guard planned for the acquisition of HC-144 and HC-130 aircraft for its fixed-wing aircraft fleet. However, we reported in March 2015 that the unexpected transfer of C-27J aircraft from the Department of Defense in December 2013 represented a significant change to this aircraft fleet mix. As a result of this change, the Coast Guard decreased its planned acquisition of HC-144 aircraft. In another example, with regard to its aircraft fleet, the Coast Guard initially planned for fixed-wing Unmanned Aerial Vehicles and Vertical Take-Off and Landing Unmanned Air Vehicles in the 2005 baseline, but, as of May 2016, Coast Guard officials stated that these unmanned assets had not yet been acquired. For the major cutter fleet, the Coast Guard had planned for 8 National Security Cutters and 25 Offshore Patrol Cutters to replace the legacy fleet of High and Medium Endurance Cutters in its 2005 Mission Needs Statement baseline. However, Congress recently provided the Coast Guard with funding for a ninth National Security Cutter as part of the Consolidated Appropriations Act, 2016, representing an unanticipated addition to its planned major cutter fleet. 
The expected operational capacities planned for assets in the 2005 Mission Needs Statement baseline have, in several cases, been subsequently revised downward to reflect more realistic and achievable operational targets. For example, regarding fixed-wing aircraft, the Coast Guard originally planned for each HC-144 aircraft to operate 1,200 flight hours per year. However, we reported in March 2015 that the Coast Guard had decided to reduce the HC-144 flight hours from 1,200 hours to 1,000 hours per year due primarily to the high cost of maintaining the aircraft at the 1,200-hour per year pace. For patrol cutters, the 2005 Mission Needs Statement baseline planned for each Fast Response Cutter to operate for 3,000 hours per year. However, the Coast Guard’s April 2016 report to Congress on its capital investments states that the planned resource hours for each Fast Response Cutter is 2,500 hours per year—a reduction of 500 hours per cutter from the 2005 baseline. Further, for major cutters, the Coast Guard’s 2005 baseline planned for each National Security Cutter and Offshore Patrol Cutter to operate at 4,140 resource hours per year—equivalent to 230 days away from home port—using a crew rotation concept. However, in March 2015, we reported that because of certain risk factors, uncertainty exists regarding the Coast Guard’s ability to achieve this operational capacity. We recommended that the Coast Guard specify mitigation actions to effectively address risk factors identified in the report, such as when and how National Security Cutter maintenance requirements could be completed within the 135 days allocated under the crew rotational concept. DHS concurred with the recommendation and, in March 2016, it stated that the Coast Guard was developing various testing plans and would submit a final crew rotation concept plan to Congress by December 2017, in response to requirements in the Coast Guard and Maritime Transportation Act of 2012. 
Moreover, we noted in our March 2015 report that these same risk factors may also affect the planned operational capacity of the Offshore Patrol Cutters, which are still under development. In its simplest form, a business case requires a balance between the concept selected to satisfy mission needs and the resources needed to transform the concept into a set of products, in this case aircraft and vessels. For the past 6 years, we have consistently found that there is a significant difference between the funding the Coast Guard estimates it needs to carry out its program of record for its major acquisitions and what it has traditionally requested and received through annual appropriations. To date, the Coast Guard’s attempts to address this difference by establishing its future fleet’s mission needs within reasonable budget constraints have been unsuccessful. For example, in September 2012, we reported that the Coast Guard had completed two efforts (Fleet Mix Phases One and Two) to reassess the mix of assets that comprised its former Deepwater program, but both efforts used its 2005 Mission Needs Statement and 2007 program of record as the basis of the analysis and did not consider realistic fiscal constraints. In particular, the Coast Guard began Fleet Mix Phase One in 2008 that considered the 2007 program of record to be the “floor” for asset capabilities and quantities and did not impose cost constraints. Consequently, the results were not used as a basis for trade-off decisions. In the second effort, Fleet Mix Phase Two, the Coast Guard analyzed how long it would take to buy the program of record under two different funding constraints: (1) an upper bound of $1.64 billion per year and (2) a lower bound of $1.2 billion per year. However, both scenarios are greater than the Coast Guard’s last four budget requests, indicating the upper bound funding level is unrealistic and the lower bound is optimistic. 
Further, the analyses did not assess options lower than the current program of record. Therefore, neither of these analyses prepared the Coast Guard to make the trade-offs required to develop a solid business case that matched its needed capabilities with anticipated resources. Instead of developing a solid business case, we reported in June 2014 that the Coast Guard is shaping its asset capabilities through the budget process. Because the Coast Guard has faced fiscal constraints in recent years, this approach has led to asset capability gaps. As a result, the Coast Guard does not have a long-term plan that demonstrates how it will maintain today’s service level and meet identified needs. For example, the Coast Guard has already experienced a gap in heavy icebreaking capability and is falling short of meeting current and future major cutter operational hours. While some of these operational capability gaps are being filled through congressional appropriations that exceed Coast Guard budget requests and transfers of assets from other agencies, the Coast Guard is likely to continue to face similar shortfalls and gaps while the Offshore Patrol Cutter fleet, estimated to absorb about two-thirds of the Coast Guard’s acquisition funding from 2018 until 2034, is being built. During this time, the Coast Guard faces other recapitalization needs—such as rebuilding the 87-foot patrol boat fleet and the MH-60 and MH-65 helicopter fleets, and possibly extending the service lives of the 270-foot Medium Endurance Cutters, among many other projects—that it may not be able to fund with its remaining budget. Office of Management and Budget, Department of Homeland Security, and Coast Guard efforts are underway to address these funding gaps, but to date, these efforts have not led to the difficult trade-off decisions needed to create a solid business case and improve the affordability of the Coast Guard’s proposed fleet mix. 
In June 2014, we recommended that the Coast Guard develop a 20-year fleet modernization plan that identifies all acquisitions needed to maintain the current level of service—aircraft and vessels—and the fiscal resources needed to buy the identified assets. We further recommended that the plan consider trade-offs if the fiscal resources needed to execute the plan are not consistent with annual budgets. The Coast Guard concurred with our recommendation, but its response did not fully address our concerns or set forth an estimated date for completion. As of June 2016, the Coast Guard has yet to complete this plan. Without such a plan, it will remain difficult for the Coast Guard to fully understand the extent to which future needs match the current level of resources and its expected performance levels—and capability gaps—if funding levels remain constant. In addition to the 20-year fleet modernization plan, we have made several recommendations in recent years for the Coast Guard to improve its recapitalization business case by, among other things, identifying the cost, capabilities, and quantity and mix of assets needed, as well as the trade-offs necessary to meet fiscal constraints. Specific recommendations include the following: In March 2015, we recommended that the Coast Guard inform Congress of the time frames and key milestones for publishing revised annual flight hour needs for fixed-wing aircraft, as well as the corresponding changes to the composition of its fixed-wing fleet to meet these needs. In September 2012, we recommended that the Commandant of the Coast Guard conduct a comprehensive portfolio review to develop revised baselines that reflect acquisition priorities and realistic funding scenarios. 
In July 2011, we recommended that the Secretary of Homeland Security develop a working group that includes participation from DHS and the Coast Guard’s capabilities, resources, and acquisition directorates to review the results of multiple studies—including Fleet Mix Phases One and Two and DHS’s cutter study—to identify cost, capability, and quantity trade-offs that would produce a program that fits within expected budget parameters. The Coast Guard concurred with these recommendations and is still in the process of addressing them, with the exception of the 2011 recommendation, which it chose not to implement. For example, the Coast Guard is currently conducting a fleet-wide analysis—including aircraft, vessels, and information technology—intended to be a fundamental reassessment of the capabilities and mix of assets the Coast Guard needs to fulfill its missions. The Coast Guard is undertaking this effort consistent with direction from Congress and expects to have it completed in time to inform the fiscal year 2019 President’s Budget. Coast Guard officials stated that their efforts will help them respond to a number of recent legislative mandates, which include the following: Fixed-Wing Aircraft Fleet Mix Analysis: This is to include a revised fleet analysis of the Coast Guard’s fixed-wing aircraft and is due in September 2016. Rotary-Wing Contingency Plan: This plan is to address the planned or unplanned losses of rotary-wing airframes; to reallocate resources as necessary to ensure the safety of the maritime public nationwide; and to ensure the operational posture of Coast Guard units. This plan is due in February 2017. 
Long-Term Acquisition Plan: This plan is to be a 20-year Capital Investment Plan that describes, for the upcoming fiscal year and for each of the 20 fiscal years thereafter, such information as the numbers and types of legacy aircraft and vessels to be decommissioned; the numbers and types of aircraft and vessels to be acquired; and the estimated level of funding in each fiscal year required to acquire the cutters and aircraft, as well as related command, control, communications, computer, intelligence, surveillance, and reconnaissance systems and any changes to shoreside infrastructure. These plans are to be produced every other year to provide an update on the status of all major acquisitions. Mission Needs Statement: On the date on which the President submits to Congress a budget for fiscal year 2019, and every 4 years thereafter, the Commandant is to submit an integrated major acquisition need statement which, among other things, is to identify current and projected gaps in Coast Guard capabilities using specific mission hour targets and explain how each major acquisition program addresses gaps identified in Capital Investment Plan reports to be provided to Congress. Concept of Operations: This document is to be used in conjunction with the Mission Needs Statement as a planning document for the Coast Guard’s recapitalization needs. It is to determine the most cost-effective method of executing mission needs by addressing (1) gaps identified in the Mission Needs Statement, (2) the funding requirements proposed in the 5-year Capital Investment Plan, and (3) options for reasonable combinations of alternative capabilities of aircraft and vessels, to include icebreaking resources and fleet mix. This document is due in September 2016. In May 2016, we reported that Coast Guard headquarters does not provide field units with realistic goals for allocating assets, by mission. 
Rather, headquarters’ allocations of assets in the annual Strategic Planning Directions that we reviewed for fiscal years 2010 through 2016 were based on assets’ maximum performance capacities. For example, the Strategic Planning Directions allocated each Hercules fixed-wing aircraft 800 hours per year, each Jayhawk helicopter 700 hours per year, and each 210-foot or 270-foot Medium Endurance Cutter 3,330 hours per year, irrespective of the condition, age, or availability of these assets. As a result, we found that, as shown in figure 3, the asset resource hours allocated in the Strategic Planning Directions consistently exceeded the asset resource hours actually used by Coast Guard field units during fiscal years 2010 through 2015. For example, in fiscal year 2015, the Strategic Planning Direction allocated a total of 1,075,015 resource hours for field unit assets, whereas the actual asset resource hours used were 804,048, or about 75 percent of the allocated hours for that year. Coast Guard field unit officials we spoke with, and Coast Guard planning documents we reviewed for our May 2016 report, indicated that the Coast Guard is not able to achieve the resource hour allocation capacities set by the headquarters’ Strategic Planning Directions for several reasons, including the declining condition of legacy assets and unscheduled maintenance. Further, our review of Coast Guard planning documents and discussions with field unit officials showed that Operational Planning Directions developed by field unit commands can differ from headquarters’ Strategic Planning Directions. For example, officials from one district told us that, on the basis of their analyses, they determined that their district could realistically use only about two-thirds of the performance capacity hours allocated by the Strategic Planning Direction for boats for one mission. 
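The gap between allocated and used hours in the fiscal year 2015 example above is simple arithmetic. As an illustrative sketch only (this calculation is ours, not part of any Coast Guard planning system), the cited figures can be checked as follows:

```python
# Illustrative check of the fiscal year 2015 figures cited above.
# Values come from the Strategic Planning Direction comparison in the text.
allocated_hours = 1_075_015  # resource hours allocated by headquarters
used_hours = 804_048         # resource hours actually used by field units

utilization = used_hours / allocated_hours
unused_hours = allocated_hours - used_hours

print(f"Utilization: {utilization:.0%}")   # about 75 percent of allocated hours
print(f"Unused hours: {unused_hours:,}")   # hours allocated but not used
```

The roughly 25 percent of allocated hours that went unused is consistent with the field units' reports that maximum performance capacities are not realistic allocation targets.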
In response to our findings, we recommended that the Coast Guard more systematically incorporate field unit input to inform more realistic asset allocation decisions—in addition to asset maximum capacities currently used—in the annual Strategic Planning Directions to more effectively communicate strategic intent to field units. The Coast Guard concurred with our recommendation and stated that it was taking actions to better incorporate field unit input for fiscal year 2017. If implemented as planned, this would meet the intent of this recommendation. In May 2016, we also reported that the Coast Guard does not maintain documentation on the extent to which risk factors have affected the allocation of asset resource hours to missions through its Strategic Planning Directions. For example, Coast Guard officials told us that the Coast Guard conducts a National Maritime Security Risk Assessment every 2 years to inform its asset allocations; however, the Coast Guard does not document how these risk assessments have affected asset allocation decisions across its missions. Coast Guard officials stated that changes made to Strategic Planning Directions’ asset allocations, by mission, are discussed in verbal briefings but it is not their practice to maintain documentation on the extent to which risk factors affect asset allocation decisions. Without documenting this, the Coast Guard lacks a record to help ensure that its decisions are transparent and the most effective ones for fulfilling its missions given existing risks. We recommended that the Coast Guard document how risk assessments conducted are used to inform and support annual asset allocation decisions. The Coast Guard concurred with our recommendation and stated that it will begin to document these decisions in its fiscal year 2017 Strategic Planning Direction. If implemented as planned, this would meet the intent of this recommendation. 
In May 2016, we reported that the Coast Guard is taking steps to improve its asset allocation process. The actions include the following: Improving data quality for resource hours assigned to each mission: Coast Guard guidance states that its field units should report at least one primary employment category, such as one of the 11 statutory missions, for the time an asset is deployed. Coast Guard officials told us that data on resource hours, by mission, for all assets may not be accurate because the Coast Guard does not have a systematic way for field units to (1) record time spent on more than one mission during an asset’s deployment or (2) consistently account for time assets spend in transit to designated operational areas. For example, officials from six of the nine Coast Guard districts we interviewed told us that they generally record one mission per asset deployment, even though each asset’s crew may have performed two or more missions during a deployment. Officials from the remaining three districts told us that if their assets’ crews perform more than one mission per deployment, the crews generally apportion the number of hours spent on each mission performed. Coast Guard officials stated that the resource hour data were accurate enough for operational planning purposes, and that they were in the process of determining how best to account for time spent by assets on multiple missions and in transit in order to obtain more accurate and complete data on the time assets spend conducting each of its missions. For example, in April 2014, the Coast Guard issued instructions to its field units to provide definitions, policies, and processes for reporting their operational activities and also established a council to coordinate changes among the various operational reporting systems used by different field units. 
Tracking how increased strategic commitments affect resource hours available: According to Coast Guard officials, the Strategic Planning Directions’ allocations of certain asset hours in support of strategic commitments have grown from fiscal year 2010 to fiscal year 2016. Headquarters and field unit officials we met with told us that it has become increasingly difficult to fulfill these growing strategic commitments when asset performance levels have generally remained the same or declined in recent years. Further, in February 2015, the Coast Guard Commandant testified before a congressional subcommittee that the Coast Guard’s mission demands continue to grow and evolve and that given the age and condition of some of its legacy assets, the success of future missions relies on the continued recapitalization of Coast Guard aircraft, cutters, boats, and infrastructure. To meet these challenges, the Coast Guard is taking steps to provide more transparency regarding asset resource hours needed to support strategic commitments and the remaining resource hours available to field unit commanders. For example, starting in fiscal year 2015, the Coast Guard began using a new data field to track the time assets spent supporting its Arctic strategy. In conclusion, given that many of the assumptions underlying the Coast Guard’s acquisition plans have changed since 2005 and are no longer accurate, and the importance of ensuring that limited acquisition resources are invested as efficiently and effectively as possible, the Coast Guard should continue to follow through with our recommendations to identify the cost, capability, and quantity of its fleet mix, as well as the trade-offs that would need to be made given fiscal constraints. Furthermore, to ensure that assets are deployed consistent with Coast Guard mission priorities, the Coast Guard should follow through with implementing our prior recommendations to improve its annual resource allocation process. 
Chairman Hunter, Ranking Member Garamendi, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Jennifer Grover at (202) 512-7141 or groverj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Christopher Conrad (Assistant Director), Nancy Kawahara (Analyst-in-Charge), Bryan Bourgault, John Crawford, Tracey Cross, Dominick Dale, Michele Fejfar, Laurier Fish, Eric Hauswirth, Tracey King, Michele Mackin, and Katherine Trimble. Key contributors for the previous work that this testimony is based on are listed in each product. The following figures detail (1) the actual number of asset resource hours utilized in fiscal year 2015 and (2) the expected, planned operational capacity baseline in varying years for each major asset category (fixed-wing aircraft, rotary-wing aircraft, major cutters, and patrol cutters). The 2005 baseline was updated from the 1998 baseline to reflect the changes in the Coast Guard’s mission as a result of the additional homeland security missions it was tasked with after 9/11. The actual number of asset resource hours utilized is generally lower than the baselines for a variety of reasons, including, among other things, the fact that not all assets were planned to be acquired and operational by fiscal year 2015.
Since the terrorist attacks of September 11, 2001, the Coast Guard has been charged with expanded security-related missions. Constrained budgets in recent years have underscored the importance of ensuring that the Coast Guard has the proper mix of assets and that it can effectively allocate these assets to achieve its missions. In recent years, the Coast Guard has begun to deploy new assets and has taken actions to assess what assets it needs to carry out its missions and how to best allocate its current assets. However, the Coast Guard continues to face decisions about what assets it needs and how to best allocate these assets to meet its mission responsibilities. This statement addresses the Coast Guard's (1) mission needs and (2) process for allocating asset resource hours across missions and units. This testimony is based on GAO's May 2016 report on the Coast Guard's allocation of assets and GAO's body of work over the past 6 years on Coast Guard major acquisitions, as well as selected updates obtained in May 2016. For the selected updates, GAO reviewed Coast Guard documentation and analyzed fiscal year 2015 data on Coast Guard asset resource hour utilization, which GAO found to be sufficiently reliable for the purposes of this statement. Since the U.S. Coast Guard developed acquisition plans for its asset recapitalization program, many of the assumptions that initially informed these documents, including its 2005 Mission Needs Statement baseline, are no longer accurate. For example, in March 2015, GAO reported that the Coast Guard received an unexpected transfer of 14 C-27J aircraft from the Air Force, representing a significant change to its aircraft fleet mix. In addition, Congress recently provided the Coast Guard with funding for a ninth National Security Cutter—one more than it had planned for in 2005. 
Further, the Coast Guard has reduced the operational capacities of several assets to reflect more realistic and achievable operational targets. For example, the Coast Guard reduced the operational capacity of the Fast Response Cutter from 3,000 hours per vessel per year to 2,500 hours. GAO has also consistently found that there is a significant difference between the funding the Coast Guard estimates it needs for its major acquisitions and what it has traditionally requested and received. The Coast Guard's attempts to address this difference by establishing its future fleet's mission needs within reasonable budget constraints have been unsuccessful. GAO has made several recommendations for the Coast Guard to improve its recapitalization business case, including that the Coast Guard develop a 20-year fleet modernization plan that identifies all acquisitions needed to maintain the current level of service and the fiscal resources needed to acquire them. The Coast Guard concurred with the recommendation and has actions underway, but has not completed this plan. Given that key changes have taken place since 2005, the Coast Guard should continue to take steps to address GAO's recommendations. GAO reported in May 2016 that the Coast Guard uses the Standard Operational Planning Process to annually allocate asset resource hours to field units for meeting missions, but the headquarters' Strategic Planning Directions used in this process do not provide field units with strategic, realistic goals. Rather, headquarters' Strategic Planning Directions allocate maximum resource hour capacities for each asset. These allocations have consistently exceeded actual asset resource hours used by field units. 
GAO recommended, among other things, that the Coast Guard more systematically incorporate field unit input to inform more realistic asset allocation decisions—in addition to asset maximum capacities currently used—in the annual Strategic Planning Directions to more effectively communicate strategic intent to field units. The Coast Guard concurred with GAO's recommendation and stated that it was taking actions to better incorporate field unit input for fiscal year 2017. GAO is not making any new recommendations in this statement.
The test results we received are misleading and of little or no practical use to consumers. Comparing results for 15 diseases, we made the following observations: (1) each donor’s factual profile received disease risk predictions that varied across all four companies, indicating that identical DNA can yield contradictory results depending solely on the company it was sent to for analysis; (2) these risk predictions often conflicted with the donors’ factual illnesses and family medical histories; (3) none of the companies could provide the donors who submitted fictitious African American and Asian profiles with complete test results for their ethnicities, yet none explicitly disclosed this limitation prior to purchase; (4) one company provided donors with reports that showed conflicting predictions for the same DNA and profile, but did not explain how to interpret these different results; and (5) follow-up consultations offered by three of the companies provided only general information and not the expert advice the companies promised to provide. The experts we spoke with agreed that the companies’ claims and test results are both ambiguous and misleading. Further, they felt that consumers who are concerned about their health should consult directly with their physicians instead of purchasing these kinds of DTC genetic tests. See appendix I for comprehensive information on the test results we received for each donor. Different companies often provide different results for identical DNA: Each donor received risk predictions for the 15 diseases that varied from company to company, demonstrating that identical DNA samples produced contradictory results. Specifically, in reviewing the test results across all four companies for the donors’ factual profiles, we found that Donor 1 had contradictory results for 11 diseases, Donor 2 for 9 diseases, Donor 3 for 12 diseases, Donor 4 for 10 diseases, and Donor 5 for 9 diseases. 
Specific examples of these contradictory predictions are listed below; note that some of the diseases we compared were only tested by three of the four companies. To facilitate comparison among companies, we chose to use the terms “below average,” “average,” and “above average” to describe the risk predictions we received; the exact language used by each of the companies is reprinted in appendix I. For Donor 1, Company 1 predicted an above-average risk of developing leukemia, while Company 2 predicted a below-average risk, and Company 3 reported that she had an average risk for developing the disease. In addition, Companies 2 and 4 told the donor that her risk for contracting breast cancer was above average, but Companies 1 and 3 found her only to be at average risk. See figure 1. Companies 1 and 2 claimed that Donor 2 had an above-average risk of developing type 1 diabetes, while Company 3 reported that she was at below-average risk for the disease. Further, Company 2 predicted she was at above-average risk for restless leg syndrome, Company 1 claimed she was at below-average risk for the condition, and Company 4 found that she was at average risk. See figure 2. Company 4 claimed that Donor 3’s risk of developing prostate cancer was above average, Company 3 found that he was at below-average risk, and Companies 1 and 2 found that he was at average risk. For hypertension, Company 3 found that he had an above-average risk of developing the condition, Company 2 found that he was at below-average risk, and Company 1 found he was at average risk. See figure 3. Donor 4 was told by Companies 1 and 4 that he was at above-average risk for celiac disease, but Company 2 reported that he was only at average risk. In addition, Companies 1 and 4 found that he was at below-average risk for multiple sclerosis, while Companies 2 and 3 found that he was at average risk. See figure 4. 
For Donor 5, Companies 2 and 3 reported an above-average risk for heart attacks, and Companies 1 and 4 identified only an average risk. Company 2 found him to be at below-average risk for atrial fibrillation, while Companies 1, 3, and 4 predicted an average risk. See figure 5. These contradictions can be attributed in part to the fact that the companies analyzed different genetic “markers” in assessing the donors’ risk for disease. As described in a recent article published in the science journal Nature, researchers determine which markers occur more frequently in patients with a specific disease by conducting “genome-wide association studies, which survey hundreds of thousands or millions of markers across control and disease populations.” DTC companies use these publicly available studies to decide which markers to include in their analyses, but none of the companies we investigated used the exact same markers in their tests. For example, Company 1 looked at 5 risk markers for prostate cancer, while Company 4 looked at 18 risk markers. In our post-test interviews, representatives from all four companies acknowledged that, in general, DTC genetic test companies test for different risk markers and that this could result in companies having different results for identical DNA. When we asked the representatives whether they thought that any DTC genetic test companies currently on the market were more accurate than others, all claimed that their own companies’ tests were better than those offered by their competitors. For example, Company 1 said that it offers consumers more information than other companies because its results are based on both preliminary research reports as well as clinical data. 
Company 2 claimed that other companies do not test for as many markers as it does and that while none of the companies are “wrong,” using more markers is “probably more accurate.” Company 2 also stated that disparate test results from different companies are “caused, in part, due to a lack of guidance from the federal government, CDC in particular.” Company 3 similarly claimed to test for more markers than other companies and stated that its test is “the best.” Company 3 also said that there is a movement within the DTC genetic test industry to standardize test results, but that such standardization is a work in progress. Finally, Company 4 claimed that it uses stricter criteria to select risk markers than other companies. Company 4 also told us that it has been involved in a collaborative effort with other DTC genetic test companies to develop standard sets of markers, but stated that there are many unresolved differences in philosophy and approach. When we asked genetics experts if any of the companies’ markers and disease predictions were actually more accurate than the others, they told us that there are too many uncertainties and ambiguities in this type of testing to rely on any of the results. Unlike well-established genetic testing for diseases like cystic fibrosis, the experts feel that these tests are “promising for research, but the application is premature.” In other words, “each company’s results could be internally consistent, but not tell the full story…. the science of risk prediction based on genetic markers is not fully worked out, and that the limitations inherent in this sort of risk prediction have not been adequately disclosed.” As one expert further noted, “the fact that different companies, using the same samples, predict different…directions of risk is telling and is important. 
It shows that we are nowhere near really being able to interpret .” We also asked our experts if any of our donors should be concerned if the companies all agreed on a risk prediction; for example, all four companies told Donor 1 she was at increased risk for Alzheimer’s disease. The experts told us this consensus means very little because there are so many demographic, environmental, and lifestyle factors that contribute to the occurrence of the types of diseases tested by the four companies. Risk predictions sometimes conflict with diagnosed medical conditions or family history: Four of our five donors received test results that conflicted with their factual medical conditions and family histories. When we asked the experts about these discrepancies, they told us that the results from these DTC tests are not conclusive because the tests are not diagnostic, as is noted on all of the companies’ Web sites. Because risks are probabilistic by definition, it is very likely that consumers will receive results from these companies that do not comport with their knowledge of their own medical histories. However, one expert noted that the discrepancies between actual health and the predictions made by these companies also serve to illustrate the lack of robustness of such predictive tests. Moreover, experts fear that consumers may misinterpret the test results because they do not understand such distinctions. For example, a consumer with a strong family history of heart disease may be falsely reassured by below-average risk predictions related to heart attacks and consequently make poor health choices. In fact, one expert told us that “family history is still by far the most consistent risk factor for common chronic conditions. 
The presence of family history increases the risk of disease regardless of genetic variants and the current genetic variants do not explain the familial clustering of diseases.” Another expert stated that “the most accurate way for these companies to predict disease risks would be for them to charge consumers $500 for DNA and family medical history information, throw out the DNA, and then make predictions based solely on the family history information.” Examples we identified include the following: Donor 2 has a family history of heart disease yet all four companies predicted that she was at average risk for having a heart attack. Donor 2 also has a family history of type 1 diabetes, but Company 3 reported that she was at below-average risk for the disease. Donor 3 has a family history of heart disease, but Companies 1, 2, and 3 reported that he was at average risk for having a heart attack and Company 4 reported he was at below-average risk. Donor 4 had a pacemaker implanted 13 years ago to treat atrial fibrillation. However, Companies 1 and 2 found that he was at below-average risk for developing atrial fibrillation, and Companies 3 and 4 claimed that he was at average risk. Donor 4 is also a colon cancer survivor, but Company 2 reported that he was at average risk of developing the disease. Donor 5 has type 2 diabetes, but Companies 1, 2, and 3 indicated that he had an average risk of developing the disease. Donor 5 is also overweight, but all four companies found him to be at average risk for obesity. In our post-test interviews, representatives from all four companies reiterated that their tests are not diagnostic, but they all believe that their tests provide consumers and their doctors with useful information. Specifically, Company 1 stressed that its tests empower consumers to recognize their risk of developing a health-related condition and then take the information to a doctor for further discussion. 
Company 2 emphasized that its tests provide consumers with the “incentive” to be “aggressive” about their health, while Company 3 said its goal is to “empower individuals with information to help them make necessary lifestyle changes.” Similarly, Company 4 stated that its risk predictions are a useful first step in that they offer “something for the consumer and their physician to consider in deciding whether or when to proceed with more invasive or costly tests.” However, experts we spoke with cautioned that most doctors are not adequately prepared to use DTC genetic test information to treat patients. In addition, experts noted that there is currently no data or other evidence to suggest that consumers have taken steps to improve their health as a result of taking DTC genetic tests. As one expert noted, “even if such information is found to be an especially effective motivator of behavioral change, we’re in trouble…because for everyone you find who is at increased disease risk, you’ll find another who is at decreased risk. So if this information is actually powerful in motivating behavior then it will also motivate undesirable behaviors in those found to be at low risk.” Fictitious profiles did not receive complete test results: Many of the genome-wide association studies the companies use to make risk predictions apply only to those of European ancestry. Consequently, our fictitious Asian and African American donors did not always receive risk predictions that were applicable to their race or ethnicity, although the companies either did not disclose these limitations prior to purchase or placed them in lengthy consent forms. The experts we spoke to agreed that these limitations should be “clearly disclosed upfront” and suggested that our fictitious donors try to get their money back. Companies 2 and 3 did give us a refund, but Company 1 refused and Company 4 never responded to our request. 
In our post-test interviews, company representatives acknowledged that race and ethnicity do affect disease risk predictions, but that most genetic research has only been done on persons of European ancestry and therefore such individuals receive more accurate results. Representatives from Company 1 also said that the company can provide only current information and that one of its primary goals is to expand upon this research by collecting DNA from as many persons as possible. Further, Companies 2 and 4 stated that they believe they communicate this limitation to consumers on their Web sites or in their test result reports, though our observations do not support this claim. Examples of the discrepancies we identified include the following: Company 1 provided Donor 1’s fictitious African American profile with test results based on her race for just 1 of the 15 diseases we compared: type 2 diabetes. For the remaining diseases, Company 1 provided a risk prediction but included a disclaimer, such as “this result applies to people of European ancestry. We cannot yet compute more precise odds” for those of African American descent. However, Company 1 did not explicitly disclose the fact that African Americans would receive incomplete results prior to purchase, even though it did ask consumers to specify their ethnicity as part of the purchase process. The company only vaguely refers to any testing limitations on the first page of its consent form, which states that “gene/disease associations are typically based on ethnicity and the associations may not have been studied in many world populations and may not apply in the same or similar ways across populations.” Company 2 claimed on its Web site that it had “better coverage [of genes] associated with the most important diseases for all ethnicities” than its competitors. However, the company provided Donor 2’s fictitious Asian profile with test results for just 6 of the 15 diseases we compared. 
The company did not explain these discrepancies and did not disclose the testing limitations prior to purchase, even though it requested that consumers specify their race or ethnicity as part of the purchase process. The only references to these limitations are made in the “frequently asked questions” section and on page six of an eight-page service agreement, where the company notes that “the genetic result reported may in some cases only be applicable to a certain group of people, e.g. based on gender, ethnicity, lifestyle, family history etc. that you may or may not belong to.” Company 3 sent Donor 3’s fictitious African American profile results for just 3 of the 15 conditions we compared. The company did not disclose this limitation prior to purchase even though it requested that consumers specify their race or ethnicity during the purchase process. For 10 of the 15 conditions we compared, Company 4 sent all of our donors results that applied only to individuals of European ancestry. However, for restless leg syndrome, the predictions were accompanied by the following statement: “most conditions have only been studied in people of European ancestry. But this condition is a little different.” For atrial fibrillation, colon cancer, type 2 diabetes, and heart attack, the predictions were accompanied by the following statement “most conditions have only been studied in people of European ancestry, but this one also has been studied in other groups.” The company provided no additional explanation as to how these differences applied to our donors. The only other reference to testing limitations is made on page five of a nine-page consent form, where the company notes that “most of the published studies in this area of genetic research have focused on people of Western European descent. 
We do not know if, or to what extent, these results apply to people of other backgrounds.” Company 1 provided conflicting predictions for the same DNA within the same test result report: Company 1 provided our donors with conflicting risk predictions for atrial fibrillation, celiac disease, and obesity. In reviewing the test results for just the factual profiles, we observed the following: Donor 1 received a “clinical report” predicting that she had an average risk for developing atrial fibrillation and a “research report” stating that she was at below-average risk for the disease. Donor 2 received a “clinical report” stating that she was at below-average risk of developing celiac disease and a “research report” claiming that she was at above-average risk. Donor 4 received one “research report” claiming that he was at above-average risk for obesity and another “research report” stating that he was at average risk. According to information in the test results, the company distinguishes between clinical and research reports by noting that predictions based on the clinical reports are for “conditions and traits for which there are genetic associations supported by multiple, large, peer-reviewed studies.” In contrast, the research reports provide information “that has not yet gained enough scientific consensus to be included in our clinical reports.” However, there is no additional information explaining how consumers should interpret the results. Because the company does not offer any follow-up consultations on test results, our fictitious donors could not request clarification. When we interviewed representatives from Company 1 about this issue after our testing, they simply reiterated the information contained in the results, describing research reports as being peer reviewed and “almost clinical” but noting that clinical reports are “four star” in that they are widely accepted according to scientific standards. 
Follow-up consultations provide only general information: As part of the test results, all four companies provide generally accepted health information related to the diseases that were tested, including a description of symptoms, treatments, and methods of prevention. This information is not targeted to specific consumers; all of our donors’ results contained the same descriptions of treatments and methods of prevention, regardless of the risk predictions they received. For example, all the companies note that stopping smoking and increasing exercise are ways to reduce the risk for heart attacks. Representatives for Company 4 also encouraged consumers to make dietary changes such as adopting a Mediterranean diet or eating curry to prevent Alzheimer’s disease, claims that cannot be proven, according to our experts. To supplement this information, Companies 2, 3, and 4 offer follow-up consultations. Only Company 4 has U.S. board-certified genetic counselors on staff for this purpose, but all three companies claimed on their Web sites that their representatives would help consumers understand the implications of their disease risk predictions. However, for the most part, these representatives provided our donors with little guidance beyond the information contained in the test reports; at times, it seemed as though they were simply reading information directly from these reports. When our donors asked for more information on alarming results that indicated that they were at increased risk for serious diseases like colon cancer and Alzheimer’s disease, representatives for Companies 2 and 3 pointed out symptoms to be aware of, but acknowledged that there is very little the donors could do to mitigate these risks. Representatives for Companies 2 and 4 also conceded that the donors’ own doctors would probably not know what to do with the test results, a fact that our experts repeatedly noted. 
Examples include the following: Company 2 offers follow-up consultations with “experts” to help consumers “interpret their results.” In our post-test interviews, the company further noted that it provides the option of speaking with genetic counselors or a medical geneticist, but that consumers rarely exercise this option. Because the company is located outside the country, we were unable to determine whether all of its counselors are board certified in the United States; however, one counselor told us that he was not certified. During one of our undercover follow-up calls, Donor 4 asked what to do about his test results in general and what lifestyle changes he should make as a result. The representative told Donor 4 that he could not tell him what to do because he was not a physician and that the donor should take his results to a physician if he wanted advice on making any changes. When Donor 4 expressed concern that his doctor may not know what to do with the test results, the expert told him “True, not all physicians are familiar with these tests, so if you were to take it into a physician’s office, they may not be familiar with it.” Furthermore, when discussing Donor 3’s increased risk for colon cancer, one of Company 2’s experts told our donor that while he should become familiar with the symptoms such as blood in the stool, there was not much else he could do because “colon cancer is quite silent.” Company 3 states that “because of the complexity and inherent uncertainties in genetic information, we recommend that you discuss the results of your genetic test with a genetic professional….Our on-staff Genetic Counselors are available any time to review your…results with you.” In our post-test interviews, the company further claimed that its genetic counselors are certified by the American Board of Genetic Counseling and that the counselors review family history and provide consumers with additional information that is not in the test results. 
However, our donors spoke to the same person, who admitted that she was not a board-certified genetic counselor. She told us that she had completed her master’s in genetic counseling and just had to take her test to become licensed. Donor 5 called Company 3 because he was extremely concerned about the company’s prediction that he had genetic markers that are highly correlated with Alzheimer’s disease. Instead of providing additional information, the counselor simply acknowledged that “there is no cure or prevention strategy with Alzheimer’s.” Company 4 notes that its “genetic counselors are healthcare professionals who are trained to help you understand what genetic information means for you and for your family.” In our post-test interview, the company stressed that its counselors explain the results, discuss beneficial next steps, and ensure that consumers and their physicians understand the meaning and limitations of the tests. However, when Donor 2 asked what she could do about her test results, the counselor told her that she could take the results to a physician. When Donor 2 pressed the counselor about whether a doctor would know what to do, the counselor responded “With this stuff? Probably not, no, I think they’re learning just like everyone else.” Posing as consumers seeking information about genetic testing on the Internet and through phone calls and face-to-face meetings, we found that 10 of the 15 companies we investigated engaged in some form of fraudulent, deceptive, or otherwise questionable marketing practices. For example, at least four companies claimed that a consumer’s DNA could be used to create personalized supplements to cure diseases. One company’s representative fraudulently used endorsements from high-profile athletes to try to convince our undercover investigators to purchase its supplements. 
He also told our fictitious consumers that they could earn commission checks and receive free supplements if they could convince their friends to purchase the products. More detailed information on our experiences with this company follows table 2. Another flagrant example of deceptive marketing involved several companies’ claims that they could predict which sports children would excel in based on DNA analysis. We also found examples of highly misleading representations about the reliability of the tests and the ability of health care practitioners to use the results to help treat patients. In addition, two companies are placing consumers’ privacy at risk by condoning the potentially illegal practice of testing DNA without prior consent. Selected audio clips from our undercover calls and meetings are available at http://www.gao.gov/products/GAO-10-847T. Table 2 contains a selection of representations made by these companies. Note that companies 1 through 4 are the same companies we proactively tested, as discussed earlier in this testimony. Company 5: On its Web site, Company 5 claimed that it would use a consumer’s DNA to “create a personalized formula for nutritional supplements and skin repair serum with 100% active ingredients individually selected to enhance or diminish the biological processes causing you to age.” To investigate these claims, we posed as a fictitious consumer interested in purchasing the product and met in person with a company representative. During our initial meeting, the representative not only fraudulently suggested that Michael Phelps and representatives for Lance Armstrong endorsed the product, he also implied that the company’s supplements could cure high cholesterol and arthritis, claims that one of our experts characterized as “absolute lies.” Moreover, the FDA and the National Institutes of Health have clearly stated that no dietary supplement can treat, prevent, or cure any disease. 
As part of the company’s promotional materials we found that the company’s DNA assessment cost $225 and that the customized supplements cost about $145 per month. However, if our fictitious consumer immediately purchased a 3-month supply of supplements, she would be able to get the DNA test for free. The representative also told her that she could become a company affiliate and earn commission checks and free products by recruiting new affiliates. She, along with another fictitious consumer, subsequently registered as company affiliates, and ultimately received commission checks totaling more than $250. In addition to sending us the test kits, the company sent us packages of starter supplements in a bag that was not labeled with an ingredient list. In an attempt to compare the test results from Company 5 with the results we received from Companies 1 through 4, we again used the same five donors and replicated the same methodology: submitting DNA samples using one factual profile and one fictitious profile. However, when we received the results, we found that Company 5 did not provide a set of risk predictions for specific diseases, making it impossible for us to compare the results against those we received from the other four companies. Instead, the company sent our donors a list of gene variants tested, a description of bodily functions affected by those variants, and a determination of whether the donors needed additional “nutritional support” to maintain health. In comparing the results, we found that each donor appeared to have a unique assessment and that using the fictitious profile did not seem to affect the results. However, the results were so ambiguous and confusing that they did not provide meaningful information. For example: Donor 1 was told that she needed “maximum support” to maintain the “VDR gene” which accounts for “75% of the entire genetic influence on bone density” among healthy people. 
Maximum support means that the “protein molecule expressing a specific enzyme, hormone, cytokine or structural protein is functioning minimally” and maximum nutritional support is needed to keep the body functioning optimally. Donor 5 was told that he needed “added support” to maintain the “EPHX” gene, which “detoxifies” epoxides or “highly reactive foreign chemicals present in cigarette smoke, car exhaust, charcoal-grilled meat, smoke from wood burning, pesticides, and alcohol.” “Added support” means that the gene is functioning less than optimally and therefore needs added nutritional support. According to one of the experts we spoke with, these claims are simply “nonsensical” and “while it is true that one can find alleles of many of these genes that don’t have the same activity as ‘normal,’ we have no idea of (a) whether that reduced activity has any real health implications and (b) what one would reasonably do about it if so.” Along with the test results, the company sent supplements that it claimed were “blended” based on our donors’ DNA assessments. The supplements arrived in the same type of unlabeled bag as the starter supplements. This time, the ingredients were printed inside the test result booklet sent to each donor and included substances such as raspberry juice powder, green tea extract, and garlic powder. The recommended dose is 10 supplements per day. Based on a review of all the ingredient lists, our five donors appeared to get supplements with different combinations of substances. However, we did not test the supplements to verify their contents. Moreover, an expert we spoke with told us that there is no scientific basis for claiming that supplements can be customized to DNA. In post-test interviews, representatives from Company 5 told us that the company differs from others in that it does not attempt to diagnose or calculate a predisposition to any disease. 
Instead, the company said that it focuses on the overall health and well-being of its clients by creating personalized nutritional supplements based on each client’s specific DNA. When we asked about the ingredients in the supplements, the company told us that all supplements have a base formula of ingredients that its scientists have determined to be “beneficial for everyone.” Additional nutrients are then added to the base formula based on deficiencies identified by the company’s DNA test. When we asked about the endorsements, we were told that several celebrities and professional athletes use the company’s products, but that many of these high-profile clients do not want to disclose this affiliation. We briefed FDA, the National Institutes of Health, and FTC on our findings on May 25, 2010; June 7, 2010; and June 17, 2010, respectively. In addition, we have referred all the companies we investigated to FDA and FTC for appropriate action. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For additional information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. The following individuals made key contributions to this testimony: Jennifer Costello and Andrew O’Connell, Assistant Directors; Eric Eskew; Grant Fleming; Christine Hodakievic; Barbara Lewis; Vicki McClure; Ramon Rodriguez; Anthony Salvemini; Barry Shillito; Tim Walker; John Wilbur; and Emily Wold. This appendix provides (1) a description of both the factual and fictitious profiles used by each donor and (2) tables documenting the risk predictions we received from all four companies for the 15 diseases we compared. To the extent possible, we have used language directly from the test results in describing the risk predictions. 
However, Company 2 did not use terms like “average” or “below average” to describe risk. Instead, the company used charts showing each consumer’s risk level as compared to others with the consumer’s gender and ethnicity or as compared to those of European ancestry. The results were color coded, with green to light green appearing to correspond to a below-average risk level, yellow corresponding to an average risk level, and orange and red corresponding to an above-average risk level. To facilitate comparison, we chose to use these corresponding terms to describe the results, as shown in the table. In addition, Company 1 used two different types of reports in its test results: clinical and research. According to the company, the clinical reports contain “information about conditions and traits for which there are genetic associations supported by multiple, large, peer-reviewed studies.” Research reports contain “information from research that has not yet gained enough scientific consensus to be included in our clinical reports.” Where applicable, we noted when a risk prediction was derived from a research report; all the other predictions were derived from the clinical reports. Donor 1: Donor 1 is a 37-year-old Caucasian female who eats a balanced diet and exercises regularly. She has elevated cholesterol and arthritis in her back. In addition, she has a strong family history of colon cancer and a grandparent who was diagnosed with dementia. In Donor 1’s fictitious profile, she is a 68-year-old African American female who is overweight and rarely exercises. She has type 2 diabetes, hypertension, and asthma, but has no family history of colon cancer or dementia. Donor 2: Donor 2 is a 41-year-old Caucasian female. She is in good health; however she has a family history of breast cancer, type 1 diabetes, and heart disease. In Donor 2’s fictitious profile, she is a 19-year-old Asian female who smokes, drinks, and uses recreational drugs. 
She suffers from heart arrhythmias and an elevated resting heart rate, but has no family history of breast cancer or diabetes. Donor 3: Donor 3 is a 48-year-old Caucasian male who has never smoked and rarely drinks. The donor has asthma as well as a family history of heart disease. In Donor 3’s fictitious profile, he is a 69-year-old African American male who is overweight, smokes, and is in somewhat poor health. He has a family history of bone and lung cancer, but no history of asthma or heart disease. Donor 4: Donor 4 is a 61-year-old Caucasian male who smokes. The donor has elevated cholesterol, has an elevated resting heart rate, and has had colon cancer. Thirteen years ago, the donor had a pacemaker implanted to treat atrial fibrillation. In Donor 4’s fictitious profile, he is a 53-year-old Caucasian male who has never smoked. He has hypertension and prostate cancer but has no family history of colon cancer or atrial fibrillation. Donor 5: Donor 5 is a 63-year-old Caucasian male who eats a balanced diet and exercises. He has elevated cholesterol and blood sugar. The donor suffers from type 2 diabetes and is obese. He also has a family history of Alzheimer’s disease. In Donor 5’s fictitious profile, he is a 29-year-old Hispanic male who chews tobacco and suffers from asthma. However, he has no family history of diabetes or Alzheimer’s disease.
In 2006, GAO investigated companies selling direct-to-consumer (DTC) genetic tests and testified that these companies made medically unproven disease predictions. Although new companies have since been touted as being more reputable--Time named one company's test 2008's "invention of the year"--experts remain concerned that the test results mislead consumers. GAO was asked to investigate DTC genetic tests currently on the market and the advertising methods used to sell these tests. GAO purchased 10 tests each from four companies, for $299 to $999 per test. GAO then selected five donors and sent two DNA samples from each donor to each company: one using factual information about the donor and one using fictitious information, such as incorrect age and race or ethnicity. After comparing risk predictions that the donors received for 15 diseases, GAO made undercover calls to the companies seeking health advice. GAO did not conduct a scientific study but instead documented observations that could be made by any consumer. To assess whether the tests provided any medically useful information, GAO consulted with genetics experts. GAO also interviewed representatives from each company. To investigate advertising methods, GAO made undercover contact with 15 DTC companies, including the 4 tested, and asked about supplement sales, test reliability, and privacy policies. GAO again consulted with experts about the veracity of the claims. GAO's fictitious consumers received test results that are misleading and of little or no practical use. For example, GAO's donors often received disease risk predictions that varied across the four companies, indicating that identical DNA samples yield contradictory results. One donor was told that he was at below-average, average, and above-average risk for prostate cancer and hypertension. 
GAO's donors also received DNA-based disease predictions that conflicted with their actual medical conditions--one donor who had a pacemaker implanted 13 years ago to treat an irregular heartbeat was told that he was at decreased risk for developing such a condition. Also, none of the companies could provide GAO's fictitious African American and Asian donors with complete test results, and none explicitly disclosed this limitation prior to purchase. Further, follow-up consultations offered by three of the companies failed to provide the expert advice that the companies promised. In post-test interviews with GAO, each of the companies claimed that its results were more accurate than the others'. Although the experts GAO spoke with believe that these tests show promise for the future, they agreed that consumers should not rely on any of the results at this time. As one expert said, "the fact that different companies, using the same samples, predict different directions of risk is telling and is important. It shows that we are nowhere near really being able to interpret [such tests]." GAO also found 10 egregious examples of deceptive marketing, including claims made by four companies that a consumer's DNA could be used to create personalized supplements to cure diseases. Two of these companies further stated that their supplements could "repair damaged DNA" or cure disease, even though experts confirmed there is no scientific basis for such claims. One company representative even fraudulently used endorsements from high-profile athletes to convince GAO's fictitious consumer to purchase such supplements. Two other companies asserted that they could predict in which sports children would excel based on DNA analysis, claims that an expert characterized as "complete garbage." Further, two companies told GAO's fictitious consumer that she could secretly test her fiancé's DNA to "surprise" him with test results--though this practice is restricted in 33 states.
Perhaps most disturbing, one company told a donor that an above-average risk prediction for breast cancer meant she was "in the high risk of pretty much getting" the disease, a statement that experts found to be "horrifying" because it implies the test is diagnostic. To hear clips of undercover contacts, see http://www.gao.gov/products/GAO-10-847T. GAO has referred all the companies it investigated to the Food and Drug Administration and Federal Trade Commission for appropriate action.
The focus of patent examination is to determine whether the invention in a patent application satisfies the legal requirements for a patent, including that the invention be novel and not obvious. As shown in figure 1, USPTO’s patent examination process involves a variety of steps, at least one of which includes prior art searches that examiners use to determine whether an invention is novel and not obvious. USPTO’s manual for patent examiners establishes certain requirements and processes that examiners must follow in examining patent applications, including performing prior art searches. There are generally two types of prior art—patent literature or nonpatent literature. Patent literature consists of previously issued U.S. or foreign patents and published patent applications. Nonpatent literature consists of other publicly available documents and can include such things as product manuals, standards established by international organizations, textbooks, periodicals, or conference presentations. Both patent and nonpatent literature may be written in a language other than English, referred to as foreign-language art in this report. USPTO’s manual for patent examiners requires them to conduct a thorough prior art search and directs them to consider U.S. patents, foreign patents, and nonpatent literature unless they can justify with reasonable certainty that no more pertinent prior art references can be found. Examiners are expected to complete their examination of an application in a certain number of hours. The time allotted varies depending on factors such as the technology and the seniority of the examiner. For example, an examiner reviewing an application related to wire fabrics and structure may be allotted about 14 hours for examination, while an examiner at the same experience level would be allotted about 32 hours for an application related to data processing, such as database and file management. 
The allotted time includes the time needed to review the application, perform a search for prior art, and complete all office actions. Examiners have minimum production goals, based on the time allotted, for the number of office actions they must complete, and examiners may earn bonuses for exceeding these minimum production goals. USPTO uses several different information technology systems to assist examiners in conducting prior art searches. For example, to search U.S. patent literature, examiners use two systems, the Examiner’s Automated Search Tool (EAST) and the Web-Based Examiner Search Tool (WEST), to search the full texts of published patent applications since 2001, patents granted since 1970, and optically scanned U.S. patents granted from 1920 through 1970. These systems also include abstracts of some foreign patents. Examiners may access additional foreign patent documents through other web-based tools. In addition, USPTO’s Scientific and Technical Information Center (STIC) operates systems that can search U.S. patent literature and that store some nonpatent literature. STIC also provides subscriptions to various web-based sources of nonpatent literature. According to a USPTO document, the agency had subscriptions to 119 different journals or external databases in 2014. USPTO’s current search tools do not provide examiners with immediate access to computer-generated translations, known as machine translations; however, examiners can request human and machine translation services from STIC and three contracted translation vendors that currently cover 35 languages. As of May 2015, USPTO had nearly 8,300 patent examiners across the eight technology centers we reviewed (see fig. 2). USPTO uses the General Schedule (GS) classification system for patent examiners, whose levels range from GS-5 to GS-15. Examiners at the GS-14 level or above (44 percent of the examiners in the technology centers we reviewed, as shown in fig. 3) are generally primary examiners. 
Primary examiners may accept or reject a patent application without additional review. This level of authority is in contrast to junior examiners—most examiners below the GS-14 level—whose work must first be reviewed by a primary examiner before an office action can be sent to the applicant. At the GS-13 level, some examiners are in the process of becoming primary examiners. Supervisory patent examiners are at the GS-15 level and are responsible for the day-to-day management of examiners. In addition to applying for a patent from USPTO, an inventor may also seek patent protection in other countries for the same invention by filing in multiple patent offices. Such interrelated patent applications are described as a patent family. The World Intellectual Property Organization has estimated that approximately half of all applications worldwide are repetitive filings in a patent family, and the rest are initial filings. According to the World Intellectual Property Organization, around 2.7 million patent applications were filed worldwide in 2014, of which 2.2 million applications were filed with patent offices in the United States, China, Japan, South Korea, and Europe—known as the IP5. These five offices, including EPO and JPO, each receive hundreds of thousands of patent applications each year. EPO issues patents that cover 42 countries, most of which are member countries that are party to the European Patent Convention. Applicants may apply to national patent offices or apply through EPO for coverage in some or all of these 42 countries. EPO has three official languages, one of which must be used for processing an application. These three languages are English (about 80 percent of applications), French (about 5 percent), and German (about 15 percent), according to EPO officials. EPO and JPO have about one-half and one-fifth as many examiners as USPTO, respectively.
According to JPO officials, there is a limit on the number of federal employees JPO can have; therefore, 494 of its 1,702 examiners are fixed-term rather than permanent employees, and JPO has begun outsourcing aspects of prior art search. JPO officials stated that outsourcing some aspects of prior art searches based on instructions from a government examiner frees up the examiners’ time and allows the office to review more applications. Table 1 describes the workload and workforce of USPTO, EPO, and JPO. USPTO examiners face a variety of challenges in identifying relevant prior art during patent examination. As shown in figure 4, the experts we interviewed and examiners we surveyed cited challenges related to certain attributes of prior art and patent applications and USPTO examination policies, search tools, and human capital management. Our survey results show the extent of the challenges may vary by technology center or examiners’ GS level (see app. III). Because we surveyed a generalizable stratified random sample of examiners and adjusted for nonresponse, our results provide estimates for the entire population of examiners in our study and, when reported by technology center, for each of the technology centers we reviewed. Several attributes of prior art and patent applications present challenges for examiners in identifying relevant prior art, including the quantity of prior art, amount and relevance of prior art cited by applicants, availability of prior art, and clarity of patent applications. Quantity of prior art. The large volume of prior art available from multiple sources makes searching for relevant prior art challenging, according to most experts we interviewed as well as examiners responding to our survey. For example, one expert noted that the amount of patents, publications, and other nonpatent literature has grown exponentially, making it harder to find relevant prior art in the time allotted. 
Another expert said that technological innovations are occurring at a tremendous rate, and that the growing volume of prior art domestically and worldwide can be overwhelming. Based on our survey, we estimate that 45 percent of all examiners in the eight technology centers we reviewed find that the large quantity of art makes it somewhat or much more difficult to complete a thorough prior art search in the time allotted, while fewer examiners—34 percent—find that the quantity makes it somewhat or much easier. However, responses varied among technology centers, as shown in appendix III. For example, 30 percent of examiners in the Computer Networks, Multiplex Communication, Video Distribution, and Security technology center find that the quantity of art makes it somewhat or much more difficult to complete a thorough prior art search in the time allotted, compared to 60 percent of examiners in the Mechanical Engineering, Manufacturing, and Products technology center and 60 percent of examiners in the Chemical and Materials Engineering technology center. Amount and relevance of prior art cited by applicants. Examiners we surveyed reported difficulties with the amount and relevance of prior art references provided by applicants. USPTO requires applicants and others assisting in filing an application to submit all information known to be material to patentability. This information may include search results from foreign patent offices or publications known to the individual. According to USPTO policy, examiners will consider this information when reviewing a patent application, which may require reviewing numerous prior art references submitted by the applicant. Based on our survey, we estimate that 82 percent of examiners sometimes, often, or always encountered applications with what they considered an excessive number of submitted art references in the past quarter. 
We estimate that for most examiners (64 percent), excessive references make it somewhat or much more difficult to complete a thorough prior art search in the time allotted. Considering all of the prior art references submitted by applicants can be particularly challenging for examiners because applicants are generally not required to explain the relevance of the references or to point examiners to the particular portions of references that are relevant. In commenting on this issue in our survey, one examiner recalled often receiving information disclosure statements from applicants with numerous prior art references, of which only a handful were relevant. Based on our survey, we estimate that 88 percent of examiners sometimes, often, or always encountered applications with irrelevant references in the past quarter. Moreover, most examiners (57 percent) find that irrelevant references make it somewhat or much more difficult to complete a thorough prior art search in the time allotted. In contrast, 87 percent of examiners find that an application with relevant references makes it somewhat or much easier to complete thorough prior art searches in the time allotted. Availability of prior art. The availability of prior art and difficulties obtaining certain types of prior art are also challenges, according to most experts we interviewed as well as examiners we surveyed. For example, some relevant prior art may require a fee to access, may not be in a text-searchable format, may not be in a database, or may otherwise be difficult to access. In particular, experts told us that certain general types of prior art are more difficult to find, such as nonpatent literature overall and software-related prior art (most experts) and foreign-language prior art (6 of the 18 experts).
Specific types of nonpatent literature that experts identified as difficult to find include product documentation and summaries, product databases, and user manuals; offers for sale and public use; and information from standards-setting organizations. Prior art related to computer technologies has also been difficult to find, according to one expert we interviewed, in part because a lot of information is commonly known in this field but may not be found by searching a public database. In addition, one expert we interviewed suggested that examiners do not have ready access to textbooks that are a good source of prior art. Another suggested that prior art from before the mid-1970s is difficult to find because patents issued before then have not been fully digitized. This expert stated that this is a particularly challenging issue for examiners in the mechanical technology centers because those examiners tend to use older art more often than other examiners. USPTO examiners we surveyed also reported difficulties obtaining relevant prior art from searches for certain types of prior art more than other types. In particular, on the basis of our survey, we estimate that 51 percent of all examiners find it somewhat or very difficult to obtain relevant art from searches for foreign-language nonpatent literature. Difficulties obtaining certain types of prior art may influence how often patent examiners search for them. For example, 8 of the 18 experts we interviewed suggested that examiners focus on searching patent literature and may not thoroughly search nonpatent literature. Similarly, our survey results in table 2 show that nearly all examiners always or often search for U.S. patents and applications (an estimated 99 percent); we also found that nearly all examiners always or often view this as the most relevant type of art they consider (an estimated 98 percent of examiners). 
In contrast, we estimate that 67 percent of examiners always or often search for foreign patents, and 20 percent of examiners always or often search for foreign-language nonpatent literature. More examiners also find that foreign patents and foreign-language nonpatent literature are difficult to obtain, compared to those that find U.S. patent literature difficult to obtain. In analyzing our survey results, we found that the difficulty examiners ascribed to finding foreign patent literature and foreign-language nonpatent literature was statistically associated with how often they reported searching for these types of prior art. How often examiners search for certain types of prior art, and how difficult examiners find those searches, also varies by technology center (see app. III). Clarity of patent applications. According to most of the experts we interviewed and examiners we surveyed, the clarity of applications can pose a challenge to finding relevant prior art. For example, as one expert noted, there often are no standard terms to describe technologies, and different applications may use different terms to describe the same thing. According to four of the experts we interviewed, the absence of standard terms is particularly a challenge for software-related applications. Inconsistent terminology can make it more difficult for examiners to find relevant prior art because searching for one term using keyword searches—one common method of searching for prior art described by USPTO officials—will not identify documents that use a different term. As shown in table 3, examiners reported that issues with the clarity of the application make it more difficult to complete thorough prior art searches in the time allotted. In addition, based on comments examiners made in our survey, examiners may face difficulties associated with applications that have been translated from a foreign language. 
For example, one examiner stated that “translation quality is often poor, and claims routinely contain non-standard industry terms. Issued patents and publications containing these non-standard terms also make searching in foreign collections exceptionally challenging because it is not possible to anticipate which synonyms to use.” In our patent quality report (GAO-16-490), we provide additional information on how the clarity of applications affects patent quality. The effect of unclear applications may be exacerbated by USPTO’s practice that examiners should attempt to identify all applicable grounds for rejecting a claim or claims during their first review—a practice called compact prosecution. According to USPTO’s manual for patent examiners, this practice aims to avoid unnecessary delays. Such delays could be caused by examiners waiting to continue examining an application until previously identified issues are resolved. However, the practice of compact prosecution may discourage examiners from resolving any issues of clarity or ambiguity before conducting their initial searches for relevant prior art. For example, as one examiner commented in our survey, in order to follow the compact prosecution practice, an examiner must guess what unclear claims mean in order to search for prior art related to the claims. If examiners do not have a clear understanding of the scope and claims of the invention, they may not choose the most appropriate keywords for conducting their prior art searches, which may lead examiners to miss relevant prior art that could be found with more relevant keywords. Several aspects of USPTO’s patent examination policies, prior art search tools, and human capital management present challenges for examiners in identifying relevant prior art, including the time pressures examiners experience for prior art searches, USPTO search tools and capabilities, the misclassification of patent applications, and examiners’ technical competence. 
Time pressures for prior art searches. According to most of the experts we interviewed and examiners we surveyed, time pressures may reduce examiners’ ability to conduct thorough prior art searches. These pressures relate to USPTO’s system for allotting an expected amount of time for examiners to complete an examination. For example, one expert noted that the amount of time examiners are allotted decreases as they become more experienced, and this may lead more senior examiners to increasingly rely on art they know well instead of searching for new art. As figure 5 shows, we estimate on the basis of our survey that 67 percent of examiners find they have somewhat or much less time than needed to complete thorough prior art searches given a typical workload. Our survey also found that examiners’ perception of the sufficiency of time for completing thorough prior art searches varies by technology center (see app. III). For example, an estimated 37 percent of examiners in the Mechanical Engineering, Manufacturing, and Products technology center reported having much less time than needed to complete a thorough prior art search, compared to 20 percent of examiners in the Biotechnology and Organic Chemistry technology center. In analyzing our survey results, we found that how often examiners searched for foreign patent literature, scientific articles or presentations, or foreign-language nonpatent literature was statistically associated with their description of the sufficiency of time they had to complete a thorough prior art search. Further, we asked examiners about overtime worked to meet their minimum production goals. A majority of examiners (an estimated 72 percent) worked voluntary/uncompensated overtime in the past 6 months to meet their goals, as shown in figure 6. An estimated 30 percent of examiners worked an average of more than 10 hours of voluntary/uncompensated overtime per biweekly period, although examiners’ overtime varied by GS level (see app. III). 
However, on the basis of our survey, an estimated 56 percent of examiners experience no pressure to work overtime. In a 2007 report on USPTO’s efforts to hire and retain an adequate workforce, we found that an estimated 70 percent of examiners worked voluntary/uncompensated overtime in the previous year. Our patent quality report (GAO-16-490) provides additional information on how time pressures may affect patent quality. Search tools and capabilities. Experts we interviewed and examiners we surveyed had mixed opinions on USPTO’s search tools and capabilities. Of the 18 experts, 10 agreed that the search tools and capabilities available to examiners create challenges to conducting thorough prior art searches. In general, these experts characterized USPTO’s search tools as being less advanced and more heavily reliant on keyword searching than other available tools. For example, one expert said that examiners’ electronic prior art searches predominantly rely on using keywords, and this is a problem if the examiners do not know the most appropriate keywords. Another expert suggested that USPTO needs more comprehensive databases of prior art. Based on our survey, examiners generally find that the search tools available to them from USPTO and from third parties make it easier to complete prior art searches, but they also find that other tools would help. Specifically, we estimate that a majority of examiners agree that certain search tools not currently available would make prior art searches somewhat or much easier, including a search engine that (1) can automatically search for concepts and synonyms related to the search terms entered by the examiner (an estimated 76 percent of examiners), and (2) automatically generates relevant art, without keyword entry, based on all the claims in an application (69 percent) or an application’s specification (69 percent). 
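To make concrete why a synonym- and concept-aware search engine would ease examiners' work, the following minimal Python sketch contrasts a verbatim keyword match with a search expanded through a synonym table. The documents, terms, and synonym list below are invented for illustration; they are not drawn from any USPTO search tool.

```python
# Hypothetical sketch: a verbatim keyword search misses prior art that
# describes the same concept with a different term, while a search
# expanded with known synonyms finds it. All data here is invented.

documents = {
    "US-1": "A fastener assembly comprising a threaded bolt and nut.",
    "US-2": "A screw-based coupling for joining panels.",
    "US-3": "A method for streaming video over a network.",
}

# Invented synonym table; a real tool might derive this from a thesaurus
# or a trained language model.
synonyms = {
    "bolt": {"bolt", "screw", "fastener"},
}

def keyword_search(query, docs):
    """Return IDs of documents containing the query term verbatim."""
    return sorted(d for d, text in docs.items() if query in text.lower())

def expanded_search(query, docs, syns):
    """Expand the query with known synonyms before matching."""
    terms = syns.get(query, {query})
    return sorted(d for d, text in docs.items()
                  if any(t in text.lower() for t in terms))

# A verbatim search for "bolt" misses US-2, which describes the same
# kind of invention as a "screw-based coupling".
print(keyword_search("bolt", documents))              # ['US-1']
print(expanded_search("bolt", documents, synonyms))   # ['US-1', 'US-2']
```

The sketch illustrates the survey finding in miniature: when applications and prior art use inconsistent terminology, the quality of a keyword search depends entirely on whether the examiner guesses the right terms, whereas synonym expansion recovers documents that use different wording for the same concept.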
In addition, a group of four supervisory patent examiners we interviewed said that it would be more efficient to search for prior art in one tool, with a single search method that covered multiple sources of prior art, including nonpatent literature. Currently, relevant nonpatent literature may appear in different journals or databases that cannot be searched with a single search function. Similarly, examiners commented in our survey about the difficulty of searching nonpatent literature or requested an easier way to search for it. For example, examiners requested a method for searching nonpatent literature that is integrated with their systems for searching patent literature. In some cases, USPTO examiners may not be able to use public search engines because searching for specific terms or concepts from an unpublished application may put the confidentiality of the application at risk. Examiners also requested improvements to USPTO’s current translation capabilities. On the basis of our survey, a majority of examiners (an estimated 86 percent) agree that having access to immediate machine translation of foreign-language documents would make it somewhat or much easier to complete a thorough prior art search in the time allotted, while about half of examiners (an estimated 50 percent) find that additional translators would make it somewhat or much easier. Misclassification of patent applications. According to examiners, another challenge they face is misclassification of patent applications. Misclassification occurs when an application is not classified into the group most closely associated with the invention. This can result in applications being routed to the wrong USPTO technology center or art unit for examination. 
Although only 2 of the 18 experts we interviewed raised concerns about the misclassification of patent applications, on the basis of our survey, an estimated 75 percent of examiners encountered misclassified applications sometimes, often, or always in the past quarter, and an estimated 76 percent of examiners find that the misclassification of patent applications makes it somewhat or much more difficult to complete a thorough prior art search in the time allotted. According to our survey, how often examiners encountered misclassified applications in the past quarter varied by technology center (see app. III). Misclassified applications pose particular challenges for examiners, according to examiners’ comments to our survey. For example, while examiners receiving misclassified applications can request that an application be transferred to a different art unit, they may encounter difficulties doing so, according to examiners’ comments in our survey. Misclassified applications that are not transferred can negatively affect examiners’ confidence in their work and the quality of the examination, based on examiners’ comments in our survey. This is because when applications are routed to the wrong art areas, they may be reviewed by examiners who do not have appropriate knowledge and experience to understand the invention or relevant prior art and determine appropriate search strategies and terms. Additionally, misclassified applications that are never corrected could be difficult for examiners to subsequently use as prior art in later examinations. This is because misclassified applications may be difficult to find when examiners search for patent literature using technology-specific patent classification categories. Ensuring examiners’ technical competence. According to most experts we interviewed, ensuring that examiners have sufficient and appropriate technical backgrounds, knowledge, or skills for conducting thorough prior art searches is also a challenge. 
The role of patent examiner is a difficult one that can take years to learn, and examiners with less education or work experience, or who are not abreast of advances in a particular technology area, may not have the technical knowledge necessary to identify relevant prior art. According to USPTO officials, an examiner’s technical knowledge enables him or her to understand the invention being searched, and if the examiner does not understand the invention, he or she may not know what to search for, where to look, and when to stop searching. In addition, we estimate on the basis of our survey that 82 percent of examiners find it somewhat or much easier to complete a thorough prior art search in the time allotted for applications with a subject matter in which they have knowledge of existing prior art based on their education or previous work experience. As of May 2015, approximately 39 percent of all examiners in the technology centers we reviewed had been at the agency for less than 5 years, and USPTO has historically faced challenges in retaining examiners. In addition, as of September 2015, a majority of examiners in the technology centers we reviewed (61 percent) did not have a degree beyond a bachelor’s degree when hired. USPTO officials told us that the agency has aimed to match new hires’ previous work experiences and educational backgrounds to technology centers. When we asked USPTO how often examiners in each technology center have technical work experience or education relevant to their art unit, agency officials answered “always” for five centers, “often” for three centers, and “sometimes” for one center. However, our survey found that in the past quarter, less than half of examiners—an estimated 42 percent—always or often encountered applications with a subject matter in which they have knowledge of existing prior art based on their education or previous work experience. 
The training examiners receive—particularly continuing training in evolving technologies—may also affect their ability to maintain the technical competence they need to effectively identify relevant prior art. Overall, an estimated 60 percent of examiners found that the continuing education they received from USPTO in their art area in the past year was at least somewhat useful. However, 22 percent of examiners had not taken such training or had not been offered such training, with responses by technology center ranging from an estimated 10 percent to 34 percent of examiners (see app. III). We found in 2005 that examiners were reluctant to attend voluntary training given the time demands involved. Similarly, according to the group of five examiners we interviewed in 2015, USPTO did not offer enough ongoing technical training and did not always give examiners sufficient time to complete training. EPO and JPO use several approaches that may help their examiners address challenges in identifying prior art similar to those cited by the experts we interviewed and by respondents to our survey of patent examiners. These approaches include worksharing, creating internal databases of nonpatent literature, using patent classification systems, adopting advanced search tools, hiring and training examiners to promote technical expertise, and incorporating review and audit procedures. Worksharing. EPO and JPO have systems to share search and examination results. The exchange of prior art searches or patent examination results among patent offices—referred to as worksharing—facilitates more efficient prior art searches for applications filed in multiple countries, according to foreign officials. As we discussed above, approximately half of all patent applications worldwide are part of patent families filed in multiple offices. To gain patent protection in multiple countries, inventors may file applications for the same invention in several different offices. 
Because the underlying invention is the same, the search and examination results at one office may be useful to examiners in another office. Therefore, worksharing can ameliorate the challenges posed by the quantity of potential prior art that examiners must search and the time pressure examiners face by allowing patent offices to leverage others’ work and knowledge. The IP5, a multilateral forum of the five largest patent offices, identified worksharing as the main tool for addressing an increasing number of patent applications, while helping the patent offices conduct timely, quality examinations. JPO officials noted that worksharing systems allow more efficient access to foreign patent literature and help JPO examiners more easily see foreign examiners’ work. JPO examiners still have to review the relevant literature themselves, but access to foreign examination results may help JPO examiners identify relevant art or sources of prior art. An EPO official also noted that worksharing could give offices access to search reports of unpublished applications at other offices, and that these search reports may include prior art relevant to similar applications filed with EPO. Creating internal databases of nonpatent literature. EPO and JPO incorporate specific nonpatent literature resources into their offices’ main search tools, which may allow examiners to more efficiently search both patent and nonpatent literature by keyword, rather than searching multiple sources individually. Officials at these offices said that incorporating these resources into their main search tools allows examiners to consider a wide array of nonpatent literature sources. According to EPO officials, EPO’s primary search tool allows examiners to use a single interface to search through all of EPO’s internal databases as well as some external databases of prior art. 
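The single-interface searching that EPO officials describe can be thought of as a thin federated layer over separate collections. The sketch below is a minimal illustration of that idea; the source functions and documents are hypothetical stand-ins, not EPO’s actual systems.

```python
def search_internal_patents(query):
    # Stand-in for an internal patent-literature index.
    docs = ["patent: chemical sensor array", "patent: optical sensor housing"]
    return [d for d in docs if query in d]

def search_journal_db(query):
    # Stand-in for a licensed nonpatent-literature database.
    docs = ["article: sensor calibration methods", "article: polymer synthesis"]
    return [d for d in docs if query in d]

# Every collection integrated into the search tool is registered here.
SOURCES = [search_internal_patents, search_journal_db]

def unified_search(query):
    """One query fanned out to every integrated source, with
    results merged into a single hit list for the examiner."""
    hits = []
    for source in SOURCES:
        hits.extend(source(query))
    return hits

# Patent and journal hits come back together, so the examiner
# does not have to query each collection separately.
print(unified_search("sensor"))
```

The design choice the offices describe is essentially which sources get registered in the federated layer versus left as external databases the examiner must open separately.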
In addition to creating an internal database of nonpatent literature documents to facilitate examiners’ searches, EPO also added descriptive metadata to index these documents to help examiners find the most relevant documents. EPO’s system also incorporates features that make it user-friendly—for example, grouping documents to avoid examiners facing dozens of copies of the same article from different sources. EPO officials stated that examiners find that these groupings enhance their ability to search through larger quantities of prior art. Overall, EPO officials identified their search engine and the large number of indexed prior art resources available within the search engine as strengths of the office in addressing the quantity of prior art available. EPO officials said that the office prefers to have internal collections of these resources so they may be integrated into its search tools, but some sources remain external and separate from EPO’s search systems. Because the cost of bringing sources into EPO’s search systems is high, the office needs a strong business case to justify the expense. According to these officials, publishers of academic journals and other nonpatent literature often raise subscription prices annually, so the office also must decide what to exclude, because there is simply too much nonpatent literature for the office to include everything. According to JPO officials, JPO has internal databases of both patent and nonpatent literature and also subscribes to external databases for nonpatent literature. JPO’s internal databases include JPO patent literature, some foreign patent literature, and selected nonpatent literature. JPO annually selects useful nonpatent literature to add to its database based on recommendations from an internal committee of examiners and officials. 
In addition, JPO uses commercial databases of nonpatent literature to augment its internal database and annually evaluates whether each external database is useful for prior art search. Using patent classification systems. EPO and JPO use classification systems to help examiners narrow their searches and find relevant patent literature even if the patent application uses inconsistent terms or is written in a foreign language. EPO and JPO classify applications and patents into groups that describe specific components of the invention. Examiners may then look at other applications and patents within the same group to find similar inventions. A system that has more groups—sometimes called classes—allows for finer distinctions, and prior art searches within a class will produce fewer, more focused results. Reducing the number of results an examiner needs to consider decreases the time needed for the prior art search, thereby improving efficiency. Furthermore, by grouping similar patents into classes that describe details of the inventions, classification systems allow examiners to conduct prior art searches based on the classes assigned to an application, independent of its language and wording. This can improve search quality by producing results that would not be found from a specific keyword search but are nonetheless closely related. According to EPO officials, this can be particularly helpful for finding documents in Asian languages that are otherwise difficult for EPO examiners to search, and it facilitates searches that are less dependent on the examiner’s choice of keywords. Adopting advanced search tools. EPO and JPO officials told us that examiners in both offices primarily use classification and keyword searches instead of more complex search tools. However, EPO uses some automated tools that assist in keyword-based searches or provide automated search results. 
One EPO tool provides recommended search terms in its three official languages to examiners, based on a database of examiners’ past searches. This tool can improve search efficiency by providing additional search terms, such as synonyms or translations, that the examiner may not have initially considered. Another EPO tool provides examiners with the results of an automated search based on the concepts and words drawn from the claims in applications. The results are in the form of a ranked list of potentially related documents, which provides a starting point for examiners before they begin a manual search. Hiring and training examiners to promote technical expertise. EPO and JPO hire examiners with technical expertise and provide several years of training, according to officials from these offices. EPO recruits only candidates with at least a master’s degree, and almost all JPO examiners join the office after graduate school, according to EPO and JPO officials. In addition, the aspects of prior art searches that JPO outsources are often carried out by retired engineers or technical experts, whose experience, according to JPO officials, may enhance the quality of patent application examinations. At EPO, examiners must also demonstrate that they understand English, German, and French. EPO examiners spend their first 2 years in training, alternating periods of classroom training with periods of on-the-job training. JPO examiners spend their first 2 to 4 years (depending on their academic degree and work experience) as assistant examiners and receive practical training from a supervising examiner. Thereafter, examiners in both offices have opportunities for additional technical training. According to EPO officials, EPO examiners need 3 to 4 years before they are fully trained, and 4 to 5 years in certain fields, such as biotechnology. At both offices, examiners also tend to spend their entire career with the office. 
JPO officials identified the longevity of examiners as a strength, stating that examiners are well trained and have experience that helps ensure examination consistency. Incorporating review and audit procedures. EPO and JPO have procedures to review examiners’ work before issuing or denying a patent. At EPO, applications are examined by a panel of three examiners: a first examiner, second examiner, and chair. According to EPO officials, the first examiner generally performs the bulk of the search and administrative tasks, with the other two examiners actively discussing and approving office actions. Examiners from a particular technology area are assigned to each panel randomly. An individual examiner may serve as first examiner on some applications and second examiner or chair on others. EPO’s quality management system also calls for random audits, including a minimum of two prior art search audits for each examiner per year, and chairs of the examination committees are required to record data on the quality of their examinations. JPO officials also reported taking steps to ensure that the patents it issues are of high quality. These steps include a director quality check, consultations with other examiners, and quality audits of a sample of examinations. According to JPO’s quality manual, JPO directors conduct a quality check on examiners’ decisions to grant or reject applications before the office actions are sent to the applicants. The manual also encourages examiners to consult with their directors or other examiners who may be able to provide guidance in examining an application. In 2014, examiners recorded approximately 83,000 consultations and received approximately 243,000 new applications, according to JPO officials. Experienced managers or examiners who serve as quality management officers also perform quality audits on randomly selected applications. 
According to JPO officials, the office has around 90 such officers in particular technology areas who review entire examinations and complete independent searches, and 4 wide-area officers who focus on the appropriateness of examiners’ decisions but do not conduct additional searches. According to these officials, while the number of audits is not large enough to allow tracking of individual examiner performance, it allows for statistical monitoring of examination quality overall. USPTO has taken or begun planning various actions that may help address challenges in identifying relevant prior art, but some of these actions have limitations that may hinder their effectiveness. USPTO’s actions span the following areas: (1) leveraging the work of foreign patent offices, (2) encouraging submission of prior art from third parties, (3) improving prior art search tools, (4) monitoring examiners’ prior art searches, (5) evaluating the agency’s system for determining the amount of time examiners are allotted to examine patent applications, and (6) strengthening the technical competence of the examiner corps. These actions may help address challenges related to prior art and applications and to examination policies, search tools, and human capital management. In some cases, these actions are coordinated with, similar to, or could be informed by approaches taken by EPO or JPO. USPTO has taken actions to leverage the work of foreign patent offices just as EPO and JPO have done, but the agency has experienced challenges working with EPO on a new patent classification system. USPTO is collaborating with foreign patent offices to give examiners access to examination files from those offices. 
Specifically, USPTO has (1) contributed data to information technology systems that share published information among patent offices; (2) engaged in pilot programs to collaborate with other offices during search and examination of related patent applications; and (3) adopted a new, joint classification system with EPO. These efforts may help address several of the challenges we identified relating to prior art and applications, including the large quantity of art available and the availability of prior art, particularly foreign patents and art written in foreign languages. First, USPTO contributes data to three worksharing systems that make applications, prior art citations, and examination results from foreign patent offices available to examiners and the public. One system, the Common Citation Document, provides bibliographic information, such as title and source, of patent and nonpatent prior art citations. USPTO, EPO, and JPO launched this system in 2011, and have since expanded it to include information from additional patent offices. Another system, Global Dossier, provides access to the examination history of applications, such as office actions and other correspondence. USPTO examiners gained access to Global Dossier in 2015. USPTO officials described Global Dossier as a system that allows examiners to quickly and easily view office actions from participating foreign patent offices. EPO officials described these two systems as complementary, with each system providing advantages based on the specific search task involved. A third system, PATENTSCOPE, run by the World Intellectual Property Organization, provides access to applications filed with multiple offices under the Patent Cooperation Treaty as well as patents granted by several regional and national offices. 
By allowing examiners to review prior art that examiners from other offices found useful in examining patent applications, these systems may help address the challenges related to managing the large quantity of prior art and accessing prior art that would otherwise be difficult to obtain. Second, to enhance collaboration on related patent applications filed in multiple countries, USPTO has entered into two pilot programs to jointly examine certain applications: one with JPO and one with the Korean Intellectual Property Office. Applicants who file related patent applications with USPTO and either JPO or the Korean Intellectual Property Office may request to enter the pilot—known as the Collaborative Search Pilot Program—and receive expedited review of their applications. In the pilot with JPO, USPTO examiners consider prior art from both JPO’s search results and their own search results before responding to the applicant with the first office action. In the pilot with the Korean Intellectual Property Office, examiners perform independent searches and examinations, then compare results prior to final office actions. According to USPTO officials, these collaborative efforts could improve searches for patent and nonpatent literature in foreign languages, as examiners from multiple countries will perform searches in their native languages and share the results. Third, USPTO, in collaboration with EPO, designed and implemented a new, joint patent classification system meant to improve examiners’ ability to find relevant patent literature from the United States, Europe, or other offices that adopt the system. However, USPTO and patent examiners have experienced some challenges with this effort. As discussed earlier, by grouping similar patents, classification systems allow examiners to conduct prior art searches independent of an application’s language and wording and can improve search quality by producing results that would not be found from a specific keyword search. 
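The language-independence benefit of classification-based searching can be illustrated with a toy index. The records and class code below are invented for illustration: an English keyword search misses a foreign-language document that a search on the shared class retrieves.

```python
# Toy corpus: each record carries a classification code and a title.
# The third record is in German, so an English keyword search misses it.
CORPUS = [
    {"cpc": "H01L 31/02", "title": "photovoltaic cell with textured surface"},
    {"cpc": "H01L 31/02", "title": "solar cell antireflective coating"},
    {"cpc": "H01L 31/02", "title": "Solarzelle mit strukturierter Oberflaeche"},
    {"cpc": "F16K 1/00",  "title": "valve seal assembly"},
]

def keyword_search(term):
    """Match on words in the document, so results depend on language."""
    return [r["title"] for r in CORPUS if term in r["title"]]

def class_search(cpc_code):
    """Retrieve everything filed under the same class, regardless
    of the words (or language) used in the document."""
    return [r["title"] for r in CORPUS if r["cpc"] == cpc_code]

print(len(keyword_search("solar")))      # keyword search misses the German record
print(len(class_search("H01L 31/02")))   # class search finds all three related records
```

This is why consistent classification matters: the class search only works if similar documents actually land in the same class.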
According to USPTO officials, the U.S. Patent Classification system became outdated following budget constraints in the early 2000s that undermined the agency’s efforts to update the classification system in response to technological change. After identifying a need to revise its classification system, USPTO partnered with EPO to create a new system to be jointly used by both offices. USPTO officially adopted the Cooperative Patent Classification (CPC) system in January 2015. According to USPTO officials, CPC gives examiners greater access to foreign patent literature compared to the prior system and can be expanded more easily to include new technologies. USPTO has found that classifications for some U.S. patent applications under the CPC system are inconsistent with EPO’s classifications, and USPTO examiners have reported difficulty with the system. Officials from both USPTO and EPO told us that there were disagreements between the two offices on how to classify some applications, such as on which categories to use or the number of categories to which applications are assigned. Additionally, USPTO examiners in certain technology centers reported difficulty with the CPC system in our survey. On the basis of survey responses from our random probability sample of examiners, we found that some examiners find that the CPC classifications make it somewhat or much more difficult to complete a thorough prior art search in the time allotted, as shown in table 4. For example, on the basis of our survey, CPC classifications make it somewhat or much more difficult to complete a thorough prior art search in the time allotted for an estimated 53 percent of the examiners in the Mechanical Engineering, Manufacturing, and Products technology center. 
On the other hand, an estimated 42 percent of examiners in the Computer Networks, Multiplex Communication, Video Distribution, and Security technology center find CPC classifications make it somewhat or much easier to complete a thorough prior art search in the time allotted. In response to an open-ended survey question, one examiner described their difficulty, saying that proper classification of prior art is essential to quality and efficiency of examination, that classification of prior art in the CPC system is inconsistent, and that many CPC categories still contain large amounts of prior art despite having more detailed technology categories. USPTO officials we interviewed also told us that the number of documents in some CPC categories was greater than those in comparable categories under USPTO’s previous classification system. According to USPTO officials, adopting the CPC system helps examiners give greater consideration to foreign patent documentation. As we note in this report, foreign patent and foreign-language nonpatent literature is sometimes difficult for examiners to find, according to the experts we interviewed and our survey of examiners. However, the CPC system’s usefulness to examiners depends, in part, on the consistency with which the system is applied by its users. Inconsistently applying the classification system to applications undermines its usefulness because similar applications may not be placed in the same category by USPTO and EPO. Also, misclassified applications may require examiners to familiarize themselves with new technologies, which increases the amount of time it takes for them to conduct their prior art searches and may negatively affect their confidence in the results. At USPTO, classification contractors apply an initial classification to applications before they are routed to an examiner. The examiner may revise the classification during examination. 
To help transition to the new system, USPTO provided examiners and classification contractors with training on how to classify applications and how to search for references using the CPC system, assigned lead quality experts to each technology center to provide training and assistance, and established a feedback tool for examiners to report issues with the new system. According to USPTO officials, the agency has not identified specific reasons that account for differences between its classifications and EPO’s. However, USPTO is updating its contract for initial classification services to require a higher level of expertise and is drafting a plan to assess performance under the new contract. The agreement between USPTO and EPO that supports the CPC system commits both offices to exchange best practices on classifications and aim for a high level of consistency, but it does not include a specific target or method for assessing consistency. USPTO officials told us that they are actively working with EPO to identify ways to measure and improve consistency, and that both offices have developed their own methods for measuring the consistency of CPC classifications. However, as of March 2016, EPO and USPTO had not agreed upon a measure of consistency between the offices. In the context of interagency collaborations, we have previously reported that agencies can enhance and sustain their collaborative efforts by, among other things, defining and articulating a common outcome and establishing mutually reinforcing or joint strategies. Establishing agreement with EPO on a target of consistency in classifying patent applications and a plan for monitoring consistency would help to implement this practice and would establish an internal control to support the reliability and usefulness of the CPC system. Since 2012, USPTO has had authority to receive submissions from third parties that may help identify potentially relevant prior art for published patent applications. 
This process, sometimes described as crowdsourcing, allows experts and other interested parties to share documents that they believe will aid USPTO’s examination of an application. Prior art submitted by third parties can help identify prior art that is not immediately available to examiners (e.g., because it is not available electronically or without payment) or that may be difficult to find. By potentially reducing the need to search for this prior art, these submissions may address challenges related to the quantity and availability of prior art. According to information from USPTO, third parties submitted prior art for less than 1 percent of the approximately 600,000 patent applications received per year since third-party submissions began in 2012. In addition, based on our survey, we estimate that 83 percent of examiners have rarely or never seen an application with a third-party submission. However, USPTO evaluations of third-party submissions found that they were generally useful to examiners and that the documents were sometimes not available to examiners through USPTO sources. According to USPTO officials we interviewed, an internal review of 300 randomly selected applications for which a third party submitted prior art found that examiners used a submission to reject a patent application in 20 to 25 percent of such cases. Furthermore, USPTO reviewed approximately 2,500 submissions of nonpatent literature prior art it received since September 2012, and found 753 unique, English-language prior art documents (30 percent of the submissions received) that were otherwise not available to examiners. These documents were mostly journal or other scholarly articles, but also included other types of nonpatent literature, such as marketing materials, book chapters, conference proceedings, presentations, and manuals that could not be found in the agency’s internal or external nonpatent literature sources. 
Because potential third-party submitters have to know about an application before they can submit potentially related art, USPTO has taken steps to allow third parties to more easily monitor published applications. The agency created a subscription service, whereby users will be notified when applications are published that contain user-chosen keywords. Beyond these efforts, USPTO also considered expanding use of third-party submissions of prior art by allowing examiners to use crowdsourcing to solicit such art for specific applications or topics. According to USPTO officials, the agency decided not to pursue this option, at least in part, because the agency is uncertain about its legal authority to do so. Specifically, while statute allows USPTO to collect third-party submissions, the law requires the agency to establish procedures to ensure that no protest or opposition is initiated during examination of a patent application. According to USPTO officials, it is unclear if requests for third-party submissions on a specific application could be viewed as allowing opposition. USPTO is making improvements to its prior art search tools that may help address some of the challenges examiners face in identifying relevant prior art, but USPTO has not developed a strategy to assess incorporating new sources of art into these tools over time. As part of its Enhanced Patent Quality Initiative, USPTO is procuring an automated prior art search capability that could enhance examiners’ ability to identify relevant prior art. In June 2015, USPTO requested information on a search system that uses the claims and specification within a patent application to search for patent and nonpatent literature automatically, without human involvement. In its request for information, USPTO described the intent of the system as providing a useful prior art baseline for patent examiners to begin their own searches. 
Such a system could improve the search tools available to examiners and help address the challenge of managing the quantity of potentially relevant prior art. USPTO anticipates that a pilot system will be available to a limited number of examiners, and agency officials told us that they anticipate awarding a contract in late summer 2016. The new system could expand upon the capabilities of a currently available system called the Patent Linguistic Utility Service. According to USPTO officials, the Patent Linguistic Utility Service has been in use for 20 years and has limitations that prevent it from meeting the agency’s search needs. For example, the Patent Linguistic Utility Service only searches U.S. patent literature and cannot perform the advanced search techniques contemplated for the new system. USPTO uses the current system to perform searches for about one-tenth of the agency’s incoming applications, whereas USPTO expects to use the new system for every application. Based on our survey, an estimated 12 percent of examiners find that the current system makes it somewhat or much easier to complete a thorough prior art search in the time allotted, whereas 52 percent of examiners expect that an automated pre-examination search would make it somewhat or much easier to complete a thorough prior art search in the time allotted. Moreover, USPTO’s 2014-2018 Strategic Plan includes an objective to ensure optimal information technology services, including upgrading search systems and prior art access. Toward this objective, USPTO is in the process of a major, multiyear $405 million effort to upgrade its information technology tools to provide examiners with a new system to manage all aspects of patent examination, including certain aspects of their prior art searches. 
The new system, called Patents End-to-End, will, according to USPTO’s Strategic Information Technology Plan for fiscal years 2015 through 2018, replace nearly 20 systems currently used to search patent applications. The new system will initially replicate the prior art search capabilities of USPTO’s current systems, such as EAST and WEST, which focus on U.S. patent literature and include only one source of nonpatent literature. Although searching for nonpatent literature is required by the agency’s manual for patent examiners, under the current and planned systems, examiners need to individually access and search a variety of external sources to look for nonpatent literature during their examinations. Consequently, neither the current nor planned systems provide USPTO examiners with the capability to efficiently search for prior art using a single, integrated search that includes both patent literature and multiple sources of nonpatent literature. The time it takes examiners to search the large and increasing volume of nonpatent literature and the inefficiency of having to search many different sources individually may lead examiners to conduct less thorough searches of nonpatent literature, potentially missing relevant prior art. As discussed earlier, our analysis of examiners’ survey responses and experts’ statements suggests that examiners are less likely to search for certain types of prior art, particularly foreign-language patents and nonpatent literature, from which it is more difficult to find relevant prior art. Integrating additional sources of prior art into USPTO’s search tools is one way USPTO could increase the types and sources of prior art that examiners consider, and would be similar to approaches that EPO and JPO cited as helping examiners consider a wide array of nonpatent literature sources. 
According to USPTO officials, the capabilities of the new Patents End-to-End system can be expanded in the future to include additional nonpatent literature sources. However, USPTO officials told us that, as of March 2016, the agency did not have specific plans to add additional nonpatent literature sources to its new system because of its initial focus on developing parity with the existing system. In addition, as of March 2016, USPTO had not established a documented strategy to identify and assess new sources in the future or the optimal means of providing access to them. According to federal standards for internal control, control activities are the policies, procedures, techniques, and mechanisms that help ensure that actions are taken to address risks. These activities are an integral part of an agency’s planning to achieve effective results and efficiently manage government resources, including the development and maintenance of information systems. Because information technology changes rapidly, the internal control standards note that controls must evolve to remain effective. These standards also highlight the importance of clearly documenting internal controls. Without a documented, periodically updated strategy to evaluate new sources of prior art to include in the Patents End-to-End system and a process to periodically assess this strategy, USPTO will not have the assurance that it is taking full advantage of its information technology investment to help examiners more efficiently access a variety of resources for their prior art searches. USPTO is taking steps to strengthen monitoring of examiners’ work; however, these efforts may not provide USPTO with adequate data to identify and address shortcomings with examiners’ prior art searches specific to individual technology centers and monitor search thoroughness over time. 
USPTO uses two methods to review examinations, which may help the agency monitor the effects of the challenges described above as well as the thoroughness of searches. First, USPTO’s Office of Patent Quality Assurance (OPQA) conducts audits of a random sample of office actions, and some of these audits will review examiners’ prior art searches. Prior to recent changes in their audits, reviewers in OPQA performed about 400 audits per year that focused on assessments of examiners’ prior art searches. Second, supervisory patent examiners review examiners’ work products as part of each examiner’s annual performance appraisal. USPTO supervisory patent examiners are required to review at least four office actions of each of their primary examiners per year—with additional reviews for junior examiners—and to evaluate the thoroughness of examiners’ prior art searches during these reviews. However, in recent years, the number and consistency of OPQA staff and supervisory patent examiners’ reviews of examiners’ prior art searches have not been sufficient to examine trends in the thoroughness of prior art searches at the technology center or art unit level. Using the 400 audits OPQA performed annually, USPTO officials said that they could perform statistically valid assessments of prior art searches only for the examiner corps as a whole. Further, because the supervisory patent examiners’ reviews have not been conducted or documented in a consistent manner, USPTO could not examine trends at the technology center or art unit level by combining supervisory reviews with OPQA’s reviews. Specifically, past OPQA audits have evaluated prior art searches by, for example, considering if the examiner used reasonable search terms and synonyms, and may have included an independent search to discover prior art missed by the examiner. Auditors in OPQA documented the results of their reviews, and the office used them to evaluate the examiner corps from year to year. 
In comparison, according to a USPTO training document, supervisory patent examiners should check that a thorough search was conducted, but should also adjust their reviews based on an examiner’s skills, abilities, and performance history. Supervisory examiners do not need to perform an independent search for prior art. Supervisory examiners may document errors found during their reviews to inform individual employee performance assessments but are otherwise not required to record their reviews. Because USPTO does not require supervisory examiners to document these reviews in a consistent form, the agency cannot analyze the data to examine issues with prior art search quality in specific technology centers and art units. Early in fiscal year 2016, USPTO took two steps that, if finalized, could enable monitoring of prior art search trends at the technology center or art unit level, according to agency officials. Beginning in November 2015, OPQA made changes to its review processes that, along with an increase in staff, should allow the office to perform about 12,000 audits in 2016, according to OPQA officials. In addition, in 2015, USPTO drafted a master review form that could standardize OPQA and supervisory examiner reviews with a single, consistent approach and documentation. OPQA began using a draft version of the form in November 2015, and USPTO began to pilot the form with some supervisory patent examiners in 2016. As of March 2016, USPTO had not made a decision about the final content of the review form, when supervisory patent examiners might begin to use the new form, or how the data from OPQA and supervisory examiner reviews would be used to assess examiners’ prior art searches. 
Furthermore, despite potential improvements in how OPQA and supervisory patent examiners conduct and document reviews of examiners’ prior art searches, USPTO’s ability to use these data to monitor prior art searches may be limited because USPTO (1) does not have a clear definition of what constitutes a thorough prior art search, (2) may not collect sufficient information to assess examiners’ search strategies or the sources of prior art they consider, and (3) has not established goals and indicators for improving prior art searches. USPTO requires examiners to perform a thorough prior art search and record their search results or search history, but the agency has not clearly defined what constitutes a thorough prior art search. USPTO’s manual for patent examiners requires examiners to conduct a thorough prior art search by identifying the field of search, the appropriate tools, and a search strategy. It also requires examiners to consider U.S. patents, foreign patents, and nonpatent literature, unless they can justify with reasonable certainty that no more pertinent references can be found. USPTO has no single method for assessing the thoroughness of prior art searches because, as supervisory patent examiners told us and our survey results show, examiners’ search strategies and the sources they use will differ based on the technology (see app. III). Examiners have access to guidance and training on classification-based searches, but USPTO has not documented technology-specific guidance—such as by technology center or art unit—on what constitutes a thorough prior art search. Specifically, in its definitions of CPC classes, USPTO provides suggested search areas of related classes, but these suggestions do not specify what USPTO would consider a thorough prior art search for each class or describe the sources of prior art the examiner should consider. 
As of March 2016, USPTO’s draft of the new master review form did not require OPQA or supervisory examiners to evaluate the thoroughness of an examiner’s search. Instead, the form asks if the examiner (1) searched for prior art associated with the inventor’s name, (2) searched for prior art using classification results for the application, and (3) recorded his or her search strategy. If a reviewer finds that the examiner should have made a rejection but did not, the form asks the reviewer to identify the source of prior art needed for the missed rejection. The March 2016 draft of the form does not include, as an October 2015 draft we reviewed did, questions assessing if search queries were likely to result in identification of relevant prior art. The March 2016 draft of the form also does not include questions that address whether the examiner searched foreign patent literature and U.S. and foreign nonpatent literature, as is required by the agency’s manual for patent examiners. While USPTO has taken steps to improve the quality of prior art searches, the agency has not yet established performance goals and indicators for improving prior art searches. One of the stated objectives in USPTO’s 2014-2018 Strategic Plan is to enhance accurate and consistent results in examination quality, and USPTO’s Enhanced Patent Quality Initiative also aims to achieve excellence in measuring patent quality. However, USPTO’s goals for patent quality in the strategic plan and for the quality initiative do not currently include goals or indicators assessing the thoroughness of prior art searches. Although USPTO is not required to establish goals or indicators for improving prior art searches, we have previously reported that establishing program goals and associated indicators constitutes a leading practice for planning within federal agencies. 
Among other things, the Government Performance and Results Act Modernization Act of 2010 requires that agencies (1) establish objective, quantifiable, and measurable performance goals and (2) establish performance indicators to measure progress toward each performance goal. The limitations to USPTO’s ability to monitor examiners’ prior art searches at a technology center or art unit level may hinder the agency’s ability to identify and address issues in the thoroughness of examiners’ prior art searches and quality of the agency’s examinations. Federal standards for internal control state that agencies’ monitoring of their internal controls should assess the quality of performance over time, and that internal controls should generally be designed to assure that ongoing monitoring occurs in the course of normal operations. Without consistently collecting the information needed to assess the thoroughness of prior art searches and monitoring at a technology center or art unit level, the agency cannot identify and address issues that are more prevalent in certain technology centers or art areas, such as variations in the extent to which examiners in certain areas search for foreign patents or nonpatent literature. Similarly, without greater clarity on what constitutes a thorough prior art search in different technologies, it will be difficult for USPTO to assess the adequacy of examiners’ searches, and examiners may vary in their thoroughness. Moreover, without goals and indicators for assessing the thoroughness of prior art searches, USPTO cannot reliably assess the thoroughness of its searches or improvement in searches over time. USPTO plans to evaluate changes to the agency’s system for determining the number of applications a patent examiner is expected to review within a specified period of time; however, the agency has not identified plans to assess the amount of time examiners in different technologies need to perform thorough prior art searches. 
According to USPTO officials we interviewed and a document we reviewed, the time allotted for each individual technology was determined when the examiner performance and production system was first created in the 1970s or when subsequent art units were added. USPTO adjusted the time allotted to examiners between fiscal years 2010 and 2012 and gave all patent examiners a total of 2.5 additional hours per application. However, according to USPTO officials, the agency did not evaluate art unit or technology-specific factors prior to making this change. USPTO also adjusted the time allotted in April 2016, when approximately 1,000 examiners received an additional 2.7 hours for examinations of certain technologies to address concerns related to the transition to the CPC system. According to technical comments USPTO provided on a draft of this report, these changes were based on an initial investigation of time needed to perform a thorough prior art search in these technologies. The time pressures created by USPTO’s system for determining allotted examination time have important implications for examiners’ ability to conduct thorough prior art searches. As noted above, about two-thirds of examiners responding to our survey reported having insufficient time to complete a thorough prior art search given their typical workload. Furthermore, examiners in different technologies may need different amounts of time to conduct thorough prior art searches. For example, we estimate based on our survey that a higher percentage of examiners in the Mechanical Engineering technology center find that they have much less time than needed, compared to examiners in other technology centers (see app. III). Supervisory patent examiners we interviewed told us that prior art searching is the most time-consuming aspect of patent examination. 
Based on our survey, an estimated 80 percent of examiners spent, on average, from 6 to 20 hours initially examining patent applications and completing a first office action. For the prior art search portion of these efforts, we estimate that examiners spent an average of 8 hours per application during the past quarter. If examiners feel pressure to complete their examinations quickly rather than thoroughly, it could have important implications for their prior art search efforts. In fact, as shown in figure 7, less than half of the examiners—an estimated 46 percent—were moderately or very confident that they found the most relevant prior art during the time allotted. However, when they included their additional voluntary overtime, the number of examiners who were moderately or very confident increased to an estimated 69 percent of examiners. Even with the inclusion of their overtime, though, an estimated 23 percent of examiners remained not at all confident to somewhat confident. Federal standards for internal control specify that agencies should assess the risks the agency faces, including identifying relevant risks associated with achieving the agency’s objectives, assessing a risk’s significance and the likelihood of its occurrence, and deciding what actions should be taken to manage the risk. In addition, these standards note that an agency’s operational success requires providing personnel with the right incentives for the job. In USPTO’s 2014-2018 Strategic Plan, the agency indicated its intent to evaluate changes to its system for evaluating whether examiners are completing office actions in the time allotted and to make additional modifications as needed. In November 2015, USPTO’s Commissioner for Patents affirmed the agency’s intent to examine this system and told us that he had committed to the examiners’ union to do so. 
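Survey percentages like those above are typically produced by weighting each stratum’s sample proportion by that stratum’s share of the survey population. The sketch below illustrates this standard calculation only; the stratum sizes and response counts are invented and do not reflect our actual survey data or USPTO’s examiner population.

```python
def weighted_estimate(strata):
    """Stratum-weighted population proportion from a stratified sample.

    strata: list of (population_size, sample_size, positives_in_sample)
    tuples, one per stratum (e.g., per technology center).
    """
    total = sum(n_pop for n_pop, _, _ in strata)
    return sum((n_pop / total) * (pos / n_samp)
               for n_pop, n_samp, pos in strata)

# Hypothetical strata: (examiners in center, number surveyed,
# respondents reporting they were moderately or very confident).
est = weighted_estimate([(4000, 400, 200), (2000, 200, 80)])
# Larger stratum contributes 2/3 * 0.50, smaller contributes 1/3 * 0.40.
assert abs(est - (2 / 3 * 0.50 + 1 / 3 * 0.40)) < 1e-12
```

In practice, survey estimates also carry sampling error, which is why the report’s figures are described as estimates rather than exact counts.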
USPTO has historically faced a backlog of applications, and changing the time allotted may make it more difficult for the agency to address this backlog. However, changing the time allotted to examiners may address the challenge of time pressures on examiners that we discuss above. As of May 2016, USPTO had not clarified the extent to which this evaluation will specifically consider the time needed for a thorough prior art search for different technologies. Without evaluating the technology-specific time pressures examiners face in their efforts to identify prior art in patent examination, USPTO does not have assurance that the time allotted to examiners reflects the evolving complexity of different technologies and their associated prior art. USPTO has taken several actions related to the challenge of ensuring that the agency has an examiner workforce with the technical competence—backgrounds, knowledge, or skills—needed for identifying relevant prior art; however, USPTO does not have a process to assess and measure each technology center’s progress toward closing any gaps in examiners’ technical knowledge and skills. Effective management of an organization’s workforce—its human capital—is essential; in particular, identifying critical occupations, skills, and competencies and analyzing workforce gaps are leading principles in workforce planning, as we and the Office of Personnel Management have previously identified. Further, as described in the federal standards for internal control, all personnel need to possess and maintain a level of competence that allows them to accomplish their assigned duties. Management needs to ensure that skill needs are continually assessed and that the organization is able to obtain a workforce with the skills required to achieve organizational goals. Management also needs to provide employees the right training to develop and retain skill levels to meet changing organizational needs. 
Accordingly, USPTO developed a human capital strategy, its 2015-2018 People Plan, which replaced its 2011-2015 Strategic Human Capital Plan. The agency also identified the role of patent examiner as a mission-critical occupation and identified the competencies needed for this occupation. These competencies include technical work experience/education and technical competence, which we refer to collectively as technical competence. USPTO defines technical competence as the ability to analyze and interpret written technical materials, rules, regulations, instructions, and reports. According to USPTO officials, specific technical competencies vary depending on the technology examined by each technology center and art unit. USPTO has further documented specific technical competencies for examiners in a series of “job analysis worksheets,” which describe the knowledge, skills, and abilities needed for examiner positions. Agency officials said these were based on the hiring needs derived from a hiring model created by senior management. For example, the job analysis worksheet for electrical engineering patent examiners considers their education, training, and experience in researching, analyzing, and applying scientific principles in specified technical areas, such as image analysis, power systems, and computer architecture. To develop and maintain examiners’ technical competence, USPTO offers three programs, all of which are voluntary for examiners (see table 5). In addition, USPTO officials said that technology centers provide technology-specific training through paid vendors, periodic internal meetings to discuss technical and quality issues, and technology fairs. 
USPTO does not require examiners to complete a minimum amount of ongoing technical training; however, based on information from USPTO officials, 74 percent of examiners participated in at least one of the 153 technical training events held through the agency’s Patent Examiner Technical Training Program in fiscal year 2015, with an average of 6.7 training hours per participant. The extent to which examiners participated in this technical training program varied by technology center, from a low of 48 percent to a high of 96 percent in fiscal year 2015, as well as by art unit, according to information from USPTO officials. While these are important steps that can help develop and maintain examiners’ technical competence, USPTO officials told us that the agency has not conducted an analysis to identify any gaps in examiners’ competence, either for the agency as a whole or for each technology center. USPTO strategic or human capital plans since 2007 have called for measuring the agency’s performance on closing competency/skill gaps for mission-critical occupations. USPTO officials told us that the technology centers use performance appraisal plans to individually assess examiners’ skills and competency gaps. However, doing so does not address the question of whether broader competency gaps exist at the technology center level or how any gaps can be addressed. Further, using USPTO’s job analysis worksheets in hiring does not eliminate the need for ongoing workforce assessment. In particular, as technologies change, the knowledge and skills required of examiners may evolve accordingly, and examiners may move from one technology center or art unit to another during their careers. According to USPTO officials, there are also times when patent examiners are ultimately assigned to a different technology center upon completion of their initial training than the assignment given to them when they were hired. 
In addition, USPTO has reported that examiner attrition has increased over the past 5 years. Historically, attrition has been highest among examiners who have been at USPTO for less than 5 years. As we found in 2007, attrition of less experienced examiners is a significant loss for the agency, in part because examiners require 4 to 6 years of on-the-job experience before they become fully proficient in conducting patent application reviews. When these staff leave USPTO, the agency loses as much as 5 years of training investment in them, and continuing turnover of many new patent examiners makes the overall workforce less experienced. Moreover, USPTO’s 2015-2018 People Plan notes that retaining experienced and high-performing employees may prove difficult with continued improvements to the economy. Because of these factors, the technical knowledge and skills needed in a technology center may differ from the knowledge and skills of the individual examiners assigned to that center. Without periodically evaluating gaps between the technical competence of examiners and the knowledge and skills needed in each technology center, USPTO cannot ensure that it has appropriate strategies, such as training or other efforts, to close any gaps. Moreover, as specified in the federal standards for internal control, ongoing internal control monitoring should assess the quality of performance over time. The agency’s 2015-2018 People Plan, however, does not include measures for monitoring progress in closing gaps. Without monitoring and evaluating progress toward closing any identified gaps in technical competence, USPTO may not have reasonable assurance that examiners in all technology centers have the skills and knowledge to identify relevant prior art during patent examination. 
Examiners face a number of challenges in their efforts to search for prior art, including the large volume of prior art from multiple sources to consider, unclear patent applications, difficulties identifying or accessing relevant nonpatent literature and prior art in foreign languages, and limits on the time available to search for relevant prior art, among others. Some of these challenges may reinforce one another or affect examiners in some technology centers more than others. USPTO, recognizing the importance of issuing quality patents, is taking a number of steps to improve patent quality, as we discuss in our patent quality report (GAO-16-490). USPTO is also taking a number of steps to help address challenges to completing a thorough prior art search. However, while USPTO’s steps to address prior art search challenges are promising, opportunities exist for USPTO to address limitations that may hinder the effectiveness of some of its efforts. For example, USPTO has experienced some challenges working with EPO to adopt and consistently implement a new patent classification system. Because USPTO and EPO have not identified a target level of consistency or a plan to monitor consistency between USPTO contractor and EPO classifications—important aspects of interagency collaboration—the potential benefits of adopting the new classification system may be reduced as inconsistently classified patents make it more difficult for examiners to identify relevant U.S. and foreign patent literature. Similarly, USPTO is undertaking a major investment in information technology tools for examiners to manage their work and search for prior art. The new search system USPTO plans has the potential to integrate additional sources of nonpatent literature, an approach that EPO and JPO have taken and cited as helping examiners consider a wide array of nonpatent literature sources. 
However, USPTO has not developed and documented a strategy to identify and assess the optimal means of incorporating new sources of art into these tools. Such a strategy would be consistent with federal standards for internal control. Developing and periodically updating such a strategy could help USPTO take full advantage of its investment in its new information technology tools to address some of the challenges examiners face in identifying relevant prior art. USPTO is also taking steps to increase the number of reviews of examiners’ prior art searches and the consistency with which they are conducted and documented; these steps could help address a weakness of its past efforts—an insufficient number of consistently performed reviews to enable USPTO to identify issues with examiners’ prior art searches within individual technology centers. However, USPTO still faces limitations in its ability to use the data to monitor the thoroughness of examiners’ prior art searches. First, USPTO has not clearly defined what constitutes a thorough prior art search for different technology centers, which is important given differences in the appropriate search strategies and prior art resources across technology areas. Second, USPTO’s draft review form, as of March 2016, does not include questions that are important to assess the thoroughness of examiners’ prior art searches, such as the extent to which an examiner searched for foreign patent and nonpatent literature. These are aspects of prior art search that are generally required by USPTO’s manual for patent examiners but that our survey results showed many examiners may not perform. Third, the agency has not established goals or indicators covering prior art search performance. 
Monitoring the thoroughness of examiners’ prior art searches at a level of rigor and consistency to assess trends within individual technology centers would better enable USPTO to identify issues that vary by technology center and develop appropriate responses as called for by federal internal control standards. Such monitoring would also allow USPTO to have greater assurance that examiners are searching all relevant sources of prior art, including foreign patents and nonpatent literature. Furthermore, establishing goals and indicators for prior art search would help USPTO reliably assess the thoroughness of its searches and improvements over time. Additionally, while monitoring the thoroughness of examiners’ prior art searches is important, it is essential that examiners have sufficient time to perform high-quality work. USPTO expects examiners to examine an application within a specific amount of time. This means that examiners may need to make trade-offs among the amount and types of prior art sources they consider because of the limited time available to perform their searches. However, as we also note in our patent quality report (GAO-16-490), USPTO has not recently assessed the time needed for examinations in different technologies. Without this information, the agency does not know if it is allotting examiners appropriate amounts of time to complete their work. Based on our survey, most examiners find that they do not have sufficient time for thorough prior art searches, and many were not confident in their ability to identify the most relevant prior art without working voluntary overtime. USPTO’s planned effort to evaluate the agency’s system for determining the time allotted for examination could help the agency better understand the time pressures examiners face; however, USPTO has not yet clarified the extent to which the evaluation will look at any differences among art units and technologies in the time needed for thorough prior art searches. 
Given the importance of thorough prior art searches to USPTO’s patent quality objectives, specifically assessing the time needed for a thorough prior art search for different technologies would provide USPTO with greater assurance that it will address the risks posed by the time pressures examiners experience, consistent with federal internal control standards. Finally, examiners’ technical competence is an important factor in understanding patent applications and locating relevant prior art in the time allotted. USPTO has taken steps to identify the technical competencies of various examiner positions and to offer voluntary technical training opportunities to examiners. However, the agency has not conducted an overall analysis of the technical competence of its examiners to identify potential competency gaps for each technology center. Periodically conducting such analyses to reflect evolving technologies and workforce changes, as called for by federal standards for internal control, would give USPTO greater assurance that examiners in all technology centers have the technical skills and knowledge to identify the most relevant prior art during patent examination. Furthermore, acting on the results of such analyses would help the agency define and prioritize its strategies for closing any competency gaps, such as through training or other efforts. Lastly, these steps, in conjunction with developing measures to monitor progress toward closing any gaps, would help USPTO address the effects of evolving technologies and workforce changes over time. To enhance USPTO’s ability to identify relevant prior art, we recommend that the Secretary of Commerce direct the Director of USPTO to take the following seven actions as the agency continues to implement its efforts to improve prior art searches. 
To ensure that USPTO’s collaborative efforts on classification help examiners find relevant prior art, USPTO should work with EPO to identify a target level of consistency of Cooperative Patent Classification decisions between USPTO and EPO and develop a plan to monitor consistency to achieve the target. To ensure that USPTO is able to take full advantage of its investment in new information technology tools and capabilities, USPTO should develop and periodically update a documented strategy to identify key sources of nonpatent literature for individual technology centers and to assess the optimal means of providing access to these sources, such as including them in USPTO’s search system. To improve its monitoring of prior art searches and provide USPTO the ability to examine and address trends in prior art search quality at the technology center level, USPTO should take the following three actions: Develop written guidance on what constitutes a thorough prior art search within each technology field (i.e., mechanical, chemical, electrical), technology center, art area, or art unit, as appropriate, and establish goals and indicators for improving prior art searches. Ensure that sufficient information is collected in reviews of prior art searches to assess the quality of searches at the technology center level, including how often examiners search for U.S. patents, foreign patents, and nonpatent literature. Use the audits and supervisory reviews to monitor the thoroughness of examiners’ prior art searches and improvements over time. To ensure that examiners have sufficient time to conduct a thorough prior art search, USPTO should, in conjunction with implementing the recommendation from our patent quality report to analyze the time examiners need to perform a thorough examination, specifically assess the time examiners need to conduct a thorough prior art search for different technologies. 
To ensure that examiners have the technical competence needed to complete thorough prior art searches, USPTO should assess whether the technical competencies of examiners in each technology center match those necessary; develop strategies to address any gaps identified, such as a technical training strategy; and establish measures to monitor progress toward closing any gaps. We provided a draft of this report to the Department of Commerce for its review and comment. In the Department of Commerce’s written comments, reproduced in appendix IV, the Department concurred with our recommendations. USPTO provided additional technical comments that we incorporated, as appropriate. We also provided a draft of this report to EPO and JPO for their views and comment. EPO indicated it had no comments on the draft, and JPO provided technical comments that we incorporated, as appropriate. In its written comments, the Department concurred with our recommendation to work with EPO to identify a target level of consistency for CPC decisions and develop a plan to monitor consistency. The Department noted ongoing efforts to work with EPO and other patent offices to develop objective metrics to measure the level of consistency. If finalized, such metrics and a plan to monitor consistency could allow USPTO to more fully benefit from adopting the new CPC system. In regard to our recommendation to develop and periodically update a strategy related to key sources of nonpatent literature, the Department concurred and stated that USPTO’s STIC analyzes nonpatent literature sources used by examiners and assesses them for incorporation into USPTO’s search system. While we did not receive evidence of such assessments during our review, they may be a useful step toward a strategy to periodically assess the optimal means of providing examiners access to key nonpatent literature sources and to ensure the effectiveness of USPTO’s planned Patents End-to-End search system. 
In regard to our recommendation to develop written guidance on what constitutes a thorough prior art search and establish goals and indicators for improving prior art searches, the Department concurred and said USPTO would develop technology-based search training guidance and establish enhanced goals and indicators for improving prior art searches. Specifically in regard to developing guidance on what constitutes thorough prior art searches, USPTO’s technical comments noted that our report did not recognize that USPTO’s definitions of CPC classification areas include suggestions for related classes for search. We adjusted our report to acknowledge the information USPTO provides on these suggested search areas. However, these suggested search areas do not fully establish what constitutes a thorough prior art search for each CPC class, and they do not provide guidance on examiners’ searches for nonpatent literature. In regard to our recommendations to ensure that reviews of examiners’ prior art searches collect sufficient information to assess search quality at the technology center level and use these reviews to monitor examiners’ prior art searches over time, the Department concurred. The Department said it would ensure adequate search data are collected to assess the quality of searches at the technology center level, and would investigate using audits and reviews to monitor the thoroughness of examiners’ prior art searches over time. In its written comments, the Department noted that a great deal of information on examiners’ prior art searches is potentially available from examiners’ records and reviews by primary examiners, supervisors, OPQA, and others. As we describe in our report, these reviews have not been carried out or recorded in a consistent manner in the past. Therefore, as a whole, these data do not provide USPTO with reliable information to assess trends in prior art search quality at the technology center level. 
USPTO’s recent effort to improve the consistency of these reviews is an important step to address this issue. However, additional steps are needed to ensure that USPTO’s reviews of examiners’ prior art searches collect information on whether, for example, examiners are searching patent literature, foreign patents, and nonpatent literature, as is required by the manual for patent examiners. Without collecting such information at the technology center level, USPTO may be challenged to use its reviews to monitor the thoroughness of examiners’ prior art searches and improvements over time. In regard to our recommendation on assessing the time examiners need to conduct a thorough prior art search for different technologies, the Department concurred and stated that USPTO intends to further investigate the time needed. The Department also mentioned the April 2016 changes to the time allotted for approximately 1,000 examiners, noting that they were based on an initial evaluation of the time examiners need to perform a thorough prior art search. This initial evaluation stemmed from feedback from examiners during the transition to CPC. We did not receive details of this evaluation during our review, but recognize that this action may have provided the agency with important information about the time needed for prior art search in some technologies. We continue to believe that the agency should review the time allotted for all technologies, and not only those identified through feedback related to CPC. Finally, the Department concurred with our recommendation to assess whether the technical competence of examiners in each technology center matches those necessary and develop strategies to address gaps if identified. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the appropriate congressional committees, the Secretary of Commerce, the Director of the USPTO, the Commissioner for Patents, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This report (1) describes the challenges examiners face in identifying relevant prior art, (2) describes how selected foreign patent offices have addressed challenges in identifying relevant prior art, and (3) assesses the extent to which the U.S. Patent and Trademark Office (USPTO) has taken steps to address any challenges in identifying relevant prior art. Our work for this report was coordinated with our report on patent quality. To describe the challenges examiners face in identifying relevant prior art, we reviewed relevant laws and USPTO documents and interviewed USPTO officials; a group of four supervisory patent examiners; officials of the examiners’ union, the Patent Office Professional Association; and a group of five examiners serving as union representatives. We also conducted semistructured interviews with subject matter experts (experts) active in the intellectual property field and knowledgeable about the subject of prior art. 
We selected and interviewed 18 experts to obtain views of different stakeholder communities, including four academic experts on the basis of a literature search we conducted for articles related to USPTO and prior art; four patent holders from the following technology fields: chemical technologies, electrical technologies, and mechanical technologies, based on data from USPTO that identified the companies receiving patents in these fields from 2010 through 2014; five attorneys based on their leadership (e.g., as chairs or co-chairs) of patent-related committees or sections of various bar associations; representatives of three nongovernmental organizations we identified during background research that had activities or publications related to patents; and two patent data experts who were listed in a 2015 USPTO analysis evaluating search technologies. In addition, we conducted a web-based survey of a stratified random sample of 3,336 eligible USPTO patent examiners from across 8 of the 11 technology-based subject matter groups (referred to as technology centers) into which USPTO examiners are divided. Fielded between August and November 2015, the survey was designed to collect information on challenges USPTO faces in finding relevant prior art and how USPTO might improve its prior art search capabilities. To identify our survey population, we obtained from USPTO a list of patent examiners as of May 2015. We excluded examiners from 3 technology centers, as follows. We excluded the Designs technology center because these examiners work on design patents instead of utility patents; design patents are outside the scope of this engagement and have different statutory and administrative requirements than utility patents. We excluded examiners who perform “reexamination” work and not initial patent examinations. We excluded examiners in the patent training academy because these examiners are recent hires who are in a 12-month training program. 
We also excluded examiners employed at USPTO for less than 1 year. We then defined nine strata by technology center, with one technology center separated into two strata, as described in table 6. Specifically, the Transportation, Construction, Electronic Commerce, Agriculture, National Security and License and Review technology center includes a diverse set of technologies, including transportation, construction, agriculture, and business methods. In our review, we separated the art units—subunits of a technology center—focused on electronic commerce and business methods (collectively referred to as business methods) in light of recent legislation and court decisions related to business methods. This resulted in nine strata with a target survey population totaling 7,825 eligible examiners. From this list, we drew our stratified random sample of 3,336 eligible USPTO patent examiners. We received responses from 2,669 eligible examiners for an 80 percent response rate. Because we used a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we quantified the sampling error and express our confidence in the precision of our particular sample’s results at a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Within each stratum and overall, the confidence intervals for survey results for percentages are generally within +/- 5 percentage points. The only estimates for which the confidence intervals exceed 5 percentage points are certain results for the business methods stratum. In these instances, the confidence intervals are from 5 to 6 percentage points. In this report, our figures containing survey results show the upper and lower bounds for estimates at the 95 percent confidence interval. 
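The stratified estimate and confidence interval logic described above can be sketched as follows. The stratum sizes and response counts in this example are hypothetical (not the actual survey data), and the actual analysis used survey software to account for the full sample design and nonresponse weighting.

```python
import math

# Illustrative sketch of a stratified survey estimate with a 95 percent
# confidence interval. The strata sizes and response counts below are
# hypothetical; they are not the actual GAO survey data.
strata = [
    # (population size N_h, sample size n_h, respondents answering "yes")
    (1200, 500, 310),
    (900,  400, 180),
    (700,  300, 150),
]

N = sum(N_h for N_h, _, _ in strata)

# Stratified estimate: weight each stratum's sample proportion by its
# share of the population.
p_hat = sum((N_h / N) * (yes / n_h) for N_h, n_h, yes in strata)

# Variance of the stratified estimator, with a finite population
# correction for sampling without replacement within each stratum.
var = sum(
    (N_h / N) ** 2 * (1 - n_h / N_h) * (yes / n_h) * (1 - yes / n_h) / (n_h - 1)
    for N_h, n_h, yes in strata
)

half_width = 1.96 * math.sqrt(var)  # 95 percent confidence interval
print(f"estimate: {p_hat:.3f} +/- {half_width:.3f}")
```

A stratified estimator of this form is what allows results to be generalized both overall and within each stratum, as described above.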
For other estimates in the report, we have not provided the upper and lower bounds in the text or tables; however, those details for all survey results are available in the e-supplement related to this report. The quality of survey data can also be affected by nonsampling error, which includes, for example, variations in how respondents interpret questions, respondents’ willingness to offer accurate responses, nonresponse errors, and data collection and processing errors. To minimize nonsampling error, we took several steps in developing the survey and in collecting and analyzing survey data. Specifically, in developing the survey, we worked with our survey professionals, among other things, to draft questions that were clear and unbiased. We pretested the survey in person with five USPTO staff: three examiners who are also representatives to the examiners’ union, a supervisory patent examiner, and a quality assurance specialist. We used these pretests to check that the questions were clear and unambiguous, used correct terminology, requested information that could be feasibly obtained, and were comprehensive and unbiased. We also obtained comments on the survey from USPTO management and leadership from the examiners’ union. In addition, we obtained a quality review by a separate GAO survey methodologist. Based on these activities, we made changes to the survey before administering it. Further, using a web-based survey provided several advantages, including allowing examiners to enter their responses into an electronic instrument that created an automatic record for each respondent. This eliminated the potential for errors that could have resulted if we had used a manual process to enter respondents’ data from paper surveys. 
In addition, to account for the complex sample design, we used survey software in our analyses to produce appropriate estimates and confidence intervals, and the programs we used to process and analyze the survey data were independently verified to ensure the accuracy of this work. To minimize nonresponse error, we made a variety of contacts with the sample of examiners during the survey, including follow-up e-mails to encourage responses. In addition, from October 20 through 23, 2015, we attempted to follow up via telephone calls to all 1,102 examiners who had neither completed the survey nor told us that they were no longer examiners. We also analyzed nonresponse bias to (1) assess whether any factors were associated with examiners’ propensity to respond and (2) allow our analysis of respondents to properly reflect the sampling universe of eligible examiners. To adjust the sampling weight for potential nonresponse bias, we used standard weighting class adjustments based on the sampling strata and the examiners’ years of experience at USPTO. The weighted response rate was also 80 percent. In this report and in the related e-supplement, we present the survey results using the nonresponse adjusted weights, which are generalizable to the eligible population of examiners. We analyzed the responses to the survey for all examiners, as well as responses by technology center and by the General Schedule (GS) level of the examiners. We selected three categories of GS levels—less than GS-13, GS-13, and greater than GS-13—because examiners at these levels have different responsibilities and authorities when examining patent applications. Specifically, examiners at the GS-14 level or above generally may grant a patent or reject a patent application without additional review, most examiners below the GS-14 level must have their actions reviewed by a more senior examiner, and some GS-13 examiners are in the process of becoming GS-14 examiners. 
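The weighting class adjustment described above can be illustrated with a minimal sketch: each respondent's base sampling weight is inflated by the inverse of the response rate within that respondent's weighting class (stratum crossed with a years-of-experience band), so respondents also represent their class's nonrespondents. The class counts and base weights below are hypothetical, not the actual survey data.

```python
# Illustrative sketch of a weighting-class nonresponse adjustment.
# All counts and weights below are hypothetical.

# (stratum, experience band) -> (number sampled in class, number responded)
classes = {
    ("stratum_A", "under_5_years"): (100, 80),
    ("stratum_A", "5_plus_years"):  (150, 120),
    ("stratum_B", "under_5_years"): (200, 170),
}

base_weight = {"stratum_A": 2.5, "stratum_B": 3.0}  # N_h / n_h per stratum

def adjusted_weight(stratum, experience_band):
    """Base sampling weight times the inverse of the class response rate."""
    sampled, responded = classes[(stratum, experience_band)]
    return base_weight[stratum] * sampled / responded

print(adjusted_weight("stratum_A", "under_5_years"))  # 2.5 * 100/80 = 3.125
```

Summing the adjusted weights of respondents then reproduces the eligible population total within each class, which is what makes the weighted results generalizable to the eligible population of examiners.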
Question 3 in the survey asked examiners approximately how many hours they spent per application on prior art searches in the past quarter, when preparing the original First Action on the Merits, the means by which examiners initially notify applicants about the patentability of their inventions. Examiners provided a variety of open-ended responses. Some respondents chose to provide a range, such as “5 to 10,” while others provided a single number for hours spent per application on prior art searches. Some examiners also provided responses that did not clearly indicate the approximate hours per application spent on prior art searches in the past quarter; we excluded those responses. Where possible, we coded responses to reflect a range of numbers by assigning a low and a high value; when a single number was provided, we coded that number as both the low and the high value. A second analyst verified the initial analyst’s coding. We checked the sensitivity of results overall, within strata, and within GS level categories; results were not statistically different when using the low value, midpoint (average between low and high values), or high value of a range. As a result, we present results for question 3 based on the midpoint. The 95 percent confidence intervals for these results are within +/- 10 percent of the estimates themselves, except for the estimate for technology center 2100 (Computer Architecture, Software, and Information Security). The estimate for technology center 2100 has a 95 percent confidence interval of within +/- 18 percent of the estimate itself. For some other survey questions, we also reviewed examiners’ open-ended responses on selected topics. We selected those topics based on our interviews with experts and USPTO officials as well as our analysis of closed-ended survey responses. We selected the questions for which examiners’ responses most frequently included keywords we identified for each topic. 
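The low/high/midpoint coding described above can be sketched as follows. The parsing rules here are simplified assumptions for illustration, not the actual coding protocol, which relied on analysts with independent verification of each other's coding.

```python
import re

# Illustrative sketch of coding open-ended "hours per application"
# responses into low, high, and midpoint values. The parsing rules are
# simplified assumptions, not GAO's actual coding protocol.

def code_hours(response):
    """Return (low, high, midpoint), or None if no usable number is given."""
    numbers = [float(x) for x in re.findall(r"\d+(?:\.\d+)?", response)]
    if not numbers:
        return None  # excluded: response does not clearly indicate hours
    low, high = min(numbers), max(numbers)
    return low, high, (low + high) / 2

print(code_hours("5 to 10"))       # (5.0, 10.0, 7.5)
print(code_hours("about 8"))       # (8.0, 8.0, 8.0)
print(code_hours("varies a lot"))  # None
```

A single number is coded as both the low and the high value, so its midpoint equals the number itself, matching the treatment described above.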
An analyst conducted a keyword search of all responses to the selected open-ended questions and coded responses containing the keywords. A second analyst verified the initial analyst’s coding. Our report provides some examples or summaries of examiners’ comments based on this review. Examiners’ responses to open-ended questions are not generalizable to other examiners. In addition, because we did not conduct a systematic review of all open-ended responses to our survey, we do not report the exact number of examiners who provided responses on the topics we reviewed. In addition, we conducted statistical tests of association on the results of certain survey questions; all tests were independently verified to ensure their accuracy. All tests of association were carried out at the 5 percent level of significance and were Cochran-Mantel-Haenszel (CMH) Chi-square tests of general association. The testing was carried out in SUDAAN, which is statistical software appropriate for the analysis of survey data. The null hypothesis was that there is no association between the two tested variables. When the association between two variables, conditional on a third variable, is of interest, this is referred to as the stratum-adjusted CMH test. The test statistic is Wald Chi-Square. Specifically, among other survey questions, we performed CMH tests on examiners’ responses to questions on how often they search for foreign patent literature or foreign-language nonpatent literature, and how difficult it is to obtain relevant art from these searches. We also performed CMH tests on responses to questions on how often examiners search for certain types of prior art and whether they have sufficient time to complete a thorough prior art search. The types of art included were foreign patent literature, scientific articles or presentations, foreign-language nonpatent literature, and industry-related nonpatent literature. 
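The stratum-adjusted CMH test described above can be illustrated with the unweighted textbook version for a set of 2x2 tables, one table per stratum (e.g., per technology center). This is only a sketch: the actual testing was carried out in SUDAAN, which produces design-adjusted statistics for complex survey data, so results from this simple version would differ. The counts below are hypothetical.

```python
import math

# Illustrative, unweighted Cochran-Mantel-Haenszel test for a set of
# 2x2 tables (one per stratum). Hypothetical counts; not survey data.

def cmh_test(tables):
    """tables: list of 2x2 tables [[a, b], [c, d]]; returns (chi2, p)."""
    diff_sum = 0.0  # sum over strata of (observed - expected) for cell a
    var_sum = 0.0   # sum over strata of the hypergeometric variance of a
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        r1, r2 = a + b, c + d      # row totals
        c1, c2 = a + c, b + d      # column totals
        diff_sum += a - r1 * c1 / n
        var_sum += r1 * r2 * c1 * c2 / (n ** 2 * (n - 1))
    chi2 = diff_sum ** 2 / var_sum
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi-square(1)
    return chi2, p

chi2, p = cmh_test([
    [[30, 10], [20, 20]],   # stratum 1
    [[25, 15], [15, 25]],   # stratum 2
])
print(f"CMH chi-square: {chi2:.2f}, p-value: {p:.4f}")
```

At the 5 percent level of significance used above, the null hypothesis of no association is rejected when the p-value falls below 0.05.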
We performed these tests on responses overall as well as conditional on the examiners’ technology center. To describe how selected foreign patent offices have addressed challenges in identifying relevant prior art, we first reviewed information identifying the world’s five largest patent offices. From the list of the other four offices similar in size to USPTO, we selected the European Patent Office (EPO) and the Japan Patent Office (JPO) based on comments observers made about the quality of their work. We conducted site visits with EPO and JPO, during which we interviewed officials from these offices and reviewed documents they provided. We also interviewed stakeholders knowledgeable about the offices’ practices, such as academics who study these offices. Because these individuals were not necessarily knowledgeable about USPTO practices, they are not included among the 18 experts discussed above. To assess the extent to which USPTO has taken steps to address any challenges in identifying relevant prior art, we reviewed documents from the agency related to prior art search procedures and capabilities, including ongoing and planned initiatives related to information technology resources or capabilities, the examiner workforce and related human capital management efforts, training practices, and international cooperation. We also interviewed or obtained written responses to our questions from officials from USPTO’s Office of the Commissioner for Patents, Office of Patent Examination Policy, Office of Patent Information Management, Office of International Patent Cooperation, Office of Human Resources, and Office of Patent Quality Assurance, among others, and conducted interviews with technology center directors, supervisory patent examiners, and representatives from the USPTO examiners’ union. In addition, we reviewed the results of the survey of examiners. 
In assessing USPTO’s efforts, we identified criteria in the federal standards for internal control; the Government Performance and Results Act of 1993, as amended; USPTO’s Manual of Patent Examining Procedure; and USPTO’s strategic plan. We conducted this performance audit from November 2014 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Patent Examination Process at the U.S. Patent and Trademark Office (USPTO) (Corresponds to Fig. 1)

This appendix provides details on steps in the patent examination process, including rollover information, depicted in figure 1. Based on our survey results, U.S. Patent and Trademark Office (USPTO) patent examiners in some technology centers or at some General Schedule (GS) levels face certain challenges in identifying relevant prior art more than examiners in other technology centers or at other GS levels. Tables 8 through 15 provide information related to the effects of the quantity of prior art, how often examiners search for certain types of prior art and how difficult they find it to obtain relevant art from certain types, the sufficiency of examiners’ time to complete a thorough prior art search, average time examiners spent on prior art searching and each First Action on the Merits, examiners’ overtime, how often examiners encountered misclassified patent applications, and continuing education provided by USPTO in examiners’ art areas. Because we surveyed a generalizable stratified random sample of examiners, our results provide estimates for each technology center included in our study. 
See the related e-supplement, GAO-16-478SP, for the original survey language and more detailed results. In addition to the contact named above, Chris Murray (Assistant Director), Tind Shepper Ryen, Krista Breen Anderson, Hilary Benedict, Richard Burkard, John Delicath, Alice Feldesman, Armetha Liles, Rebecca Makar, Rob Marek, Eleni Orphanides, Kelly Rubin, Monica Savoy, Ardith Spence, and Sonya Vartivarian made key contributions to this report. Intellectual Property: Assessing Factors That Affect Patent Infringement Litigation Could Help Improve Patent Quality. GAO-13-465. Washington, D.C.: August 22, 2013. U.S. Patent and Trademark Office: Performance Management Processes. GAO-10-946R. Washington, D.C.: September 24, 2010. Intellectual Property: Enhanced Planning by U.S. Personnel Overseas Could Strengthen Efforts. GAO-09-863. Washington, D.C.: September 30, 2009. U.S. Patent and Trademark Office: Hiring Efforts Are Not Sufficient to Reduce the Patent Application Backlog. GAO-08-527T. Washington, D.C.: February 27, 2008. U.S. Patent and Trademark Office: Hiring Efforts Are Not Sufficient to Reduce the Patent Application Backlog. GAO-07-1102. Washington, D.C.: September 4, 2007. Intellectual Property: Improvements Needed to Better Manage Patent Office Automation and Address Workforce Challenges. GAO-05-1008T. Washington, D.C.: September 8, 2005. Intellectual Property: Key Processes for Managing Patent Automation Strategy Need Strengthening. GAO-05-336. Washington, D.C.: June 17, 2005. Intellectual Property: USPTO Has Made Progress in Hiring Examiners, but Challenges to Retention Remain. GAO-05-720. Washington, D.C.: June 17, 2005.
USPTO examines patent applications to ensure that inventions are, among other requirements, novel and not obvious. USPTO patent examiners accomplish this by comparing applications to “prior art”—existing patents and applications in the United States and abroad, and nonpatent literature, such as scientific articles. Thorough prior art searches help ensure the validity of granted patents. GAO was asked to identify ways to improve patent quality through use of the best available prior art. This report (1) describes the challenges examiners face in identifying relevant prior art, (2) describes how selected foreign patent offices have addressed challenges in identifying relevant prior art, and (3) assesses the extent to which USPTO has taken steps to address challenges in identifying relevant prior art. GAO surveyed a generalizable stratified random sample of USPTO examiners with an 80 percent response rate; interviewed experts active in the field, including patent holders, attorneys, and academics; interviewed officials from USPTO and similarly sized foreign patent offices, and other knowledgeable stakeholders; and reviewed USPTO documents and relevant laws. Experts and U.S. Patent and Trademark Office (USPTO) examiners described a variety of challenges in identifying information relevant to a claimed invention—or “prior art”—that can affect examiners' ability to complete a thorough prior art search in the time allotted and their confidence in their search efforts. These challenges include, among others, the quantity and availability of prior art, the clarity of patent applications, and USPTO's policies and search tools. The European Patent Office and Japan Patent Office face similar challenges to USPTO in identifying prior art and use various approaches to help address them, such as leveraging work of other patent offices on related patent applications and integrating nonpatent literature into their search tools. 
In some cases, these approaches are coordinated with, similar to, or could inform USPTO actions. USPTO has taken actions to address challenges in identifying prior art, but some actions have limitations. For example, USPTO is in the process of upgrading its search tools. However, examiners will still need to access a variety of external sources to meet USPTO's requirement to consider nonpatent literature. Federal internal control standards call for controls to evolve to remain effective and USPTO officials noted that the new search system can be expanded to include more nonpatent literature as the European and Japan patent offices have done. However, USPTO does not have a documented strategy for identifying additional sources. Without such a strategy, USPTO cannot be assured that its information technology investment will improve examiners' searches. USPTO is also taking steps to augment the number of, and consistency with which, reviews of examiners' work are conducted and documented, which could improve USPTO's monitoring of examiners' work. However, USPTO still faces limitations in assessing the thoroughness of examiners' prior art searches, because, for example, the agency has not established goals or indicators for search quality and may not be collecting sufficient information on examiners' searches to assess prior art search quality. Without monitoring examiners' prior art searches, the agency cannot be assured that examiners are searching all relevant sources of prior art and may not be able to develop appropriate responses as called for by federal internal control standards. GAO is making seven recommendations, among them, that USPTO develop a strategy to identify key sources of nonpatent literature, establish goals and indicators for prior art search quality, and collect sufficient information to assess prior art search quality. USPTO concurred with GAO's recommendations.
Enlisted servicemembers can be separated from the military when they are found to be unsuitable for continued military service. According to DOD regulations, enlisted servicemembers can be separated for reasons such as misconduct, failure to overcome substance abuse, and certain mental health conditions, including a personality disorder. A personality disorder by itself does not make enlisted servicemembers unsuitable for military service. DOD requires that the disorder be severe enough that it interferes with an enlisted servicemember’s ability to function in the military. DOD and the military services require that to diagnose a personality disorder a psychiatrist or psychologist use criteria established in the Diagnostic and Statistical Manual of Mental Disorders (DSM), which was developed by the American Psychiatric Association. Similarly, in the private sector, clinicians use criteria in the DSM to diagnose a personality disorder, but in some instances, clinicians other than psychiatrists or psychologists, such as licensed clinical social workers, may make this diagnosis. Diagnosing a personality disorder in a servicemember who has served in combat can be complicated by the fact that some symptoms of a personality disorder may be similar to symptoms of combat-related mental health conditions. For example, both personality disorder and PTSD have similar symptoms of feelings of detachment or estrangement from others, and irritability. According to the American Psychiatric Association and the American Psychological Association, the only way to distinguish a personality disorder from a combat-related mental health condition, such as PTSD, is by getting an in-depth medical and personal history from the servicemember that is corroborated, if possible, by family and friends. 
According to DOD officials, the three key requirements that the military services must follow when separating an enlisted servicemember are designed to help ensure that enlisted servicemembers are separated for the appropriate reason. Documentation of compliance with these requirements is to be included in the separation packet found in the enlisted servicemember’s personnel record, as required by the military services. The separation packet is required to contain other documents related to the enlisted servicemember’s separation. According to officials from the military services, the servicemember’s immediate commander gives the separation packet to an installation official who is to review the packet to verify that the requirements for the personality disorder separation have been met. If this review verifies that the requirements have been met, the separation packet is then sent to a commander at the installation who has authority for approving a personality disorder separation for that enlisted servicemember. This commander is a higher-level officer than the enlisted servicemember’s immediate commander. A military installation may have more than one commander who has the authority to approve separations because of a personality disorder. However, each commander with separation authority approves separations only for enlisted servicemembers under his or her command. Once enlisted servicemembers have been separated from military service, they receive certificates of release from the military, which include information on the reason for separation and an official characterization of their time in the service. For enlisted servicemembers separated because of a personality disorder, their certificates of release would state that the reason for their separation was a personality disorder. 
Employers may request to see separated servicemembers’ certificates of release to verify their military service, and employers may make employment decisions based on the information they see on servicemembers’ certificates of release. Enlisted servicemembers have protections available to them when going through the separation process. All enlisted servicemembers can submit statements on their own behalf to the commander with separation authority, consult with legal counsel prior to separation, and obtain copies of the separation packet that is sent to the commander with separation authority. In addition, enlisted servicemembers with 6 or more years of military service are eligible to request a hearing before an administrative board. An administrative board hearing allows enlisted servicemembers to have legal representation, call witnesses, and speak on their own behalf in defending against the recommended separation. The board includes at least three members who, following a hearing, make a recommendation to the commander with separation authority as to whether the enlisted servicemember should be separated. Enlisted servicemembers also have protections available to them after they have been separated. They may challenge the reasons given for their separations after they have been separated from the military. Within 15 years after separation from the military, enlisted servicemembers may appeal their separation to a discharge review board. Further, enlisted servicemembers may appeal the discharge review board’s decision by applying to a board for the correction of military records. The four military installations we visited varied in their compliance with DOD’s three key requirements for personality disorder separations. For the four installations, compliance with the first requirement—to notify enlisted servicemembers of their impending separation because of a personality disorder—was at or above 98 percent. 
For the second requirement, that enlisted servicemembers must be diagnosed with a personality disorder by a psychiatrist or psychologist who determines that the personality disorder interferes with servicemembers’ ability to function in the military, the compliance rates ranged from 40 to 78 percent. Compliance ranged from 40 to 99 percent for the third requirement, that enlisted servicemembers receive formal counseling about their problem with functioning in the military. Our review of the documentation in the enlisted Navy servicemembers’ separation packets found that compliance varied by requirement. Across the four installations, the percentage of enlisted servicemembers’ separation packets that documented compliance with the notification requirement ranged from 98 to 100 percent. Of the 312 enlisted servicemembers’ separation packets included in our review, only 4 did not contain documentation that the servicemembers received notification that they were being separated because of a personality disorder. We did not assess whether the separation packets for these 4 servicemembers had documentation that indicated compliance for the remaining two key separation requirements. Across the four installations, the percentage of enlisted servicemembers’ separation packets that had documentation indicating compliance with all three parts of the second requirement—that enlisted servicemembers separated because of a personality disorder (1) be diagnosed with a personality disorder (2) by a psychiatrist or psychologist who (3) determines that the personality disorder interferes with servicemembers’ ability to function in the military—ranged from 40 to 78 percent. 
Noncompliance with this requirement occurred in two ways: either enlisted servicemembers’ separation packets did not contain the medical form used to document the three parts of this requirement, or the packets contained the medical form but documentation on the form for one or more of the three parts was missing or incorrect. Figure 1 summarizes the four installations’ compliance rates for this requirement. We found that 34 enlisted servicemembers’ separation packets did not contain a medical form, which is used to document compliance with the three parts of this requirement. We also found that of the enlisted servicemembers’ separation packets that contained a medical form, the medical form in 66 of these packets did not contain information needed to fulfill all three parts of the requirement. For example, 27 of these 66 enlisted servicemembers’ medical forms had documentation indicating that the servicemember had been diagnosed with a personality disorder, but there was also information in the medical form indicating that the diagnosis was not made by a psychiatrist or psychologist. In some of these cases, we found that the diagnosis of a personality disorder was made by a licensed clinical social worker or other type of provider, such as a battalion surgeon. We found that compliance with the requirement that enlisted servicemembers receive formal counseling about their problem with functioning in the military ranged from 40 to 99 percent. Across the four installations, we found that 42 enlisted servicemembers’ separation packets did not contain a counseling form documenting that servicemembers received formal counseling as required. As a result, these 42 servicemembers’ separation packets were noncompliant with this requirement. Figure 2 summarizes the four installations’ compliance rates for this requirement. 
Our review of the documentation in 59 enlisted Navy servicemembers’ separation packets found that compliance varied by requirement. Of the separation packets that we reviewed, 95 percent had documentation indicating that enlisted servicemembers had been notified of their impending separation because of a personality disorder. (Three enlisted servicemembers’ separation packets did not contain documentation of this requirement, and as a result, we did not assess compliance with the remaining two requirements for these three servicemembers’ separation packets.) The requirement that enlisted servicemembers be diagnosed with a personality disorder by a psychiatrist or psychologist who determines that the personality disorder interferes with servicemembers’ ability to function in the military had a compliance rate of 82 percent for the 56 remaining enlisted Navy servicemembers’ separation packets that we reviewed. Of the 56, we found that 1 enlisted Navy servicemember’s separation packet did not contain a medical form, which is used to document compliance with the three parts of this requirement. We also found that 9 of the 56 enlisted Navy servicemembers’ separation packets contained a medical form, but did not have documentation indicating compliance with all three parts of this requirement. Most of these—6—did not have documentation indicating that the diagnosis of a personality disorder was made by a psychiatrist or psychologist. For the requirement for formal counseling, 77 percent of the 56 enlisted Navy servicemembers’ separation packets contained documentation that enlisted servicemembers received formal counseling about their problem with functioning in the military. DOD does not have reasonable assurance that its key personality disorder separation requirements have been followed. 
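The Navy compliance percentages reported above follow directly from the packet counts in this section, as a quick arithmetic check shows:

```python
# Quick check of the Navy compliance rates reported above, computed
# directly from the packet counts given in this section.

reviewed = 59
missing_notification = 3
notified = reviewed - missing_notification
print(round(notified / reviewed * 100))      # 95 percent documented notification

remaining = reviewed - missing_notification  # 56 packets assessed further
no_medical_form = 1
incomplete_medical_form = 9
compliant = remaining - no_medical_form - incomplete_medical_form
print(round(compliant / remaining * 100))    # 82 percent met the diagnosis requirement
```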
DOD policy directs the military services to implement and ensure consistent administration of DOD’s requirements for separating enlisted servicemembers because of a personality disorder. In turn, according to officials in each of the military services, the military services delegate to commanders with separation authority at the military installations sole responsibility for ensuring that the requirements are followed for enlisted servicemembers under their command. According to military officials at the installations we visited, to ensure compliance with DOD’s key separation requirements, the commander with separation authority has an official at the installation examine the enlisted servicemember’s separation packet prior to the separation to determine that all requirements have been met. Military officials responsible for reviewing the separation packets at the installations we visited explained that when the official who is reviewing the separation packet discovers that a requirement for separation has not been documented, the reviewing official is supposed to take steps to resolve the situation. For example, if the official reviewing the separation packets does not find documentation that enlisted servicemembers have been formally counseled about their problem with functioning in the military, the reviewing official would verify that the formal counseling had occurred and then obtain documentation of that counseling session. Similarly, a Navy legal official told us that enlisted servicemembers’ separation packets should be reviewed to make sure that DOD’s key separation requirements have been met before the separations are approved. 
When we asked about the low rates of compliance for some of the separation requirements that we found at the Army, Air Force, and Marine Corps installations we visited and for the enlisted Navy servicemembers’ records that we reviewed, the military officials responsible for reviewing the separation packets with whom we spoke could not explain why these separations were approved if compliance with the separation requirements was not documented in the separation packet. Having given sole responsibility to the commanders with separation authority to ensure compliance, the military services have not established a way to determine whether these commanders are ensuring that DOD’s key requirements are met. Furthermore, DOD does not have reasonable assurance that its requirements for separating enlisted servicemembers because of a personality disorder have been followed. At the four installations we visited, enlisted servicemembers who were separated because of a personality disorder varied in the extent to which they selected the protections available to them during the separation process, depending on the specific protection. Based on our review of separation packets in the enlisted servicemembers’ personnel records, we found that a small proportion of enlisted servicemembers—12 percent—stated that they wanted to submit statements on their own behalf to the commander with separation authority. Of these servicemembers who submitted a statement, 18 percent submitted a statement that either questioned whether the diagnosis of a personality disorder was an accurate diagnosis or requested not to be separated. All of these servicemembers were separated. We also found that 38 percent of enlisted servicemembers at the installations we visited stated that they wanted to consult with legal counsel prior to their separation. 
According to legal officials at the installations we visited, enlisted servicemembers may seek legal counsel to discuss the implications of a personality disorder separation, seek advice on how to stay in the military, or obtain information on their eligibility for Department of Veterans Affairs’ benefits, such as health and educational benefits, after separation. For enlisted Navy servicemembers whose separation packets we reviewed, 5 percent selected to submit statements and 5 percent selected to consult with counsel prior to separation. Based on our review of enlisted servicemembers’ separation packets for the installations we visited, we found that the majority of servicemembers requested copies of their separation packets, which are sent to the commander with separation authority. Specifically, 289 of 312 enlisted servicemembers in our review at the four installations—93 percent—requested copies of their separation packets, while 66 percent of enlisted Navy servicemembers in our review requested copies of their separation packets. We also found that no enlisted servicemembers—either at the installations we visited or among the enlisted Navy servicemembers whose separation packets we reviewed—requested a hearing before an administrative board prior to their separation. Enlisted servicemembers may challenge the reason given for their separation to a discharge review board after the separation has been completed. For the four installations we visited and for enlisted Navy servicemembers, we found that three enlisted servicemembers applied to their military service’s discharge review board to challenge the reason for their separation. Of these three, one servicemember received a change to the reason for separation because the discharge review board found that the separation because of a personality disorder was unjust. 
For this servicemember, the reason for separation was changed from personality disorder to the reason of secretarial authority of that military service. The other two servicemembers who applied for a change to their reason for separation did not receive a change because the discharge review board found that the documentation present in the personnel record supported the personality disorder separation. The two servicemembers who were unsuccessful in their appeal to the discharge review board did not choose to appeal the discharge review board’s decision to the board for the correction of military records. DOD has established requirements that are intended to help ensure that enlisted servicemembers separated because of a personality disorder are separated appropriately. Failure to comply with these requirements increases the risk of enlisted servicemembers being inappropriately separated because of a personality disorder. For enlisted servicemembers, the stakes are high because a personality disorder separation can carry a long-term stigma in the civilian world. Because DOD relies on the military services to ensure compliance with its key personality disorder separation requirements, and because the military services rely solely on commanders with separation authority to ensure compliance with these requirements, there is a lack of reasonable assurance that the requirements have been met. During our review of enlisted servicemembers’ separation packets at the four military installations and for enlisted Navy servicemembers’ separation packets we reviewed, the low rates of compliance we found for some of the key personality disorder separation requirements indicate that the military services need a system, beyond relying on the commanders who are making separation decisions, to ensure compliance with DOD’s personality disorder separation requirements. Additionally, DOD needs to monitor the military services’ compliance with these requirements. 
Until this happens, DOD does not have reasonable assurance that personality disorder separations of enlisted servicemembers have been appropriate. To help ensure that DOD’s requirements for personality disorder separations are met and to help increase assurance that these separations are appropriate, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to 1. direct the Secretaries of the Army, the Air Force, and the Navy and the Commandant of the Marine Corps to develop a system to ensure that personality disorder separations are conducted in accordance with DOD’s requirements and 2. ensure that DOD monitors the military services’ compliance with DOD’s personality disorder separation requirements. In written comments on a draft of this report, DOD concurred with our recommendation that the military services develop a system to ensure that personality disorder separations are conducted in accordance with DOD’s requirements. DOD partially concurred with our recommendation that DOD monitor the military services’ compliance with DOD’s personality disorder separation requirements. DOD stated that it will strengthen policy guidance related to the military services’ standardized compliance reporting, but that it is the responsibility of the military services to ensure compliance with DOD policy. However, as we stated in our draft report, DOD’s reliance on the military services to ensure compliance with its separation requirements has not provided reasonable assurance that these requirements will be followed. We believe that the low rates of compliance we found for some of DOD’s key personality disorder separation requirements suggest the need for another system to ensure compliance with these requirements, as well as the need for DOD to monitor the military services’ compliance. 
DOD suggested that we change the title of our draft report to indicate that our subject area was personnel management and not defense health care. We have not changed the title. For an enlisted servicemember to be separated because of a personality disorder, the servicemember must first be diagnosed as having a personality disorder. Therefore, we consider our review of DOD’s separation process for servicemembers with personality disorders a review of a health care issue. In its comments, DOD also identified two inaccuracies in our description of DOD’s separation requirements. DOD pointed out that its policy does not state that a servicemember’s written notification of the impending separation has to come from a servicemember’s commander, as we indicated in our draft report. According to DOD, the policy does not specify who must provide this written notification. We revised our draft report to clarify our discussion of this requirement. However, this change did not affect the results of our compliance review because we determined compliance based on whether servicemembers’ separation packets contained a notification letter and not on who notified the servicemember. DOD also pointed out that its policy does not state that servicemembers must receive formal counseling from their supervisors about their problem with functioning in the military, as we stated in our draft report. According to DOD, the policy does not state who should provide the formal counseling to the servicemember; however, we were told by a DOD separation policy official that the counseling should be done by the servicemember’s supervisor. We revised our draft report to clarify our discussion of this requirement. This also did not change the results of our compliance review because we assessed compliance based on whether servicemembers’ separation packets contained a counseling form and not on who counseled the servicemember. DOD also provided technical comments, which we incorporated as appropriate. 
DOD’s written comments are reprinted in appendix II. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; and appropriate congressional committees and addressees. We will also provide copies to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To meet our objectives, we examined Department of Defense (DOD) separation regulations that the military services are required to follow to help ensure that enlisted servicemembers are separated for the appropriate reasons. For our review, we examined (1) the extent to which selected military installations complied with DOD requirements for separating enlisted servicemembers because of a personality disorder, (2) how DOD ensures compliance with personality disorder separation requirements by the military services, and (3) the extent to which enlisted servicemembers who are separated because of a personality disorder selected protections available to them. For this review, we included enlisted servicemembers from the Army, Air Force, Navy, and Marine Corps. We included only enlisted servicemembers because officers are able to resign at any time rather than be involuntarily separated. We included enlisted servicemembers who deployed at least once in support of Operation Enduring Freedom (OEF) or Operation Iraqi Freedom (OIF). The Coast Guard was excluded because it is under the direction of the Department of Homeland Security and represents a very small portion of servicemembers deployed in support of OEF and OIF. 
For this review, enlisted servicemembers are those in the active duty component and Reserve component—reservists and National Guard members—who were discharged or released from active duty from November 1, 2001—the first full month of combat operations for OEF—through June 30, 2007—the latest date for which data were available from DOD at the time of our review. We obtained data from DOD’s Defense Manpower Data Center (DMDC) on the number of enlisted servicemembers who had been separated from the military because of a personality disorder from November 1, 2001, through June 30, 2007. These data came from DMDC’s Active Duty Military Personnel Transaction File and DMDC’s Reserve Components Common Personnel Data Transaction File, which are databases that contain servicemember-level data, such as Social Security number, education level, date of birth, pay grade, separation program designator code, and reenlistment eligibility. The Active Duty Military Personnel Transaction File contains a transaction record for every individual entrance, separation, or reenlistment in the Army, Air Force, Navy, and Marine Corps within a specific time frame. The Reserve Components Common Personnel Data Transaction File contains this information for every individual entrance, separation, or reenlistment in the Army National Guard, Army Reserve, Air National Guard, Air Force Reserve, Navy Reserve, and Marine Corps Reserve within a specific time frame. We also asked that DMDC indicate, from its Contingency Tracking System Deployment File, if any enlisted servicemembers who were separated because of a personality disorder were also deployed, for at least one tour of duty, in support of OEF or OIF. The Contingency Tracking System Deployment File is a database that includes data elements for all servicemembers deployed in support of OEF/OIF. 
A contingency tracking system deployment is defined as a servicemember being physically located within the OEF or OIF combat zones/areas of operation, or specifically identified by the military service as directly supporting the OEF/OIF mission outside of the designated combat zone. We determined that the DMDC data were sufficiently reliable because we corroborated these data with information in the enlisted servicemembers’ personnel records. Based on our analysis of the data provided by DMDC, we selected four military installations across the Army, Air Force, and Marine Corps to visit based on whether the installation had the highest or second highest number of enlisted OEF/OIF servicemembers separated because of a personality disorder for that service. We selected one installation each from the Air Force and the Marine Corps. We selected two Army installations because at the time of our review, the Army had the majority of servicemembers deployed in support of OEF/OIF when compared with the Air Force and the Marine Corps. Among Marine Corps installations, we selected Camp Pendleton, in California, which had the second highest number of enlisted servicemembers separated because of a personality disorder during this time period. This installation was selected because the Marine Corps installation with the highest number of enlisted servicemembers separated because of a personality disorder was in the midst of a deployment cycle and requested that we not visit. The other military installations we selected were Fort Carson (Army), Colorado; Fort Hood (Army), Texas; and Davis-Monthan Air Force Base (Air Force), Arizona. In addition to the four military installations we visited, we visited Naval Base San Diego. We selected Naval Base San Diego based on DMDC’s data, which identified this naval base as having the second highest number of enlisted OEF/OIF Navy servicemembers separated because of a personality disorder from November 1, 2001, through June 30, 2007. 
During the course of our review, Navy officials at this base told us that enlisted Navy servicemembers selected for our review were transferred to the transient personnel unit at Naval Base San Diego from a Navy ship at various points in the separation process. According to a Navy official, most enlisted Navy servicemembers were diagnosed, formally counseled, and notified of their impending separation while on board a Navy ship and were transferred to the transient personnel unit at Naval Base San Diego to receive their certificates of release. Other enlisted Navy servicemembers were diagnosed, formally counseled, and notified of their impending separation while at Naval Base San Diego. We could not generalize our findings to Naval Base San Diego because some of the elements of the separation process could have been completed while these servicemembers were on board a Navy ship. Therefore, we have reported the results of our review of enlisted Navy servicemembers’ records separately from our presentation of findings based on our review of the other four military installations. To determine the extent to which the four military installations and enlisted Navy servicemembers’ records that we reviewed complied with DOD personality disorder separation requirements, we reviewed DOD’s and the military services’ enlisted administrative separation regulations and instructions to identify the key requirements for separating enlisted servicemembers because of a personality disorder. We also interviewed officials at each of the military services’ headquarters who are responsible for overseeing separation policy. 
We interviewed additional officials at each of the four selected installations and at Naval Base San Diego, including mental health providers, staff judge advocates, legal counsel with defense services, unit commanders, administrators of the Medical Evaluation Board, and officials in the transition/separation offices, to understand the administrative separation process. Additionally, to determine whether the selected installations and enlisted Navy servicemembers’ records that we reviewed complied with DOD’s requirements for separating servicemembers because of a personality disorder, we obtained and reviewed the personnel records of selected servicemembers to verify that their certificates of release indicated that they were separated because of a personality disorder. We obtained these records from each military service’s central repository, where the personnel records of servicemembers who have been separated from the military are stored. According to military service officials responsible for separation policy, the separation packet, which is found in the enlisted servicemember’s personnel record, is required to contain documents related to the separation, including documents indicating that DOD’s three key requirements have been met. For three of the installations we selected, we reviewed the personnel records of a random, generalizable sample of enlisted servicemembers who deployed at least once in support of OEF/OIF and who were separated from that installation because of a personality disorder from November 1, 2001, through June 30, 2007. For the other installation we selected, we reviewed the personnel records of all enlisted servicemembers who deployed at least once in support of OEF/OIF and who were separated from that installation because of a personality disorder from November 1, 2001, through June 30, 2007, because the number of servicemembers separated from that installation was too small to draw a random, generalizable sample. 
In total, we included 343 enlisted servicemembers’ personnel records across the four installations. Of these 343 records, 312 enlisted servicemembers’ personnel records were included in our documentation review because their personnel records contained separation packets, which we needed to review to determine compliance. Of the 31 servicemembers’ personnel records that were excluded from our review, 3 had separation packets that were illegible. The remaining 28 servicemembers’ personnel records did not have separation packets available for our review. We also obtained 94 enlisted Navy servicemembers’ personnel records from the Navy’s central repository, where the personnel records of servicemembers who have been separated are stored after they leave the Navy. We reviewed the personnel records of all enlisted Navy servicemembers who deployed at least once in support of OEF/OIF and who were separated from Naval Base San Diego because of a personality disorder from November 1, 2001, through June 30, 2007, because the number of enlisted servicemembers separated from Naval Base San Diego was too small to draw a random, generalizable sample. We reviewed these personnel records to determine if they contained separation packets, which are required by the Navy. Of the 94 enlisted Navy servicemembers, 59 servicemembers’ personnel records were included in our review because their records contained separation packets, which were needed for us to determine compliance. We excluded 35 enlisted Navy servicemembers’ personnel records from our evaluation of compliance. One enlisted servicemember’s separation packet was illegible and 34 enlisted servicemembers’ separation packets were not available for review. In our review, we determined compliance for each of the three key personality disorder separation requirements by reviewing the documentation in the enlisted servicemembers’ separation packets to see if it indicated compliance with that requirement. 
If the enlisted servicemember’s separation packet did not include documentation that the servicemember had been notified of the impending separation because of a personality disorder—one of the key requirements for a personality disorder separation—we did not assess compliance with the other two key requirements. Table 2 describes the criteria we used to determine compliance. Our review of compliance can be generalized to each of the four installations we visited, but not to the military services. For enlisted Navy servicemembers whose separation packets we reviewed, we cannot generalize to Naval Base San Diego or to the Navy. To determine how DOD ensures compliance by the military services with requirements for separating enlisted servicemembers because of a personality disorder, we reviewed DOD regulations and interviewed DOD and the military services’ officials responsible for separation policy. Additionally, we interviewed military officials responsible for legal services at the installations we visited and at Naval Base San Diego about how they ensure compliance with DOD’s key requirements for personality disorder separations. To determine the extent to which enlisted servicemembers at the four installations we visited and enlisted Navy servicemembers selected the protections available to them during the separation process, we reviewed the same 371 enlisted servicemembers’ separation packets as we reviewed to determine compliance with DOD’s personality disorder separation requirements—312 separation packets for enlisted servicemembers from the Army, Air Force, and Marine Corps installations and 59 separation packets for enlisted servicemembers from the Navy. Enlisted servicemembers are given a list of the protections available to them and select protections from this list, which are included in servicemembers’ separation packets. 
From our review of the separation packets, we determined whether enlisted servicemembers selected the protections available, but did not determine whether servicemembers received the protections that they selected. To determine the extent to which enlisted servicemembers selected protections available after being separated, we obtained information from each military service’s discharge review board and board for the correction of military records. Using this information, we determined whether the same 371 enlisted servicemembers, whose separation packets we reviewed to determine compliance with DOD’s personality disorder separation requirements, challenged the reason for their separation. We conducted this performance audit from May 2007 through August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Mary Ann Curran, Assistant Director; Sarah Burton; Christie Enders; Krister Friday; Becky Hendrickson; Martha R.W. Kelly; Lisa Motley; Jason Vassilicos; and Suzanne Worth made key contributions to this report.
At DOD, a personality disorder can render a servicemember unsuitable for service. GAO was required to report on personality disorder separations and examined (1) the extent that selected military installations complied with DOD's separation requirements and (2) how DOD ensures compliance with these requirements. GAO reviewed a sample of 312 servicemembers' records from four installations, representing the Army, Air Force, and Marine Corps, that had the highest or second highest number of Operation Enduring Freedom or Operation Iraqi Freedom servicemembers separated because of a personality disorder. The review is generalizable to the installations, but not to the services. GAO also reviewed 59 Navy servicemembers' records, but this review is not generalizable to the installation or the Navy because parts of the separation process could have been completed at multiple locations. GAO's review of enlisted servicemembers' records found that the selected military installations GAO visited varied in their documented compliance with DOD's requirements for personality disorder separations. DOD has requirements for separations because of a personality disorder, which is defined as an enduring pattern of behavior that deviates markedly from expected behavior and has an onset in adolescence or early adulthood. The three key requirements established by DOD are that enlisted servicemembers (1) must be notified of their impending separation because of a personality disorder, (2) must be diagnosed with a personality disorder by a psychiatrist or psychologist who determines that servicemembers' personality disorder interferes with their ability to function in the military, and (3) must receive formal counseling about their problem with functioning in the military. For the four installations, compliance with the notification requirement was at or above 98 percent. The compliance rates for the requirement related to the personality disorder diagnosis ranged from 40 to 78 percent. 
For the requirement for formal counseling, compliance ranged from 40 to 99 percent. GAO's review of the documentation in the enlisted Navy servicemembers' records found that compliance varied by requirement. Ninety-five percent of enlisted Navy servicemembers' records had documentation indicating that enlisted servicemembers had been notified of their impending separation because of a personality disorder. Eighty-two percent had documentation that indicated compliance with the requirement that enlisted servicemembers must be diagnosed with a personality disorder by a psychiatrist or psychologist who determines that the personality disorder interferes with servicemembers' ability to function in the military. Seventy-seven percent had documentation indicating compliance with the requirement for formal counseling. DOD does not have reasonable assurance that its key personality disorder separation requirements have been followed. DOD policy directs the military services to implement and ensure consistent administration of DOD's requirements for separating enlisted servicemembers because of a personality disorder. According to military service officials, the military services delegate to commanders with separation authority at military installations sole responsibility for ensuring that the separation requirements are followed for enlisted servicemembers under their command. When asked about the low rates of compliance for some of the separation requirements that GAO found, military officials responsible for reviewing the servicemembers' records with whom GAO spoke could not explain why these separations were approved if compliance with the separation requirements was not documented in the servicemembers' records. The military services have not established a way to determine whether the commanders with separation authority are ensuring that DOD's key separation requirements are met, and DOD does not have reasonable assurance that its requirements have been followed.
Under the LLRW Policy Act of 1980, as amended, the federal government is responsible for the disposal of LLRW owned or generated by DOE. DOE defines LLRW as all radioactive waste that does not fall within other classifications, such as spent (used) nuclear fuel and other high-level waste. Mixed waste is LLRW with hazardous components, such as lead and mercury. LLRW can include material of varying levels of radioactivity, from barely contaminated soil and debris to LLRW with enough radioactivity to require remote handling. LLRW can include items such as contaminated equipment, protective clothing, rags, and packing materials and is managed at multiple sites under a variety of contractors. (See app. I for a list of DOE sites that disposed of the majority of LLRW in fiscal years 2004 and 2005.) DOE sites typically dispose of LLRW at (1) on-site facilities, if suitable capacity is available, (2) DOE’s regional disposal facilities at the Hanford Site or the Nevada Test Site, or (3) a commercial facility. The selection of the disposal facility is based partly on the facility’s waste acceptance criteria. These criteria specify the allowable types and amounts of radioactive materials and the types of containers acceptable at the disposal facility. In 2000, we reported that DOE had not developed full life-cycle costs for its disposal facilities or established guidance to ensure that its contractors base their disposal decisions on departmentwide considerations of cost-effectiveness, among other things. We also reported in 2001 that cost analyses concerning the use of DOE’s on-site disposal facilities should be periodically updated to take into account changing economic conditions. Subsequently, the House Committee on Appropriations directed DOE to prepare an objective analysis of the life-cycle costs of LLRW disposal for various federal and commercial disposal options. 
The committee was concerned that DOE needed to include in its life-cycle cost analysis certain cost elements, such as packaging, transportation, disposal, and postclosure maintenance and surveillance. In response, in its 2002 report to Congress on life-cycle cost analysis of LLRW disposal, DOE listed among its next steps that EM sites consider cradle-to-grave costs as they make LLRW management decisions. On July 18, 2002, EM issued guidance directing each site office to develop the mechanisms necessary to ensure that contractors’ LLRW disposal decisions include the best estimate of full cradle-to-grave costs and analysis of alternatives. Several other documents on life-cycle cost analyses are also available. For example, DOE has a cost-estimating guide, developed in the mid-1990s, that provides a chapter dedicated to life-cycle cost analysis, including definitions, processes, limitations, common errors made in life-cycle cost analysis, methods, examples, and diagrams. In addition, although not directly applicable to LLRW management, guidance and manuals prepared by other federal agencies for other DOE programs may be useful to the sites in explaining life-cycle cost analysis methods. For example, the National Institute of Standards and Technology has published two documents on life-cycle cost analysis that are applicable to DOE’s Federal Energy Management Program. DOE sites prepare various types of cost analyses in making LLRW management decisions, but these analyses do not consistently use complete, current, or well-documented life-cycle cost analysis to ensure that the lowest-cost LLRW management alternatives are identified. As a result, the decisions the sites make may not take into account the most cost-effective alternative. These inconsistencies have occurred, in large part, because DOE’s guidance lacks necessary detail and its oversight of contractor practices is weak. 
Complete life-cycle cost analysis is cradle to grave and includes all costs associated with the management and disposal of LLRW. As DOE's 2002 report to Congress explained, the costs preceding disposal vary greatly and can be significantly greater than the actual cost of disposal. As a result, DOE concluded it is essential to consider predisposal costs as well as disposal costs. Table 1 shows the cost elements of a complete life-cycle cost analysis, according to DOE's 2002 report. DOE LLRW generator sites we visited did not always include all life-cycle costs—including the postclosure costs of long-term maintenance and surveillance of the disposal site—and did not always consider alternative actions when deciding how to manage and dispose of LLRW. For example, despite DOE's guidance to include all disposal costs in life-cycle cost analyses, DOE contractors at two sites—Rocky Flats, Colorado, and Paducah, Kentucky—did not consistently consider postclosure costs in the analyses supporting their LLRW disposal decisions for fiscal year 2004. In contrast, the contractor at Fernald, Ohio, prepared a life-cycle cost analysis that included estimated postclosure costs for both the Nevada Test Site and Envirocare of Utah, a commercial disposal facility. Nevada Test Site officials told us they do not include these future costs in their disposal fees because they operate on an annual appropriated funds basis; they estimated that including postclosure costs would add $2.38 per cubic foot of waste to the fee. Envirocare of Utah, on the other hand, includes the estimated postclosure costs in its disposal fees, as required by the state of Utah. Costs for certain LLRW activities vary widely among disposal sites and should be considered in preparing life-cycle cost analysis.
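The cradle-to-grave comparison described above reduces to summing the same cost elements for each disposal alternative and choosing the lowest total. The sketch below illustrates that arithmetic; the $2.38-per-cubic-foot postclosure estimate and the waste characterization figures (midpoints of the ranges EM reported in 2002) come from this report, while the packaging, transportation, and disposal rates are hypothetical placeholders, not actual facility fees.

```python
# Illustrative cradle-to-grave cost comparison for one LLRW waste stream.
# All figures are dollars per cubic meter. The characterization figures are
# midpoints of ranges EM reported in 2002; the Nevada Test Site postclosure
# estimate is $2.38 per cubic foot. Packaging, transportation, and disposal
# rates are hypothetical placeholders.

CUBIC_FEET_PER_CUBIC_METER = 35.3147

def life_cycle_cost(characterization, packaging, treatment,
                    transportation, disposal, postclosure):
    """Sum all cradle-to-grave cost elements for one alternative ($/m3)."""
    return (characterization + packaging + treatment +
            transportation + disposal + postclosure)

# Postclosure is not in the Nevada Test Site disposal fee, so a complete
# analysis must add the estimated $2.38/ft3 (about $84/m3) separately.
nts_postclosure = 2.38 * CUBIC_FEET_PER_CUBIC_METER

alternatives = {
    "Nevada Test Site":   life_cycle_cost(1265, 200, 0, 150, 300, nts_postclosure),
    "Envirocare of Utah": life_cycle_cost(455, 200, 0, 250, 400, 0),  # postclosure built into fee
}

for facility, cost in sorted(alternatives.items(), key=lambda kv: kv[1]):
    print(f"{facility}: ${cost:,.0f} per cubic meter")
```

A complete analysis would repeat this sum for every waste stream and every alternative; omitting any element, as some sites did with postclosure and characterization costs, can reverse the ranking of alternatives.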
For example, EM’s 2002 report to Congress found that costs for one predisposal cost element—waste characterization—can be higher for wastes shipped to the Nevada Test Site and the Hanford Site for disposal than for wastes sent to Envirocare of Utah. Waste characterization costs for the two DOE sites ranged from $130 to $2,400 per cubic meter, while these same costs ranged from $30 to $880 per cubic meter at Envirocare of Utah. The major factors contributing to this cost differential are (1) required procedures for accepting, handling, and disposing of LLRW with higher levels of radioactivity at the Nevada Test Site and Hanford and (2) the higher cost to the generator of characterizing wastes that are shipped in containers to the Nevada Test Site and Hanford Site for disposal. Although waste characterization is an important element in life-cycle cost analysis, the Rocky Flats contractor did not include the costs of these activities in its cost analysis. In addition, waste generators do not always include potential lower-cost alternatives when making LLRW decisions. For example, in fiscal year 2004, the Paducah contractor shipped 600 cubic meters of LLRW in trucks to Envirocare of Utah. Although in its preliminary analysis, the site contractor believed that using rail could save 25 percent in transportation costs, contractor officials indicated they did not validate these preliminary assumptions or complete a formal cost analysis of the rail option. DOE contractors’ cost analyses are not always current. Despite DOE’s 2002 recommendation that cost estimates should be revisited periodically, one DOE waste generator disposed of large volumes of LLRW in fiscal year 2004 on the basis of cost studies completed several years earlier. Specifically, the contractor at Fernald acknowledged shipping over 100,000 cubic meters of LLRW to Envirocare of Utah in fiscal year 2004, using a cost analysis completed in 1994. 
This analysis, while considering all life-cycle cost elements, had not been updated during this 10-year period to account for changes in cost elements, such as disposal rates or the costs of packaging, treatment, or transportation. For example, disposal rates charged by Envirocare of Utah can change from year to year, based on price discounts offered for larger LLRW disposal volumes. We also found that three of the five DOE sites that had expanded on-site facilities since 2002 did not complete an analysis comparing the life-cycle costs of on-site and off-site disposal alternatives. A 2001 congressional conference report requires DOE to perform such an analysis "before proceeding with any new on-site disposal cell." DOE asserts that the report language does not apply to ongoing facility development or expansion. Officials at two sites indicated they did not believe they needed to complete such a life-cycle cost analysis because the expansion of their on-site disposal facility was already accounted for in the initial facility design, completed before 2002. The third site completed a life-cycle cost analysis of the waste streams for its on-site facility; however, site officials did not complete a life-cycle cost analysis of off-site disposal because they assumed that the costs of off-site transportation and disposal would be significant enough to preclude the off-site option. Although the remaining two sites completed life-cycle cost studies comparing on-site and off-site disposal costs, these studies were not submitted to the congressional appropriations committees. DOE contractors' cost analyses are not always well documented. In some cases, we could not determine how contractors incorporated cost analyses into their disposal decisions because documentation was incomplete. According to DOE and contractor site officials at Rocky Flats, disposal decisions were at times based on noncost factors, such as schedule or safety.
For example, a 2003 cost study determined that using trucks to transport building debris to a nearby rail loading area less than 1 mile away would be more cost-effective than extending a rail line to the building. However, contractor officials told us they decided to build a rail extension to the building being demolished because the extra traffic at the site caused by trucks hauling the LLRW to the rail line could endanger the health and safety of the workers. This decision, however, was not documented. Contractor officials at Rocky Flats agreed that such LLRW management decisions were not consistently documented to show the rationale for how cost was balanced against other factors. At other sites, cost analyses were informal and not documented. For example, contractor officials responsible for LLRW disposal at Paducah told us that they made some disposal decisions informally because they believed their knowledge of the factors involved made it unnecessary to complete a formal analysis. In addition, Oak Ridge contractor officials coordinating the removal of LLRW from the site told us they did not complete a formal analysis of disposal options for each waste stream because their contract did not require such an analysis. DOE sites have not consistently used life-cycle cost analysis, in part because EM's 2002 guidance memo on life-cycle cost analysis lacks the necessary detail for how and when to use it. Consequently, each site was responsible for deciding how to incorporate cost into its LLRW management decisions. For example, although EM's guidance directed sites "to develop mechanisms necessary to establish that its LLRW disposal decisions include the best estimate of full 'cradle to grave' costs and analysis of alternatives," the guidance did not do the following:

Lay out a systematic, consistent method for (1) analyzing all cost elements or (2) comparing key alternatives within these cost elements to determine the lowest cost. Consequently, as we found, analyses often did not include cost elements that might have altered a disposal decision.

Specify when or under what circumstances sites should prepare cost analyses. As we found, some sites did not update their analyses to show that their original LLRW management decisions were still supported by current economic conditions.

Refer sites to relevant DOE orders, manuals, or other reference materials that could provide consistent direction on life-cycle cost analysis. Such references could include, for example, the DOE order for real property asset management, the DOE manual on preparing life-cycle cost estimates, Office of Management and Budget guidance for completing a cost-effectiveness analysis, and the National Institute of Standards and Technology guidance for completing life-cycle cost analysis, or portions of these documents.

Lay out how final LLRW management decisions should be documented. For example, the guidance does not explain how sites should weigh disposal costs against noncost factors such as safety and health. As we found, without adequate documentation at some of the sites we visited, it was difficult for site contractors to justify the decisions they had made.

DOE site offices were also ineffective in overseeing contractors' use of life-cycle cost analysis, which further weakened implementation of the guidance. At the sites we visited, neither DOE nor the contractors had taken identifiable steps to implement the guidance on life-cycle cost analysis. First, DOE has not incorporated life-cycle cost guidance into contracts. Most of the incentive-based contracts at the sites we visited require contractors to comply with DOE Order 430.1A on life-cycle asset management, which requires the use of life-cycle cost analysis.
However, neither that order nor its successor, DOE Order 430.1B, provides sufficient detail on life-cycle cost analysis (definitions, methods, examples, or diagrams) that would be useful in preparing such analyses. In contrast, DOE's cost-estimating guide provides a chapter dedicated to life-cycle cost analysis; this chapter includes definitions, processes, limitations, a list of common errors made in life-cycle cost analysis, methods, examples, and diagrams. However, the estimating guide is not explicitly cited in DOE Order 430.1A or 430.1B, or in the site contracts. As a result, the contractor official responsible for controlling LLRW costs at Rocky Flats, for example, could not tell us whether the contractor used DOE's cost-estimating guide, particularly its chapter on life-cycle cost analysis, in LLRW management decisions, because he was not familiar with the guide. Second, DOE field offices have not taken steps to implement guidance or to evaluate contractors' use of life-cycle cost analysis. For example, contractor officials at Paducah were not aware of EM's July 18, 2002, guidance memo on life-cycle cost analysis until we showed a copy to them at the time of our visit. In addition, in October 2002, DOE's Rocky Flats Field Office sent a memo to its contractor, Kaiser-Hill Company, concerning this EM guidance. According to the memo, the department was already aware that the contractor used licensed commercial disposal facilities and that disposal decisions considered technical acceptability, schedule, and cost benefit; the field office therefore concluded that the mechanisms to establish cost-effective disposal decisions by Kaiser-Hill were already in place and thus satisfied the intent of the EM guidance. However, we found no indication at any of the sites we visited that DOE officials had specifically assessed the contractor's use of life-cycle cost analysis in making LLRW management decisions.
When we brought our concerns about the inconsistent use of life-cycle cost analysis at the sites to EM officials, they responded that EM has relied on incentive-based contracts, rather than encouraging the use of life-cycle cost analysis, to ensure contractors are making cost-effective LLRW management decisions. Incentive-based contracts provide specific incentives for specified performance outcomes, often driven by site-specific goals and objectives in areas such as health, safety, schedule, and cost, as negotiated between DOE and the contractor. We recognize that incentive-based contracts might help DOE meet goals such as accelerated cleanup and that these contracts may, in some cases, reduce overall site costs. However, their use may not necessarily identify the lowest-cost waste management alternatives unless the contract provides this specific focus. Since the department relies on incentive-based contracts, it is critical that the contract's total estimated cost be based on, among other things, life-cycle cost analyses of LLRW management alternatives and that the contract specify the proper use of life-cycle cost analysis. Without the proper use of life-cycle cost analysis in establishing and overseeing incentive-based contracts, DOE cannot be assured that the contractor has identified the lowest life-cycle cost alternatives for LLRW management. For example, the Rocky Flats contractor, operating under an incentive-based contract, prepared various analyses of transportation alternatives from 2000 to 2003, but these analyses did not comprehensively address sitewide LLRW disposal needs because they were incomplete and not updated. Specifically, two DOE contractor draft studies in 1999 and 2000 indicated that adding rail as an alternative for shipping LLRW from Rocky Flats to off-site disposal facilities could save millions of dollars in transportation costs.
Despite this cost-saving potential, the contractor decided in 2000 to rely exclusively on trucks for all Rocky Flats LLRW shipments. Subsequently, in 2002, the contractor analyzed transportation alternatives specifically for shipping certain contaminated LLRW soil off-site. Although the analysis concluded that using rail to transport this soil alone could save up to $216,000, the contractor continued using trucks exclusively in fiscal year 2003 and most of fiscal year 2004 to transport this waste to Envirocare of Utah. In 2003, the contractor determined that the total volume of this LLRW soil would be significantly higher than previously estimated, further increasing the cost-saving potential of using rail, but nevertheless did not update or formalize the analysis. Instead, the contractor decided to send the soil by rail only after determining that it would use rail for shipping debris from an altogether separate LLRW project at Rocky Flats. In September 2004, the site began to transport the LLRW soil by rail, after it had already sent over 4,200 truck shipments of soil to Utah in fiscal years 2003 and 2004. Use of rail instead of trucks to ship the LLRW soil might have saved the site over $4 million during fiscal year 2004. Comprehensive, complete, and current analyses of transportation alternatives for sitewide LLRW disposal needs might have better identified the lowest-cost transportation alternative, thereby providing an opportunity for reducing LLRW management costs for the site. In April 2005, as part of our ongoing engagement, we briefed the Subcommittee on Energy and Water Development, House Committee on Appropriations, on the preliminary results of our work. We stated that DOE LLRW generators were not consistently using life-cycle cost analyses in their disposal decisions because of poor guidance and weak oversight.
One month later, in its report to accompany the fiscal year 2006 energy and water appropriations bill, the full Appropriations Committee emphasized its intention to have DOE use life-cycle cost analysis in LLRW management decisions. Using our preliminary findings, the committee noted its concern with the department's reliance on incentive-based contracts as a mechanism for ensuring cost-effective decision making rather than using life-cycle cost analyses, as directed. According to the committee, while contractors should pursue cost-effective cleanup activities at their sites, it is up to the federal management responsible for those contractors to provide guidance and make decisions that benefit the whole DOE complex. As such, the committee directed the Secretary of Energy to report to the committee, within 30 days of enactment of the 2006 Energy and Water Development Appropriations Act, on the specific steps the department will take to ensure that contractors use life-cycle cost analysis in considering LLRW options and that DOE maintains a viable oversight function to oversee the implementation of such guidance. The committee further recommended that a third of EM's budget for managing the cleanup program, or $82,924,000, be withheld until after the Secretary of Energy delivers a report to the committee. To better coordinate disposal efforts among sites and program offices, increase efficiencies, and minimize life-cycle costs, DOE has begun developing a national LLRW disposition strategy. Although DOE expects to begin implementing this strategy by March 2006, specific schedules have not yet been established for when the strategy will be fully in place, and it faces several significant challenges. These include developing a database that can be used to manage LLRW complexwide and overcoming organizational obstacles created by the department's varied missions.
DOE has recognized that its current approach—having each site responsible for developing mechanisms necessary to control costs—may result in cost inefficiencies and could limit its ability to meet departmentwide strategic objectives, such as accelerated waste cleanup and site closure. To overcome these problems, EM has begun developing a National Disposition Strategy, which it plans to implement in 2006. EM plans to use the strategy to evaluate predisposal, storage, treatment, and disposal options across the department. The focus of the strategy will be on DOE LLRW that is shipped off-site for disposal and on waste for which DOE currently has no treatment or disposal options. EM hopes to make specific recommendations regarding waste without treatment or disposal options, develop an LLRW database, and reduce predisposal costs. To implement a successful strategy, EM expects to integrate sites' waste disposition plans by (1) identifying and quantifying LLRW by waste category and site, (2) developing potential treatment and disposal options, and (3) identifying federal and commercial site capabilities for disposal of LLRW. DOE has not yet established specific schedules for when the strategy will be fully in place. EM plans to develop this national disposition strategy in two phases. In Phase I, EM will examine those DOE sites that now have significant quantities of EM LLRW, including Oak Ridge, Savannah River, Idaho National Laboratory, Hanford (including the Office of River Protection), Fernald, Portsmouth (in Ohio), and Paducah (in Kentucky). DOE will also take into account LLRW requiring disposal from fiscal year 2005 to about fiscal year 2035. In Phase II, EM will examine the LLRW managed by other DOE program offices, such as NNSA and the Office of Science. Efforts in Phase II will require considerable coordination among different DOE program offices.
To develop and implement its national strategy for LLRW disposition, DOE needs basic data—both current and forecasted—from individual sites on their disposition plans. However, EM does not have complete data, either for its own sites or for non-EM sites with LLRW. Although DOE continues to report progress in disposing of LLRW, the LLRW volumes it reports as needing disposal are not complete. EM's databases do not include all LLRW expected to be generated in the future as part of ongoing environmental cleanup or waste produced by non-EM generators. This information may be time-consuming and costly to obtain from the different program offices. For example, when we sought information on current and forecasted LLRW volumes from the Office of Science, NNSA, and the Office of Nuclear Energy, Science, and Technology (Nuclear Energy), only the Office of Science provided the requested information. NNSA and Nuclear Energy did not provide this information because, according to officials from each of these program offices, the information was not readily available. Regarding cost information, EM's 2002 report to Congress recommended that DOE sites consider all life-cycle costs in evaluating alternatives for LLRW management, but it cautioned that DOE's data collection and reporting processes needed to be improved to make any departmentwide cost analyses useful. EM officials stated that they will consider LLRW costs in their National Disposition Strategy. Currently, according to EM, DOE does not have uniform requirements for defining, monitoring, and reporting waste disposal costs, and sites may differ significantly in their protocols for collecting cost information. However, EM agrees that if DOE is to use life-cycle cost analysis to improve the bases for sites' disposal decisions, standardized protocols for collecting and reporting the data would have to be established.
DOE recognizes these problems and has begun to develop some information it needs to support the evolving disposition strategy. Specifically, DOE is determining (1) what data it needs; (2) whether it can use the data in existing databases or has to develop a new database; and (3) how these data should be organized in a database. EM’s ability to develop an integrated strategy for managing LLRW is further complicated by the fact that DOE has multiple program and site offices with different missions, and these offices oversee a variety of site contractors who manage waste with many different characteristics. DOE’s experience with the use of a supercompactor at its Oak Ridge site illustrates the difficulty EM faces in developing a waste disposition strategy that covers multiple program offices. At this site, EM and NNSA program offices have their own contractors that are responsible for various activities, including managing or disposing of LLRW. In 1997, DOE awarded BNFL a 6-year fixed-price contract to decontaminate and decommission three buildings once used to enrich uranium at the Oak Ridge gaseous diffusion plant. These buildings comprised more than 4.8 million square feet and housed more than 328 million pounds of material. To dispose of this waste, BNFL had constructed a supercompactor, the largest of its type in the nuclear industry. Using this supercompactor, the contractor was able to reduce the volume of several thousand tons of LLRW by 75 percent and save an estimated $100 million in LLRW management and disposal costs. Despite the supercompactor’s potential for reducing LLRW volumes and lowering costs for the other program offices at the Oak Ridge site, the contractor, with the approval of the DOE site office, decided in 2004 to dismantle the supercompactor and ship it as LLRW to Envirocare of Utah for disposal. 
According to NNSA officials at the Y-12 Plant, also located at the Oak Ridge site, they have contaminated buildings that need to be dismantled and disposed of, but neither DOE nor the contractor consulted with NNSA officials about the potential use of the supercompactor for NNSA’s ongoing compacting needs. Similarly, contractor officials at EM’s Paducah Site in Kentucky, which is about 300 miles away, stated that they might have benefited from the use of the supercompactor but were not given the opportunity to consider alternatives to its disposal. For example, Paducah had about 37,000 tons of remaining scrap metal, as of June 26, 2005, that its current on-site compactor is incapable of crushing, according to a contractor official at the Paducah site. A DOE official at the Oak Ridge site stated that it would probably not be cost-effective to ship debris to the supercompactor from other sites, and the supercompactor could not cost-effectively be relocated. However, neither DOE nor contractor officials provided any documentation of cost analysis to support this statement. Although the dismantling, shipping, and disposal of the supercompactor may have been the correct decision, DOE did not conduct a departmentwide assessment of volume reduction needs and capabilities, and the costs or potential obstacles associated with maintaining or moving the supercompactor under various LLRW management alternatives. Consequently, DOE may have missed a potential cost-saving opportunity. Oak Ridge officials told us that they are currently developing an integrated disposition plan to better coordinate LLRW management activities specifically for the Oak Ridge site. According to DOE, other integrated activities underway at Oak Ridge include, among other things, a pilot program between EM and the Office of Science to dispose of LLRW that needs no further storage or processing. 
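A complexwide database of the kind EM envisions would need, at a minimum, a consistent record for each waste stream so that current and forecasted volumes could be rolled up across sites and program offices, giving managers the departmentwide view that decisions like the supercompactor's disposition lacked. The following sketch illustrates one such record and roll-up; the field names and sample figures are hypothetical, not DOE's actual schema or data.

```python
# Minimal sketch of a consolidated waste-stream record for a complexwide
# LLRW database. Field names and sample figures are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class WasteStream:
    site: str
    program_office: str   # e.g., "EM", "NNSA", "Office of Science"
    category: str         # e.g., "LLRW", "mixed LLRW"
    volume_m3: float      # current inventory plus forecast volume
    fiscal_year: int      # year the volume needs disposition

streams = [
    WasteStream("Oak Ridge", "EM", "LLRW", 12_000, 2006),
    WasteStream("Oak Ridge", "NNSA", "LLRW", 3_500, 2008),
    WasteStream("Paducah", "EM", "mixed LLRW", 600, 2006),
]

# Roll volumes up by program office -- the departmentwide view that EM's
# current databases, which omit non-EM generators, cannot provide.
volume_by_office = defaultdict(float)
for stream in streams:
    volume_by_office[stream.program_office] += stream.volume_m3

for office, volume in sorted(volume_by_office.items()):
    print(f"{office}: {volume:,.0f} cubic meters")
```

Standardized fields of this kind are also a precondition for the uniform cost-reporting protocols EM agrees would be needed before departmentwide life-cycle cost comparisons become practical.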
As a result of lawsuits and state regulatory and legislative actions in two states—Washington and Nevada—DOE cannot currently rely on either of its federal disposal facilities—Hanford or the Nevada Test Site—to dispose of mixed LLRW. Consequently, DOE is incurring increased costs for storage and treatment. Texas may provide DOE with new disposal options, but not sooner than December 2007. Specifically:

In July 2004, Washington state asked a U.S. district court to prohibit DOE from sending LLRW from other DOE sites to Hanford for disposal. DOE voluntarily suspended LLRW shipments pending the court's decision. In May 2005, the court ruled in favor of the state, issuing a preliminary injunction prohibiting DOE from sending LLRW from other sites to Hanford for disposal. In addition, in November 2004, Washington state voters passed an initiative, now incorporated in Washington state law, that would prohibit DOE from accepting out-of-state waste until existing waste at Hanford is cleaned up. The scope and constitutionality of the initiative are currently being litigated in federal district court. DOE officials told us that the department's inability to ship mixed LLRW to Hanford from other states is increasing costs and may delay cleanup and closure plans at several sites. For example, at Rocky Flats, approximately 1,000 cubic meters of mixed LLRW, intended for disposal at Hanford, instead had to be shipped off-site for commercial treatment, temporary storage, and eventual disposal at Envirocare of Utah to avoid delaying site cleanup; the Rocky Flats contractor estimates incremental storage, handling, treatment, and disposal costs of this LLRW may exceed $8 million.

In Nevada, as of August 2005, DOE was still awaiting approval from state regulators for a permit to dispose of mixed LLRW from other sites at the Nevada Test Site. After DOE filed its permit application in December 2000, Nevada objected to DOE's planned method of disposal.
DOE is working with the state regulators to achieve a mutually agreeable resolution, and state officials indicate this issue could be resolved by the end of 2005. Until DOE receives this permit, it cannot dispose of mixed LLRW generated at other sites at the Nevada Test Site. In 2004, the Nevada Attorney General objected to DOE's plan to ship certain LLRW from DOE's Fernald, Ohio, site for disposal at the Nevada Test Site, asserting in a letter to DOE that the plan violated federal law and regulations. Pending a resolution of these issues, DOE signed a $7.5 million contract in April 2005 with a commercial facility in Texas to temporarily store 6,800 cubic meters of this LLRW for up to 2 years. Texas may provide DOE with additional storage options. In February 2005, the state approved a license amendment for Waste Control Specialists (WCS) to enlarge its LLRW storage facility. In addition, the state has begun a technical review of WCS's application for an LLRW disposal facility license, which could be issued by December 2007. Given the large volumes of LLRW generated by DOE activities, it is imperative that DOE recognize the importance of life-cycle cost analysis in identifying the most cost-effective alternatives for managing LLRW and then weighing the cost of these alternatives against noncost factors, such as safety and schedule. However, EM's July 2002 guidance on life-cycle cost analysis did not include information on how or when such an analysis should be completed. Moreover, the department has not performed oversight to ensure that contractors are completing life-cycle cost analyses. EM has elected not to encourage the use of life-cycle cost analysis in making LLRW management decisions, relying instead on incentive-based contracts to ensure contractors are making cost-effective decisions. However, we believe that this contract mechanism does not necessarily ensure that contractors identify the lowest-cost LLRW management options.
Without complete, well-documented life-cycle cost analysis, EM may be overlooking cost-saving opportunities that alternative disposal options could offer. Furthermore, this lack of transparency diminishes confidence in DOE's ability to ensure that contractors have considered life-cycle costs, regardless of whether the lowest-cost alternative is selected. Although DOE has been disposing of LLRW for decades, it still lacks an integrated national strategy for doing so. Such a departmentwide strategy is crucial for ensuring that LLRW management needs throughout DOE are identified and addressed in a cost-effective manner that also meets other departmental goals, such as timely site cleanup. Specifically, an integrated approach could help consolidate similar types of LLRW to obtain economies of scale and lower per-unit disposal costs across the complex. DOE will need to develop basic information on LLRW volumes departmentwide and by program office and to overcome the challenges posed by DOE's complex organization and multiple missions, as well as recent state actions. To promote cost-effective LLRW management, we are recommending that the Secretary of Energy take the following four actions:

Prepare comprehensive guidance on life-cycle cost analysis that, at a minimum, specifies (1) a systematic, consistent method of analyzing all cost elements or of comparing key alternatives within these cost elements to determine the lowest cost; (2) when and under what circumstances sites should prepare cost analyses; (3) relevant DOE orders, manuals, or other reference materials that should be consulted to provide consistent direction on how and when to perform the analysis; and (4) how final LLRW management decisions should be documented to demonstrate that life-cycle cost factors were adequately weighed against noncost factors, such as safety, health, or schedule.

Incorporate the revised life-cycle cost guidance into new or existing site contracts or into the departmental orders cited in those contracts.

Direct DOE site offices to oversee contractors and ensure that site contractor officials properly use life-cycle cost analyses in evaluating LLRW management alternatives.

Actively promote and monitor the development of a timely, national LLRW management strategy that is based on departmentwide data on LLRW needing disposal, and ensure that the implementation of the strategy is fully carried out.

We provided DOE with a draft of this report for review and comment. Overall, DOE generally agreed with our conclusions and thanked us for the recommendations, but disagreed with or wanted to clarify certain statements in the draft report and provided technical comments, which we incorporated as appropriate. Specifically, DOE agreed that its sites are not consistently using life-cycle cost analysis in making LLRW management decisions. It also agreed that its current guidance and oversight in the area of life-cycle cost analysis for LLRW management decisions should be strengthened and noted that it is currently reevaluating its guidance documents and their implementation. In addition, DOE expressed appreciation for our support of an effective National Disposition Strategy for LLRW management and expects this strategy to be available by March 2006. DOE also provided comments on several specific statements in our report. First, DOE disagreed with our statement on the lack of an effective, integrated approach for LLRW management at Oak Ridge and offered examples of integration, which we have incorporated into our report. Nonetheless, we found that not all LLRW activities at Oak Ridge were integrated into a sitewide LLRW management strategy. For example, NNSA officials told us their future need to decontaminate and decommission numerous buildings on the site had not yet been included in any sitewide LLRW management strategy.
Second, in its technical comments, DOE stated that our discussion of the supercompactor at Oak Ridge was misleading and did not agree that cost savings would have been realized if the supercompactor had been retained and redeployed to another site. We believe that our discussion of the supercompactor is accurate. It was intended to illustrate the difficulty EM faces in developing a waste disposition strategy that covers multiple program offices. In its technical comments, DOE told us that the contractor at Oak Ridge completed a cost analysis and decided that the supercompactor should not be reused. Nevertheless, neither DOE nor contractor officials provided us with any documentation of a cost analysis to support the dismantling and disposition of the supercompactor. DOE also told us that the contractor who owned the supercompactor and Oak Ridge management “openly solicited” other contractors in the complex about potentially reusing the supercompactor but did not find any interest. However, NNSA officials at Oak Ridge told us that neither DOE nor the contractor consulted with them about the potential use of the supercompactor, and the contractor at Paducah told us that it might have benefited from the supercompactor but was not given the opportunity to consider alternatives to its disposal. Finally, DOE stated that the lack of consistency that we found in implementing cost guidance and preparing formal documentation should not be interpreted to mean that the department’s waste disposal systems are necessarily inefficient or overly expensive, and asserted that flexibility is needed in the level of detailed cost analysis required. However, we did not conclude that the lack of consistent implementation and the lack of documentation were indicative of an inefficient or overly costly LLRW management system. 
Rather, we stated that we could not determine how contractors incorporated cost analyses into their disposal decisions because documentation did not exist or was incomplete. Conclusions cannot be drawn about the cost-effectiveness of LLRW management decisions if contractors do not adequately document their rationale for not using life-cycle cost analysis and DOE does not require them to do so. While we would agree that flexibility may be important in determining the level of cost analyses required, we believe this flexibility should be accompanied by proper documentation to support the level of analysis completed and the degree to which life-cycle cost principles were followed. DOE’s comments on our draft report are presented in appendix II. We are sending copies of the report to the Secretary of Energy, the Director of the Office of Management and Budget, and appropriate congressional committees. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff contributing to this report are listed in appendix III.

Knolls Atomic Power Laboratory/Nuclear Fuel Services
Brookhaven National Lab/Brookhaven Science Associates
Oak Ridge National Laboratory/University of Tennessee/Battelle
West Valley/West Valley Nuclear Services
Remaining generator sites (27)

In addition to the individual named above, Daniel Feehan, Doreen Feldman, Thomas Kingham, Mehrzad Nadji, Omari Norman, Christopher Pacheco, Judy Pagano, Carol Herrnstadt Shulman, and Peter Zwanzig made key contributions to this report.
In 2004, the Department of Energy (DOE) disposed of more than 378,000 cubic meters of low-level radioactive waste (LLRW)--contaminated building rubble, soil, and debris. In 2002, DOE directed its sites to use life-cycle cost analysis to manage LLRW. Life-cycle cost analysis examines the total cost of various options to manage LLRW over its life, including its packaging, treatment, transport, and disposal, to identify the lowest-cost alternative. GAO determined whether (1) DOE sites use life-cycle cost analysis to evaluate LLRW management alternatives and (2) DOE has a strategy for cost-effectively managing LLRW departmentwide, including state actions that may affect this strategy. The six DOE sites we visited, representing more than 70 percent of the LLRW disposed of by DOE during 2003 and 2004, did not consistently use life-cycle cost analysis because of weak DOE guidance and a lack of oversight of contractors' implementation of this guidance. As a result, DOE cannot ensure that lowest-cost LLRW management alternatives are identified, so that managers make decisions that fully weigh costs against noncost factors, such as safety and schedule. For example, DOE contractors at two sites did not consistently consider alternative transportation modes or postclosure maintenance and surveillance costs of disposal sites in their analyses for fiscal year 2004 disposal decisions. GAO also could not always determine how contractors used cost analyses in disposal decisions because of incomplete documentation. While DOE's guidance requires each site to develop the mechanisms necessary to ensure use of life-cycle cost analysis, it does not specify, for example, (1) a systematic, consistent method of analyzing all cost elements to determine the lowest cost, or (2) when analyses should be performed. Also, no such guidance was incorporated into site contracts, and DOE site offices had not evaluated contractors' use of life-cycle cost analysis. 
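The life-cycle cost comparison described above can be illustrated with a minimal sketch. Everything in it is hypothetical: the alternatives, the cost elements, and the dollar figures are invented for illustration and are not drawn from DOE or contractor data.

```python
# Illustrative comparison of LLRW disposal alternatives by life-cycle cost.
# All alternatives, cost elements, and dollar figures (in millions) are
# hypothetical; none of them come from DOE or contractor records.
alternatives = {
    "onsite_disposal": {
        "packaging": 1.2, "treatment": 0.8, "transport": 0.1,
        "disposal": 1.0, "postclosure_surveillance": 3.5,
    },
    "commercial_facility": {
        "packaging": 1.5, "treatment": 0.6, "transport": 1.8,
        "disposal": 2.2, "postclosure_surveillance": 0.0,
    },
    "doe_regional_facility": {
        "packaging": 1.3, "treatment": 0.7, "transport": 1.1,
        "disposal": 2.7, "postclosure_surveillance": 0.4,
    },
}

def life_cycle_cost(cost_elements):
    # Sum every element of cost over the waste's life, not just the disposal fee.
    return sum(cost_elements.values())

totals = {name: life_cycle_cost(e) for name, e in alternatives.items()}
lowest = min(totals, key=totals.get)
for name, total in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${total:.1f} million")
print(f"lowest life-cycle cost: {lowest}")
```

The sketch makes the report's point in miniature: the alternative with the cheapest disposal fee (onsite disposal) is not the lowest-cost option once postclosure maintenance and surveillance are counted, which is why comparing disposal fees alone can mislead.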
DOE has recognized that its current approach--having each site responsible for developing mechanisms necessary to control costs--may result in cost inefficiencies and may limit its ability to meet departmentwide strategic objectives. As a result, DOE plans to begin implementing a national LLRW disposition strategy by March 2006 to better coordinate disposal efforts--specific schedules have not yet been established for when the strategy will be fully in place. However, DOE faces challenges in developing and implementing this strategy. First, it needs to gather complete data on the amount of LLRW needing disposal. Second, the fact that DOE's multiple program and site offices have differing missions and oversee many contractors presents coordination challenges. For example, one program office dismantled and disposed of a supercompactor used to reduce the volume of large LLRW items without a DOE-wide assessment of LLRW compacting needs and without considering other potential cost-effective uses for the supercompactor that might benefit other DOE sites. Third, DOE faces state actions that have restricted access to disposal facilities, making it more difficult to coordinate and integrate disposal departmentwide.
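As a minimal sketch of the kind of departmentwide data such a strategy would rest on, the snippet below aggregates per-site LLRW volumes by program office. The sites, program offices, and cubic-meter figures are hypothetical, invented only to show the shape of the aggregation.

```python
from collections import defaultdict

# Hypothetical LLRW volumes (cubic meters) by site and program office;
# the figures are invented for illustration, not actual DOE data.
inventory = [
    ("Site A", "EM", 120_000),
    ("Site B", "EM", 95_000),
    ("Site C", "NNSA", 40_000),
    ("Site D", "NNSA", 25_000),
    ("Site E", "Office of Science", 18_000),
]

by_office = defaultdict(int)
for site, office, volume in inventory:
    by_office[office] += volume

total = sum(by_office.values())
for office, volume in sorted(by_office.items(), key=lambda kv: -kv[1]):
    print(f"{office}: {volume:,} m^3 ({volume / total:.0%} of total)")
```

A rollup like this is the precondition for the consolidation the report describes: only with volumes visible across program offices can DOE identify where similar waste streams could share a disposal path and obtain economies of scale.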
Missile defense is important because at least 25 countries now possess or are acquiring sophisticated missile technology that could be used to attack the United States, deployed troops, friends, and allies. MDA’s mission is to develop and field an integrated, layered BMDS capable of defending against enemy ballistic missiles launched from all ranges and during all phases of the missiles’ flight. DOD has spent and continues to spend large sums of money to defend against this threat. Since the mid-1980s, about $107 billion has been spent, and over the next 5 years, another $49 billion is expected to be invested. While the initial set of BMDS assets was fielded during 2004-2005, much of the technical and engineering foundation was laid by this prior investment. DOD also expects to continue investing in missile defense for many more years as the system evolves into one that can engage an enemy ballistic missile launched from any range during any phase of the missile’s flight. To enable MDA to field and enhance a missile defense system quickly, the Secretary of Defense, in 2002, directed a new acquisition strategy. The Secretary’s strategy included removing the BMDS program from DOD’s traditional acquisition process until a mature capability was ready to be handed over to a military service for production and operation. Therefore, development of the BMDS program is not segmented into concept refinement, technology development, and system development and demonstration phases, as other major defense acquisition programs are. Instead, MDA initiates one development phase that incorporates all acquisition activities and that is known simply as research and development. MDA also has approval to use research and development funds, rather than procurement funds, to acquire assets that could be made available for operational use. To carry out its mission, MDA is fielding missile defense capabilities in 2-year increments known as blocks. 
The first block—Block 2004—fielded a limited initial capability that included early versions of GMD, Aegis BMD, PAC-3, and C2BMC. This was the capability that was put on alert status in 2006. MDA formally began a second BMDS block on January 1, 2006, that will continue through December 31, 2007. This block is expected to provide protection against attacks from North Korea and the Middle East. During the 2-year block timeframe, MDA is focusing its program of work on the enhancement and fielding of additional quantities of the GMD, Aegis BMD, and C2BMC elements, as well as fielding a Forward-Based X-Band radar that is part of the Sensors element. When MDA defined the block in March 2005, shortly after submitting its fiscal year 2006 budget request to Congress, it also included three other elements—Airborne Laser (ABL), Space Tracking and Surveillance System (STSS), and Terminal High Altitude Area Defense (THAAD)—that are primarily developmental in nature. According to MDA, these elements were included in the block even though they were not expected to be operational until future blocks because the elements offered some emergency capability during the block timeframe. In March 2006, MDA removed THAAD from Block 2006. According to MDA, this action better aligned resources and fielding plans. The development of two other elements—Multiple Kill Vehicle (MKV) and Kinetic Energy Interceptor (KEI)—also continued in fiscal year 2006, but these elements were not considered part of Block 2006 because, according to MDA officials, the elements provide no capability—emergency or operational—during the block. The bulk of the funding that MDA requests for the BMDS each fiscal year is for the development, fielding, and sustainment of BMDS elements. For example, in fiscal year 2006, funding for the nine BMDS elements collectively accounted for 72 percent of MDA’s research and development budget. 
MDA requests funds for each of these elements, with the exception of C2BMC and THAAD, under separate budget line items. In addition, MDA issues separate contracts for each of the nine elements. Prior to beginning each new block, MDA establishes and submits block goals to Congress. These goals present the business case for the new block. MDA presented its Block 2006 goals to Congress in March 2005, shortly after submitting its fiscal year 2006 budget. At that time, MDA told Congress that the agency expected to field the following assets: up to 15 GMD interceptors, an interim upgrade of the Thule Early Warning Radar, a Forward-Based X-Band radar, 19 Aegis BMD missiles, 1 new Aegis cruiser for the missile defense mission, 4 new Aegis destroyers capable of providing long-range surveillance and tracking, and 8 Aegis destroyers upgraded for the engagement mission. MDA’s cost goal for the development of the six elements that compose the block, the manufacture of assets being fielded, and logistical support for fielded assets was $19.3 billion. MDA also notified Congress of the Block 2006 performance goals established for the BMDS. These goals were composed of numerical values for the probability of engagement success, the land area from which the BMDS could deny a launch, and the land area that the BMDS could defend. Fiscal year testing goals were also established by element program offices, but these goals were not formally reported to Congress. We examined numerous documents and held discussions with agency officials. In determining the elements’ progress toward Block 2006 goals, we looked at the accomplishments of six BMDS elements—ABL, Aegis BMD, BMDS Sensors, C2BMC, GMD, and STSS—that compose the Block 2006 configuration. Our work included examining System Element Reviews, test plans and reports, production plans, and Contract Performance Reports. We also interviewed officials within each element program office and within MDA functional offices. 
In assessing whether MDA’s flexibility impacts BMDS oversight and accountability, we examined documents such as those defining MDA’s changes to Block 2006 goals, acquisition laws for major DOD programs, and BMDS policy directives issued by the Secretary of Defense. We examined the current status of MDA’s quality assurance program by visiting various contractor facilities and holding discussions with MDA officials, such as officials in the Office of Quality, Safety, and Mission Assurance. We performed our work from June 2006 through March 2007 in accordance with generally accepted government auditing standards. MDA made progress during fiscal year 2006, but it will not achieve the goals it set for itself in March 2005. One year after establishing its Block 2006 goals, the agency informed Congress that it planned to field fewer assets, reduce performance goals, and increase the block’s cost goal. It is also likely that in addition to fielding fewer assets, other Block 2006 work will be deferred to offset growing contractor costs. MDA is generally on track to meet its revised quantity goals, but the performance of the BMDS cannot yet be fully assessed because there have been too few flight tests conducted to anchor the models and simulations that predict overall system performance. Several elements continue to experience technical problems that pose questions about the performance of the fielded system and could delay the enhancement of future blocks. In addition, the Block 2006 cost goal cannot be reconciled with actual costs because work travels to and from other blocks and individual element program offices report costs inconsistently. During the first year of Block 2006, MDA continued to improve the BMDS by enhancing its performance and fielding additional assets. In addition, the BMDS elements achieved some notable test results. For example, the GMD element completed its first successful intercept attempt since 2002. 
The test was also notable because it was an end-to-end test of one engagement scenario, the first such test that the program has conducted. Also, the Aegis BMD element conducted a successful intercept test of its more capable Standard Missile-3 design that is being fielded for the first time during Block 2006. In March 2006, soon after the formal initiation of Block 2006, MDA announced that events such as hardware delays, technical challenges, and budget cuts were causing the agency to field fewer assets than originally expected. MDA’s goal now calls for fielding 3 fewer GMD interceptors; deferring the upgrade of the Thule radar until Block 2008, when it can be fully upgraded; producing 4 fewer Aegis BMD missiles; upgrading 1 less Aegis destroyer for the engagement mission; and delivering 3 C2BMC Web browsers rather than the more expensive C2BMC suites. With the exception of the GMD interceptors, MDA is on track to deliver the revised quantities. The GMD program planned to emplace 8 interceptors during calendar year 2006, but was only able to emplace 4. Program officials told us that the contractor has increased the number of shifts that it is working and that this change will accelerate deliveries. However, to meet its quantity goal, the GMD program will have to more than double its interceptor emplacement rate in 2007. MDA also reduced the performance expected of Block 2006 commensurate with the reduction in assets. However, insufficient data are available to determine whether MDA is on track to meet the new goal. Although the GMD test program has achieved some notable results, officials in DOD’s Office of the Director of Operational Test and Evaluation told us that the element has not completed sufficient tests to provide a high level of confidence that the BMDS can reliably intercept intercontinental ballistic missiles. 
Further testing is needed as well to confirm that GMD can use long-range tracking data developed by Aegis BMD to prepare—in real time—a weapon system task plan for GMD interceptors. Delayed testing and technical problems may also impact the performance of the current and future configurations of the BMDS. For example, the performance of the Block 2006 configuration of the Aegis BMD missile is unproven because design changes in the missile’s solid divert and attitude control system and one burn pattern of the third stage rocket motor were not flight-tested before they were cut into the production line. The current configuration of the GMD interceptor also continues to struggle with an anomaly that has occurred in each of the element’s flight tests. The anomaly has not yet prevented the program from achieving its primary test objectives, but neither its source nor a solution has been clearly identified or defined. The reliability of some GMD interceptors remains uncertain as well because inadequate mission assurance/quality control procedures may have allowed less reliable or inappropriate parts to be incorporated into the manufacturing process. Program officials plan to introduce new parts into the manufacturing process, but not until interceptor 18. MDA also plans to retrofit the previous 17 interceptors, but not until fiscal year 2009. In addition to the performance problems with elements being fielded, the ABL element that is being developed to enhance a future BMDS configuration experienced technical problems with its Beam Control/Fire Control component. These problems have delayed a lethality demonstration that is needed to demonstrate the element’s leading-edge technologies. ABL is an important element because if it works as desired, it will defeat enemy missiles soon after launch, before decoys are released to confuse other BMDS elements. 
MDA plans to decide in 2009 whether ABL or KEI, whose primary boost phase role is to mitigate the risk in the ABL program, will become the BMDS boost phase capability. While MDA reduced Block 2006 quantity and performance goals, it increased the block’s cost goal from about $19.3 billion to approximately $20.3 billion. The cost increases were caused by the addition of previously unknown operations and sustainment requirements, realignment of the GMD program to support a successful return to flight, realignment of the Aegis BMD program to address technical challenges and invest in upgrades, and preparations for round-the-clock operation of the BMDS. Although MDA is expected to operate within its revised budget of $20.3 billion, the actual cost of the block cannot be reconciled with the cost goal. To stay within its Block 2004 budget, MDA shifted some of that block’s work to Block 2006 and is counting it as a cost of Block 2006, which overstates Block 2006 cost. In addition, MDA officials told us that it is likely that some Block 2006 work will be deferred until Block 2008 to cover the $478 million fiscal year 2006 budget overruns experienced by five of the six element prime contractors. If MDA reports the cost of deferred work as it has in the past, determining the actual cost of Block 2006 will be further complicated. Another factor complicating the reconciliation of Block 2006 cost is that the elements report block cost inconsistently. Some elements appropriately include costs that the program will incur to reach full capability, while others do not. Because the BMDS has not formally entered the system development and demonstration phase of the acquisition cycle, it is not yet required to apply several important oversight mechanisms contained in certain acquisition laws that, among other things, provide transparency into program progress and decisions. This has enabled MDA to be agile in decision making and has facilitated fielding an initial BMDS capability quickly. 
On the other hand, MDA operates with considerable autonomy to change goals and plans, making it difficult to reconcile outcomes with original expectations and to determine the actual cost of each block and of individual operational assets. Over the years, a framework of laws has been created that make major defense acquisition programs accountable for their planned outcomes and cost, give decision makers a means to conduct oversight, and ensure some level of independent program review. The application of many of these laws is triggered by a program’s entry into system development and demonstration. To provide accountability, once major defense programs cross this threshold, they are required by statute to document program goals in an acquisition program baseline that, as implemented by DOD, must be approved by a higher-level DOD official prior to the program’s initiation. The baseline provides decision makers with the program’s best estimate of the total cost for an increment of work, average unit costs for assets to be delivered, the date that an operational capability will be fielded, and the weapon’s intended performance parameters. Once approved, major acquisition programs are required to measure their program against the baseline, which is the program’s initial business case, or obtain the approval of a higher-level acquisition executive before making significant changes. Programs are also required to regularly provide detailed program status information to Congress, including information on cost, in Selected Acquisition Reports. In addition, Congress has established a cost-monitoring mechanism that requires programs to report significant increases in unit cost measured from the program baseline. Other statutes provide for independent program verifications and place limits on the use of appropriations. For example, 10 U.S.C. 
§ 2434 prohibits the Secretary of Defense from approving system development and demonstration unless an independent estimate of the program’s life-cycle cost has been conducted. In addition, 10 U.S.C. § 2399 requires completion of initial operational test and evaluation before a program can begin full-rate production. These statutes ensure that someone external to the program examines the likelihood that the program can be executed as planned and will yield a system that is effective and suitable for combat. The use of an appropriation is also controlled so that it will not be used for a purpose other than the one for which it was made, except as otherwise provided by law. Research and development appropriations are typically specified by Congress to be used to pay the expenses of basic and applied scientific research, development, test, and evaluation. On the other hand, procurement appropriations are, in general, to be used for production and manufacturing. In the 1950s, Congress established a policy that items being purchased with procurement funds be fully funded in the year that the item is procured. This is meant to prevent a program from incrementally funding the purchase of operational systems. Full funding ensures that the total procurement costs of weapons and equipment are known to Congress up front and that one Congress does not put the burden on future Congresses of deciding whether they should appropriate additional funds or expose weapons under construction to uneconomic start-up and stop costs. The flexibility to defer application of specific acquisition laws has benefits. MDA can make decisions faster than other major acquisition programs because it does not have to wait for higher-level approvals or independent reviews. MDA’s ability to quickly field a missile defense capability is also improved because assets can be fielded before all testing is complete. 
MDA considers the assets it has fielded to be developmental assets and not the result of the production phase of the acquisition cycle. Additionally, MDA enjoys greater flexibility than other programs in the use of its funds. Because MDA uses research and development funds to manufacture assets, it is not required to fully fund those assets in the year of their purchase. Therefore, as long as its annual budget remains fairly level, MDA can request funds to address other needs. On the other hand, the flexibilities granted MDA make it more difficult to conduct program oversight or to hold MDA accountable for the large investment being made in the BMDS program. Block goals can be changed by MDA, softening the baseline used to assess progress toward expected outcomes. Similarly, because MDA can redefine the work to be completed during a block, the actual cost of a block cannot be compared with the original cost estimate. MDA considers the cost of deferred work, which may be the delayed delivery of assets or other work activities, as a cost of the block in which the work is performed even though the work benefits or was planned for a prior block. Further, MDA does not track the cost of the deferred work and, therefore, cannot make adjustments that would match the cost with the block that is benefited. For example, during Block 2004, MDA deferred some planned development, deployment, characterization, and verification activities until Block 2006 so that it could cover contractor budget overruns. The costs of the activities are now considered part of the cost of Block 2006. Also, although Congress provided funding for these activities during Block 2004, MDA used these funds for the overruns and will need additional funds during Block 2006 to cover their cost. Planned and actual unit costs of fielded assets are equally difficult to reconcile. 
Because MDA is not required to develop an approved acquisition program baseline, it is not required to report the expected average unit cost of assets. Also, because MDA is not required to report significant increases in unit cost, it is not easy to determine whether an asset’s actual cost has increased significantly from its expected cost. Finally, using research and development funds to purchase fielded assets further reduces cost transparency because these dollars are not covered by the full-funding policy as are procurement funds. Therefore, when a program for a 2-year block is first presented in the budget, Congress is not necessarily fully aware of the dimensions and cost of that block. For example, although a block may call for the delivery of a specific number of interceptors, the full cost of those interceptors is requested over 3 to 5 years. Calculating unit costs from budget documents is difficult because the cost of components that will become fielded assets may be spread across 3 to 5 budget years—a consequence of incremental funding. During Block 2004, poor quality control procedures caused the missile defense program to experience test failures and slowed production. MDA has initiated a number of actions to correct quality control weaknesses, and the agency reports that these actions have been largely successful. Although MDA continues to identify quality assurance procedures that need strengthening, recent audits by MDA’s Office of Quality, Safety, and Mission Assurance show such improvements as increased on-time deliveries, reduced test failures, and sustained improvement in product quality. MDA has taken a number of steps to improve quality assurance. 
These include developing a teaming approach to restore the reliability of key suppliers, conducting regular quality inspections to quickly identify and find resolutions for quality problems, adjusting award fee plans to encourage contractors to maintain a good quality assurance program and encourage industry best practices, as well as placing MDA-developed assurance provisions on prime contracts. For example, as early as 2003, MDA made a critical assessment of a key supplier’s organization and determined that the supplier’s manufacturing processes lacked discipline, its corrective action procedures were ineffective, its technical data package was inadequate, and personnel were not properly trained. The supplier responded by hiring a Quality Assurance Director, five quality assurance professionals, a training manager, and a scheduler. In addition, the supplier installed an electronic problem-reporting database, formed new boards—such as a failure review board—established a new configuration management system, and ensured that manufacturing activity was consistent with contract requirements. During different time periods between March 2004 and August 2006, MDA measured the results of the supplier’s efforts and found a 64 percent decrease in open quality control issues, a 43 percent decline in test failures, and a 9 percent increase in on-time deliveries. MDA expanded its teaming approach in 2006 to another problem supplier and reports that many systemic solutions are already underway. During fiscal year 2006, MDA’s audits continued to identify both quality control weaknesses and quality control procedures that contractors are addressing. During 2006, the agency audited six contractors and identified 372 deficiencies and observations. As of December 2006, the six contractors had collectively closed 157, or 42 percent, of the 372 audit findings. MDA also reported other signs of positive results. 
For example, in 2006, MDA conducted a follow-on audit of Raytheon, the subcontractor for GMD’s exoatmospheric kill vehicle. A 2005 audit of Raytheon had found that the subcontractor was not correctly communicating essential kill vehicle requirements to suppliers, did not exercise good configuration control, and could not build a consistent and reliable product. The 2006 audit was more positive, reporting less variability in Raytheon’s production processes, increasing stability in its statistical process control data, fewer test problem reports and product waivers, and sustained improvement in product quality. In our March 15, 2007, report, we made several recommendations to DOD to increase transparency in the missile defense program. These included: Develop a firm cost, schedule, and performance baseline for those elements considered far enough along to be in system development and demonstration, and report against that baseline. Propose an approach for those same elements that provides information consistent with the acquisition laws that govern baselines and unit cost reporting, independent cost estimates, and operational test and evaluation for major DOD programs. Such an approach could provide necessary information while preserving the MDA Director’s flexibility to make decisions. Include in blocks only those elements that will field capabilities during the block period and develop a firm cost, schedule, and performance baseline for that block capability, including the unit cost of its assets. Request and use procurement funds, rather than research, development, test, and evaluation funds, to acquire fielded assets. DOD partially agreed with the first three recommendations and recognized the need for greater program transparency. It committed to provide information consistent with the acquisition laws that govern baselines and unit cost reporting, independent cost estimates, and operational test and evaluation. 
DOD did not agree to use elements as a basis for this reporting, expressing its concern that an element-centric approach to reporting would have a fragmenting effect on the development of an integrated system. We respect the need for the MDA Director to make decisions across element lines to preserve the integrity of the system of systems. We recognize that there are bases other than elements for reporting purposes. However, we believe it is essential that MDA report in the same way that it requests funds. Currently MDA requests funds and contracts by element, and at this time, that appears to be the most logical way to report. MDA intends to modify its current block approach. We believe that a management construct like a block is needed to provide the vehicle for making system-of-system decisions and to provide for system-wide testing. However, at this point, the individual assets to be managed in a block—including quantities, cost, and delivery schedules—can only be derived from the individual elements. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or members of the subcommittee may have. For future questions about this statement, please contact me at (202) 512-4841 or francisp@gao.gov. Individuals making key contributions to this statement include Barbara H. Haynes, Assistant Director; LaTonya D. Miller; Michael J. Hesse; Letisha T. Jenkins; Sigrid L. McGinty; Kenneth E. Patton; and Steven B. Stern. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the next 5 years the Missile Defense Agency (MDA) expects to invest $49 billion in the Ballistic Missile Defense (BMD) system's development and fielding. MDA's strategy is to field new capabilities in 2-year blocks. In January 2006, MDA initiated its second block--Block 2006--to protect against attacks from North Korea and the Middle East. Congress requires GAO to assess MDA's progress annually. GAO's March 2007 report addressed MDA's progress during fiscal year 2006 and followed up on program oversight issues and the current status of MDA's quality assurance program. GAO assessed the progress of each element being developed by MDA, examined acquisition laws applicable to major acquisition programs, and reviewed the impact of implemented quality initiatives. During fiscal year 2006, MDA fielded additional assets for the Ballistic Missile Defense System (BMDS), enhanced the capability of some assets, and realized several noteworthy testing achievements. For example, the Ground-based Midcourse Defense (GMD) element successfully conducted its first end-to-end test of one engagement scenario, the element's first successful intercept test since 2002. However, MDA will not meet its original Block 2006 cost, fielding, or performance goals because the agency has revised those goals. In March 2006, MDA: reduced its goal for fielded assets to provide funds for technical problems and new and increased operations and sustainment requirements; increased its cost goal by about $1 billion--from $19.3 to $20.3 billion; and reduced its performance goal commensurate with the reduction of assets. MDA may also reduce the scope of the block further by deferring other work until a future block because four elements incurred about $478 million in fiscal year 2006 budget overruns. With the possible exception of GMD interceptors, MDA is generally on track to meet its revised quantity goals. 
But the deferral of work, both into and out of Block 2006, and inconsistent reporting of costs by some BMDS elements, make the actual cost of Block 2006 difficult to determine. In addition, GAO cannot assess whether the block will meet its revised performance goals until MDA's models and simulations are anchored by sufficient flight tests to have confidence that predictions of performance are reliable. Because MDA has not formally entered the Department of Defense (DOD) acquisition cycle, it is not yet required to apply certain laws intended to hold major defense acquisition programs accountable for their planned outcomes and cost, give decision makers a means to conduct oversight, and ensure some level of independent program review. MDA is more agile in its decision-making because it does not have to wait for outside reviews or obtain higher-level approvals of its goals or changes to those goals. Because MDA can revise its baseline, it has the ability to field fewer assets than planned, defer work to a future block, and increase planned cost. All of this makes it hard to reconcile cost and outcomes against original goals and to determine the value of the work accomplished. Also, using research and development funds to purchase operational assets allows costs to be spread over 2 or more years, which makes costs harder to track and commits future budgets. MDA continues to identify quality assurance weaknesses, but the agency's corrective measures are beginning to produce results. Quality deficiencies are declining as MDA implements corrective actions, such as a teaming approach designed to restore the reliability of key suppliers.
In general, employee misclassification occurs when an employer improperly classifies a worker as an independent contractor instead of an employee. As we reported in 2006, the tests used to determine whether a worker is an independent contractor or an employee are complex and differ from law to law. While laws vary in their definitions of the conditions that make a worker an employee, in general, a person is considered an employee if he or she is subject to another’s right to control the manner and means of performing the work. In contrast, independent contractors are individuals who obtain customers on their own to provide services (and who may have other employees working for them) and who are not subject to control over the manner by which they perform their services. Many independent contractors are classified properly, and the independent contractor relationship can offer advantages to both businesses and workers. Businesses may choose to hire independent contractors for reasons such as being able to easily expand or contract their workforces to accommodate workload fluctuations or fill temporary absences. Workers may choose to become independent contractors to have greater control over their work schedules or when they pay taxes, rather than have employers withhold taxes from their paychecks. However, employers have financial incentives to misclassify employees as independent contractors. While employers are generally responsible for matching the Social Security and Medicare tax payments their employees make and paying all federal unemployment taxes and a portion of or all state unemployment taxes, independent contractors are generally responsible for paying their own Social Security and Medicare tax liabilities and do not pay unemployment taxes because they are not eligible to receive unemployment insurance benefits. 
In addition, businesses generally are not required to withhold the income, Social Security, or Medicare taxes from payments made to independent contractors that they are required to withhold for their employees. Independent contractors may also be responsible for making their own workers’ compensation payments, depending on their state program. The differences, in general terms, between the tax responsibilities of employees and independent contractors are summarized in table 1. While businesses may be confused about how to properly classify workers, some employers may misclassify employees to circumvent laws that restrict employers’ hiring, retention, and other labor practices, and to avoid providing numerous rights and privileges provided to employees by federal workforce protection laws. These laws include FLSA, which establishes minimum wage, overtime, and child labor standards; the Americans with Disabilities Act of 1990 and the Age Discrimination in Employment Act of 1967, which protect employees from discrimination based on disability or age; the Family and Medical Leave Act of 1993, which provides various protections for employees who need time off from their jobs because of medical problems or the birth or adoption of a child; and the National Labor Relations Act, which guarantees the right of employees to organize and bargain collectively. Employers may also choose to misclassify their employees in order to avoid having to obtain proof that workers are U.S. citizens or obtain work visas for them. In addition, independent contractors generally do not qualify to participate in health and pension plans that employers may offer to employees. Finally, when employers misclassify employees, they may be able to undercut competitors because their costs are reduced. 
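The employer-side tax arithmetic described above can be sketched as a short calculation. The rates and the $40,000 pay figure below are illustrative assumptions for this sketch only (they roughly reflect the Social Security, Medicare, and net federal unemployment rates of this era but are not authoritative figures from the report), and the calculation ignores the Social Security wage-base cap, state unemployment taxes, and other details.

```python
# Illustrative sketch of the financial incentive to misclassify: the
# employer-side payroll taxes owed for an employee versus for an
# independent contractor. All rates below are simplified assumptions.

SOCIAL_SECURITY_RATE = 0.062  # assumed employer share, matched by the employee
MEDICARE_RATE = 0.0145        # assumed employer share, matched by the employee
FUTA_RATE = 0.006             # assumed net federal unemployment tax rate
FUTA_WAGE_BASE = 7000         # FUTA applies only to the first $7,000 of wages

def employer_cost(annual_pay, classified_as_employee):
    """Return the employer's payroll-tax cost on top of the pay itself."""
    if not classified_as_employee:
        # For an independent contractor the business matches nothing;
        # the worker owes self-employment tax instead.
        return 0.0
    fica_match = annual_pay * (SOCIAL_SECURITY_RATE + MEDICARE_RATE)
    futa = min(annual_pay, FUTA_WAGE_BASE) * FUTA_RATE
    return fica_match + futa

pay = 40000  # hypothetical annual pay for one worker
print(f"Employee:   ${employer_cost(pay, True):,.2f}")
print(f"Contractor: ${employer_cost(pay, False):,.2f}")
```

Under these assumed rates, classifying the worker as a contractor saves the employer the entire FICA match and FUTA amount on that pay, which is the cost advantage over compliant competitors that the paragraph above describes.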
While some workers may agree to be misclassified as independent contractors in order to be paid in cash, avoid withholding of taxes, or avoid having to provide proof of their immigration status, other workers may not realize that they have been misclassified. In addition, they may not realize that as independent contractors, they are not protected under many laws designed to protect employees, and that they have obligations for which employees are not responsible, such as payment of their own taxes over the course of the year. Responsibility for enforcing laws that afford employee protections and administering programs that can be affected by employee misclassification issues is dispersed among a number of federal and state agencies, as shown in table 2. Misclassification itself is not a violation of any federal labor law, but it can result in violations of federal and state laws. For example, DOL’s Wage and Hour Division (WHD) may cite employers that have misclassified their employees as independent contractors for violations of FLSA relating to recordkeeping (not keeping required records for these employees), nonpayment of the federal minimum wage, and nonpayment of overtime. It also assesses back wages owed to workers in cases where misclassification leads to nonpayment of overtime or minimum wage. IRS can also assess taxes and penalties on employers that it finds have misclassified employees. However, some workers who would otherwise be considered employees are deemed not to be employees for tax purposes. With increased IRS enforcement of the employment tax laws beginning in the late 1960s, controversies developed over whether employers had correctly classified certain workers as independent contractors rather than as employees. 
In some instances when IRS prevailed in reclassifying workers as employees, the employers became liable for portions of employees’ Social Security and income tax liabilities (that the employers had failed to withhold and remit), although the employees might have fully paid their liabilities for self-employment and income taxes. In response to this problem, Congress enacted section 530 of the Revenue Act of 1978. That provision generally allows employers to treat workers as not being employees for employment tax purposes regardless of the workers’ actual status if the employers meet three tests. The employers must have filed all federal tax returns in a manner consistent with not treating the workers as employees, consistently treated similarly situated workers as independent contractors, and had a reasonable basis for treating the workers as independent contractors. Under section 530, a reasonable basis exists if the employer reasonably relied on (1) past IRS examination practice with respect to the employer, (2) published rulings or judicial precedent, (3) long-standing recognized practices in the industry of which the employer is a member, or (4) any other reasonable basis for treating a worker as an independent contractor. Section 530 also prohibits IRS from issuing regulations or Revenue Rulings with respect to the classification of any individual for the purposes of employment taxes. Congress intended this moratorium to be temporary until more workable rules were established, but the moratorium continues to this day. The provision was extended indefinitely by the Tax Equity and Fiscal Responsibility Act of 1982. Federal agencies use different tests to determine whether a worker is an independent contractor or an employee. 
IRS uses the concepts of behavioral control and financial control and the relationship between the employer and the worker to determine whether a worker is an employee, while WHD uses six factors identified by the United States Supreme Court to determine employee status during investigations of FLSA violations. The complexity and variety of worker classification tests may also complicate agencies’ enforcement efforts. In addition, states use varying definitions of employee. For example, according to a report commissioned by DOL, at least 4 states follow IRS’s test, and at least 10 states use their own definitions. The remaining states use various definitions that rely at least in part on whether the employer has the right to control the worker. Decisions regarding employee status are sometimes determined through the courts. For example, in a recent decision, the United States Court of Appeals for the District of Columbia Circuit ruled that drivers for FedEx’s small package delivery unit are independent contractors, and not employees, and therefore do not have the right to bargain collectively. FedEx had sought review of the determination by the National Labor Relations Board that the FedEx drivers were employees and that FedEx had committed an unfair labor practice by refusing to bargain with the union certified as the collective bargaining representative of its Wilmington, Massachusetts drivers. In ruling that the drivers are independent contractors, the court noted that because FedEx Ground drivers can operate multiple routes, hire extra drivers, and sell their routes without company permission, they were not like employees of traditional trucking companies. Legislation aimed at preventing employee misclassification has been introduced in previous sessions of Congress. At least four bills relating to employee misclassification were introduced in the 110th Congress. Two of the bills, both titled the Employee Misclassification Prevention Act (H.R. 6111 and S. 
3648), were introduced in the House of Representatives and the Senate, respectively, to amend FLSA to require employers to keep records of independent contractors and to provide a special penalty for misclassification. Two other bills were aimed, in part, at amending the Internal Revenue Code to aid in proper classification. The Independent Contractor Proper Classification Act of 2007 (S. 2044) was introduced in the Senate to provide procedures for the proper classification of employees and independent contractors, including amending the tax code and requiring DOL and IRS to exchange information regarding cases involving employee misclassification. In the House of Representatives, the Taxpayer Responsibility, Accountability, and Consistency Act of 2008 (H.R. 5804) sought to amend the Internal Revenue Code to modify the rules relating to the treatment of individuals as independent contractors or employees, including requiring IRS to inform DOL of cases involving employee misclassification. However, these bills were not enacted into law. Although the national extent of employee misclassification is unknown, earlier national studies and more recent, though not comprehensive, studies suggest that employee misclassification could be a significant problem with adverse consequences. In its last comprehensive estimate of misclassification, for tax year 1984, IRS estimated that nationally about 15 percent of employers misclassified a total of 3.4 million employees as independent contractors, resulting in an estimated revenue loss of $1.6 billion (in 1984 dollars). Nearly 60 percent of the revenue loss was attributable to the misclassified individuals failing to report and pay income taxes on compensation they received as misclassified independent contractors. The remaining revenue loss stemmed from the failure of (1) employers and misclassified independent contractors to pay taxes for Social Security and Medicare and (2) employers to pay federal unemployment taxes. 
For 84 percent of the workers misclassified as independent contractors in tax year 1984, employers reported the workers’ compensation to IRS and the workers, as required, on the IRS Form 1099-MISC information return. These workers subsequently reported most of their compensation (77 percent) on their tax returns. In contrast, workers misclassified as independent contractors for whom employers did not report compensation on Form 1099-MISC reported only 29 percent of their compensation on their tax returns. Although IRS has not updated the information from its 1984 report, it plans to review the national extent of employee misclassification as part of a broader study of employment tax compliance. However, IRS officials anticipate that the results of this study will not be available until 2013, at the earliest. As part of its National Research Program, IRS plans to examine a randomly selected sample of employers’ tax returns for tax years 2008 to 2010. IRS employment tax officials told us they may need to extend the study if they have not collected sufficient data to provide reliable estimates. For the misclassification part of the employment tax compliance study, they said they hope to estimate the number of employers that misclassify employees, the number of employees who are misclassified, and the resulting loss of tax revenue. The officials also said they are uncertain whether IRS will be able to collect sufficient data to estimate the extent of misclassification within particular industries or geographic regions. A study commissioned by DOL in 2000 found that from 10 percent to 30 percent of firms audited in nine selected states had misclassified employees as independent contractors. The study also estimated that if only 1 percent of all employees were misclassified nationally, the loss in overall unemployment insurance revenue because of employers’ underreporting of unemployment taxes across all states would be nearly $200 million annually. 
In addition, the Bureau of Labor Statistics periodically conducts a survey of contingent workers (defined as workers holding jobs that are expected to last only a limited period of time), including independent contractors. The most recent survey, conducted in 2005, revealed that 10.3 million U.S. workers were classified as independent contractors—approximately 7.4 percent of all workers. However, the survey did not indicate how many of these workers were misclassified. State officials we interviewed told us that in their opinion, misclassification has generally increased over recent years. State activity in this area may support this view. For example, officials from New Hampshire’s Department of Labor said the agency recently hired four new investigators to focus exclusively on investigations of employee misclassification. Summary data states reported to DOL’s Employment and Training Administration (ETA), which oversees state administration of the unemployment insurance program, showed that from 2000 to 2007 the number of misclassified workers uncovered by state audits had increased from approximately 106,000 workers to over 150,000 workers, as shown in figure 1. While these counts reveal an upward trend, they likely undercount the overall number of misclassified employees, since states generally audit less than 2 percent of employers each year. State officials, however, told us that the summary data they reported to ETA did not include all misclassification identified by their investigations. For example, officials from one state said they did not report cases to DOL that did not meet ETA’s prescriptive audit criteria, which mandate, among other things, extensive testing of an employer’s payroll records. Furthermore, these officials pointed out that the data ETA collects do not include cases involving workers in the underground economy, where workers are paid in cash and income is not reported to states or IRS. 
Studies conducted by states, universities, and research institutes have been generally limited in scope—for example, confined to one state or a specific industry within a state. However, some of these studies have noted that misclassification is especially prevalent in certain industries, such as construction. For example, a study conducted by Harvard University on the extent of misclassification in the construction industry in Maine estimated that approximately 14 percent of construction firms misclassified at least some of their employees each year from 1999 to 2002. Maine state officials told us that following the study, they began targeting construction firms for their unemployment insurance audits and found higher levels of misclassification—up to 45 percent of the firms audited misclassified at least some of their employees. Misclassification may undermine workers’ access to protections, such as unemployment insurance and workers’ compensation. For example, one group that advocates for workers cited an instance of a construction worker who fell three stories, was severely injured, and incurred hospital expenses of over $10,000 related to the injury. Because the worker was misclassified as an independent contractor, his employer did not provide workers’ compensation coverage for the employee. Several union officials told us that misclassification of workers is especially prevalent in the construction industry where workers are often paid entirely in cash and, as a result, are not noted on the employers’ records at all, either as employees or independent contractors. These officials told us they believe that some employers have been emboldened to begin operating on a cash basis by the ease with which they are able to misclassify their workers. The WHD investigation case files we reviewed provided detail on several instances where misclassified employees did not receive minimum wages or overtime pay. 
For example, one case involved a medical transcription service that hired workers—whom WHD determined had been misclassified as independent contractors under FLSA—to work out of their homes transcribing medical files they downloaded from the company’s computer system. When the system was not accessible, workers were not paid—although they were required to remain available until the system became operational—and, as a result, they were not paid the minimum wage required by FLSA. DOL’s detection of employee misclassification is generally the indirect result of its investigations of alleged FLSA violations, particularly complaints involving nonpayment of overtime or minimum wages. WHD officials have stated to Congress that the misclassification of an employee as an independent contractor is not itself a violation of FLSA or other laws WHD enforces. Misclassification, however, is often associated with FLSA violations—in particular, recordkeeping violations and the failure to pay overtime or minimum wages. When WHD finds FLSA violations resulting from misclassification, it assesses back wages owed to workers as appropriate. In addition, although there is no penalty for recordkeeping violations, WHD requires businesses to place any workers they reclassify as employees on their payroll records, in accordance with FLSA recordkeeping requirements. Our review of the case files also showed that WHD investigators, in the course of their investigations, did not consistently review documents that could indicate that employees had been misclassified. Specifically, investigators may ask employers about independent contractors or uncover misclassification through worker interviews, according to the information contained in the case files. However, they did not, as a matter of course, review employer records such as IRS Forms 1099-MISC that show payments made to independent contractors. Reviewing these records could aid WHD investigators in identifying workers who have been misclassified. 
Although one district director told us it is standard practice for investigators in his office to ask for this type of information during an investigation, it is not WHD policy to do so. Many of the experts we interviewed said that targeted investigations of employers or industries could increase the detection of misclassification. Approximately 80 percent of the investigations WHD concluded in 2008 involving misclassification were initiated because of complaints from workers about possible labor violations. However, several experts we spoke with pointed out that some workers, such as immigrants or those in low-wage industries, are often less likely to file complaints with WHD. Thus, a lack of targeted investigations coupled with the reluctance of misclassified workers to complain may result in less effective enforcement of proper classification. WHD officials told us that their ability to conduct targeted investigations in recent years has been limited by reductions in agency resources combined with consistently high levels of worker complaints about possible labor law violations. According to WHD policy, the first priority of the agency’s enforcement is to respond to complaints. WHD conducts few investigations targeted at misclassification, though it has begun to place a greater focus on misclassification within existing agency initiatives. WHD concluded over 24,500 FLSA cases in fiscal year 2008, and misclassification was the primary reason for the violation identified in 131 investigations. Most of these investigations (80 percent) were initiated by complaints from workers rather than being targeted by WHD. In the 26 investigations that were targeted by WHD, the agency identified 341 misclassified employees who were owed back wages of over $88,000. In the 1990s, WHD implemented initiatives to conduct targeted investigations within low-wage industries with a history of FLSA violations, such as restaurants, hotels, and nursing homes. 
These initiatives enabled WHD to detect employee misclassification to the extent it was prevalent in those industries. WHD officials told us that in fiscal year 2007, in part because of heightened congressional interest in misclassification, they instructed their district directors to place a special emphasis on those low-wage industries within their districts with a history of misclassifying employees. During fiscal year 2009, for example, the New Orleans district office planned to conduct targeted investigations of the staffing and janitorial industries in its region, although it limited this effort to three investigations. Examples of state efforts support the potential effect of targeted investigations aimed at detecting misclassification. New York’s Department of Labor has created a task force that conducts investigations and audits aimed specifically at detecting misclassification. Among other activities, the task force conducts sweeps, or targeted investigations of businesses located within a certain area or within industries where misclassification is prevalent. In conducting investigations during 2007 and 2008 that targeted approximately 300 businesses in the retail and commercial industries, the task force found that 67 percent of the businesses were in violation of unemployment laws, labor standards, or workers’ compensation laws. In addition, at the request of investigators, the task force scheduled follow-up audits of about half of these employers. As of December 2008, it had completed 54 of these audits and found in approximately 70 percent of them that employers had continued to misclassify at least some employees as independent contractors. In addition, the task force conducted targeted investigations of over 600 businesses, primarily in the construction industry. It found labor violations in nearly half of these businesses and ordered follow-up investigations. 
Just over half of these investigations have been completed, resulting in nearly 7,800 employees being identified as misclassified. The state determined that the misclassification led to $2.2 million in unpaid wages, over $3.5 million in unpaid unemployment taxes and associated penalties, and over $1 million in penalties related to workers’ compensation. As a result of all investigations conducted during a 16-month period ending December 31, 2008, the task force detected 12,300 instances of misclassification, with approximately $12 million in associated unpaid wages. In contrast, in fiscal year 2008, WHD identified 1,619 instances of misclassification nationwide during its investigations and assessed about $1 million in unpaid wages. DOL has begun to track cases of misclassification in its WHD investigations database. However, although DOL’s Occupational Safety and Health Administration (OSHA) may identify misclassification during its safety and health inspections, it does not record this information in its inspections database. In addition, in their responses to our survey, a majority of state workforce agencies noted that their states collect data on the occurrences of misclassification, but most of those states do not send this information to DOL. For example, an official in one state agency told us that in 2008 his state conducted investigations that led to the detection of approximately 46,000 instances of misclassification, but that DOL collected no information associated with those cases. Since this information would likely include the names of employers that misclassified their employees, and the industries involved, collecting it could enable DOL to focus its investigations more effectively on certain employers or industries with a known history of misclassification. Although education and outreach to workers could help reduce the incidence of misclassification, DOL’s work in this area is limited. 
The DOL Web site contains publications on the employment relationship under FLSA, some of which mention the use of independent contractors. However, the Web site does not provide material that focuses specifically on the subject of employee misclassification. In addition to publications, the DOL Web site provides printable workplace posters, some of which employers are required to display in their workplaces. However, none of WHD’s posters contain information on employment relationships or misclassification. DOL employees sometimes hand out to workers pamphlets that contain general information on workers’ rights. Also, DOL staff provides information materials at seminars and training sessions for employers. While these materials address what constitutes an employment relationship, they do not specifically mention misclassification. Similarly, WHD district directors we interviewed told us that their staffs do not conduct employer and worker outreach activities specifically on misclassification. However, some said their staffs may provide information about misclassification when answering questions from employers or workers. Finally, an OSHA official told us that the agency does not conduct any outreach or education directly related to misclassification, although officials in one region told us that workers were misclassified as independent contractors at over 80 percent of the construction sites they inspected. According to our survey, few states regard DOL’s efforts to educate workers and employers on employee misclassification to be effective. In fact, 16 states had no awareness of DOL education or outreach on the subject. Of the states that were aware of DOL’s outreach activities, only 5 reported that they thought outreach for workers was effective, and only 6 stated that it was effective for employers. 
Further, some experts we interviewed also expressed the view that DOL’s education and outreach efforts on misclassification are inadequate and that improvement is needed, especially for vulnerable populations. For example, some noted that immigrants are less likely to know their rights and are more likely to be misclassified than other types of workers. WHD district directors we interviewed noted that there are challenges associated with reaching vulnerable populations, such as immigrant workers. Some noted that many noncitizens, whether documented or not, are wary of government and therefore reluctant to approach DOL officials or attend DOL-sponsored events. Despite this challenge, the directors told us that their offices coordinate with immigrant communities in order to educate workers on labor issues. For instance, staff from the Boston and New Orleans district offices told us they participate in presentations, information sessions, and forums with the Hispanic communities in their districts in coordination with the Mexican consulates. These activities are generally broad in scope but may include specific information on misclassification. When WHD identifies misclassification, the division does not use all available remedies—such as assessing financial penalties, pursuing back wages owed to workers who have been misclassified, and conducting follow-up investigations of employers that have misclassified workers—to penalize employers who have violated FLSA and help ensure future compliance. WHD levied penalties in less than 2 percent of the cases involving misclassification it completed in fiscal year 2008—2 of 131 investigations. In contrast, the division levied penalties in 6 percent of the cases involving FLSA violations from 2000 to 2007. 
WHD can only levy penalties for violations of the minimum wage or overtime pay provisions of FLSA when the violations are willful or repeated, though a WHD district director noted that it can be difficult to prove that employers are willfully misclassifying employees. In addition, although WHD determined that there were back wages to be paid in most of these cases, we found that investigators did not always follow up to ensure that employees were paid the back wages assessed. For example, in one case we reviewed, the employer did not provide documented proof that she paid back wages of over $5,000 owed to her employees, but WHD closed the case and recorded the back wages as paid. Further, WHD officials told us that if the division uncovers violations caused by misclassification, it does not generally conduct follow-up investigations to ensure that the employees are properly classified.

IRS’s misclassification enforcement strategy relies on identifying and examining employers that have potentially misclassified employees. IRS primarily identifies employers to examine for potential misclassification through four sources:

- The Determination of Worker Status (Form SS-8) Program, in which workers or employers request, through the submission of Form SS-8, that IRS determine whether a specific worker is an employee or an independent contractor for purposes of federal employment tax and income tax withholding. IRS examines some of the employers it determines to have misclassified workers through the SS-8 program.

- The Employment Tax Examination Program (ETEP), in which IRS uses specific criteria to identify for examination employers that have a high likelihood of having misclassified employees.

- General employment tax examinations, meaning examinations of tax returns that are started because of separate employment tax issues that lead to examinations of classification issues.
- The Questionable Employment Tax Practices (QETP) program, through which IRS and states share information on worker classification-related examinations and other questionable employment tax issues. IRS examines some employers that states have determined to have misclassified employees.

IRS’s Small Business/Self Employed Division (SB/SE) conducts the majority of IRS’s misclassification-related examinations. It made applicable assessments (taxes and penalties) in 71 percent of such examinations that it closed during fiscal year 2008, resulting in a total of almost $64 million in assessments, as shown in table 3. A description of the four programs through which IRS primarily generates misclassification-related examinations follows table 3. Also following table 3 is a description of IRS’s Classification Settlement Program (CSP), which enables qualifying employers under examination for misclassification-related issues to lower their misclassification-related tax liabilities if they agree to properly classify their workers in the future.

Through its SS-8 program, IRS provides workers or employers that file Forms SS-8 with its determination on the correct classification of the workers in question. IRS also uses the program to identify employers that may have misclassified employees and therefore would be fruitful to examine. In fiscal year 2008, 72 percent of all Form SS-8 requests filed resulted in IRS determinations that the workers in question were employees, 25 percent were closed without any advice given, and 3 percent resulted in determinations that the workers in question were independent contractors or had other results. IRS’s SS-8 unit makes these determinations, in part, using information workers or employers provide on Forms SS-8. After making classification determinations, IRS sends letters to employers to provide them with guidance on how to voluntarily amend their tax returns to comply with the determinations.
IRS’s SS-8 unit then uses specific criteria to determine which cases it should refer for examination, including the amount of compensation the worker in question earned, the number of similar workers hired by the employer, and whether the case likely involves fraud. The majority of employers the SS-8 unit determined to have misclassified employees are very small businesses, which generally are not referred because examining such businesses is not cost effective. As a result, IRS officials estimated that for recent tax years, only an average of 2 percent to 3 percent of employers identified through SS-8 determinations as having misclassified employees were referred for examination, and an even smaller percentage resulted in examinations. For ETEP, IRS uses a computer matching program to identify annually employers that have potentially misclassified employees. The match criteria include employers that reported paying compensation to workers (on Form 1099-MISC), the amount of compensation the workers reported on their tax returns, and the portion of the workers’ total income that was paid by the employers. IRS uses these criteria to identify for examination the employers with the greatest potential for tax assessments. IRS officials told us that IRS generally examines about 1 percent to 3 percent of the employers it identifies annually through ETEP as having potentially misclassified employees. IRS does not examine some employers that it determines, based on the ETEP match, to have potentially misclassified employees, such as those that no longer appear to be in business; appear to have legitimate reasons for meeting the ETEP selection criteria, such as employers who compensate real estate agents, who are statutorily defined as independent contractors; or are protected by section 530. For tax year 2006, IRS identified over 33,000 employers through ETEP. In fiscal year 2008, IRS examined 221 employers it identified through ETEP, as reflected in table 3.
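The ETEP match described above is, in essence, a screening heuristic over information-return data: flag payers whose workers received substantial Form 1099-MISC compensation that makes up most of those workers’ income. The sketch below illustrates only the general idea; the thresholds, field names, and selection logic are hypothetical assumptions, not IRS’s actual criteria.

```python
# Hypothetical screen for potential misclassification, loosely modeled on the
# ETEP match described above. All thresholds and field names are illustrative
# assumptions, not IRS's actual selection criteria.

from dataclasses import dataclass


@dataclass
class WorkerPayment:
    worker_id: str
    payer_id: str
    misc_compensation: float  # nonemployee compensation reported on Form 1099-MISC
    total_income: float       # total income the worker reported on their return


def flag_potential_misclassification(payments, min_comp=25_000.0, min_share=0.8):
    """Return payer IDs where at least one worker received substantial
    1099-MISC compensation that makes up most of the worker's income --
    a pattern that may indicate an employee treated as a contractor."""
    flagged = set()
    for p in payments:
        if p.total_income <= 0:
            continue  # skip records with no usable income data
        share = p.misc_compensation / p.total_income
        if p.misc_compensation >= min_comp and share >= min_share:
            flagged.add(p.payer_id)
    return sorted(flagged)


payments = [
    WorkerPayment("w1", "acme", 60_000.0, 62_000.0),  # nearly all income from one payer
    WorkerPayment("w2", "beta", 5_000.0, 90_000.0),   # small side payment
]
print(flag_potential_misclassification(payments))  # ['acme']
```

As in the program described above, a screen like this only produces candidates; each flagged payer would still need review, since legitimate reasons (such as statutory independent contractors) can explain the same pattern.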
Over half (58 percent) of the misclassification-related examinations of employers that SB/SE conducted in fiscal year 2008 arose through the course of IRS examining employers for other types of employment tax noncompliance. IRS examiners in all divisions are trained about misclassification issues, but the depth of training depends upon the division and group in which the examiners work. According to IRS employment tax officials, QETP, initiated in December 2007, has proven to be a useful source of timely leads on potential misclassification cases. QETP is a collaborative initiative between IRS and, currently, 34 participating states through which IRS and state workforce agencies share information on misclassification examinations. IRS employment tax officials told us that the examination information that states provide through QETP is especially useful to the agency because it is timely, making it easier for IRS to contact and collect money from noncompliant employers. In addition to its programs that generate misclassification examinations, IRS uses CSP to offer settlements to employers that it is examining for misclassification. Through CSP, which IRS initiated in 1996, employers under examination that meet certain criteria can lower their misclassification-related assessments if they agree to correctly classify their workers in the future and pay proper employment taxes. As of November 2008, IRS had entered into about 2,800 settlement agreements, of which about 2,500 involved SB/SE. Employment tax officials in this IRS division estimated that their CSP agreements signed through the end of 2006 have resulted in at least approximately $76 million in taxes voluntarily reported by participating employers without further IRS intervention. Of employers that entered into agreements through the end of 2006, IRS determined that 64 percent appear to be in compliance with their agreements. 
IRS has not been able to determine, through a review of filing histories, whether the remaining 36 percent of employers have complied with their CSP agreements. IRS would need to examine these employers to determine if they are in compliance with their agreements. IRS provides extensive general information on its Web site on worker classification issues for employers and workers, including flyers, IRS forms, fact sheets, a Web cast, and a training manual providing in-depth information on how IRS examiners determine a worker’s correct classification. IRS also held a national phone forum on worker classification determinations in May 2009 targeted at tax professionals and small business employers and organizations. IRS officials noted that a key IRS worker classification Web page was recently linked to IRS’s main page and was viewed nearly 800,000 times in fiscal year 2008. IRS’s outreach strategies include the use of handouts, e-mail lists, and industry newsletters. In 2008, IRS began conducting worker classification workshops. IRS employment tax officials said that IRS targets these workshops toward persons working as payroll professionals, who are most likely to handle workers’ pay paperwork, and paid tax return preparers. IRS does not generally conduct outreach on classification issues for workers. IRS’s programs aimed at enforcing proper worker classification and educating taxpayers about this issue face three main challenges. First, because misclassification is a complex issue, addressing proper classification can be labor intensive for the IRS officials involved. For example, in determining whether workers are employees or independent contractors, IRS examiners must look to the common law, which can be a complex process. The examiners must collect and weigh evidence on the related common law factors to determine what is relevant for classifying each relationship between the respective businesses and the workers in question. 
Second, given competing agency priorities, IRS has limited resources to allocate to these programs. With regard to enforcement, it has resources to examine only a small percentage of the potential misclassification cases it detects. As shown in table 3, SB/SE completed examinations of less than 1,200 employers in 2008, a very small number when compared to the millions of small business and self-employed taxpayers in the United States. IRS focuses its examinations on employers with potential for large assessments or cases that likely affect a number of workers. To encourage voluntary compliance, IRS sends SS-8 determination letters to employers, and has also sent “soft notices” to employers it determined had not reclassified their workers after receiving these letters. However, IRS officials told us that SS-8 determination letters and soft notices can be ineffective if the letter or the notice signals that IRS will not further pursue the noncompliant employers. For example, according to these officials, only about 20 percent of employers that are sent SS-8 determination letters but that are not selected for examination voluntarily comply with IRS’s classification determination. With regard to education, IRS uses indirect methods to reach the millions of businesses across the United States, such as sending correspondence to a large list of contacts in various industries and posting information in industry newsletters. According to IRS employment tax officials, information on misclassification is generally passed down two or three levels in order to reach employers. Third, according to IRS officials we interviewed, section 530 is both a major reason that it cannot examine many of the suspected cases of misclassification it identifies and an impediment to its ability to educate taxpayers on misclassification issues, as discussed below. 
Before examining each potential misclassification case, IRS examiners must verify whether the employer in question qualifies for section 530 protection. This verification process can be time and labor intensive, because examiners must determine whether the employers in question meet the three tests for section 530 protection. Section 530 also restricts IRS’s ability to issue regulations and Revenue Rulings with respect to the classification of any individual for purposes of employment taxes. Because of this limitation, IRS restricts the educational information it issues to informal general guidance and SS-8 determinations and rulings, which provide recommendations on how to classify specific workers. However, as noted previously, applying the classification rules can be complex. IRS employment tax officials told us that businesses regularly request IRS’s guidance on how to classify workers. In accordance with section 530, IRS officials do not answer such inquiries but instead recommend that the businesses file Form SS-8 requests, which take time for the businesses to file and for IRS to process. Representatives of worker, business, and paid tax return preparer groups pointed to a great deal of confusion about proper worker classification. In an interview, representatives of IRS’s Taxpayer Advocate Service told us that IRS should have the ability to issue guidance on the rules it enforces, in the interest of effective tax administration. DOL and IRS typically do not exchange the information they collect on misclassification, and DOL does not share information internally. However, when an employee is misclassified there is a potential for violations of both tax and labor laws, and sharing information could enable multiple agencies to address the consequences of misclassification. 
For example, WHD does not always send information on cases involving misclassification to other federal and state agencies, although WHD’s policies and procedures direct it to do so. WHD officials said they may not provide referrals to states or other federal agencies because the definition of an employee varies by statute and the division does not want its investigators to interpret statutes outside its jurisdiction. WHD officials told us there were no legal limitations on sharing information from an investigation, although they said they were reluctant to share information on open cases because they did not want to compromise their investigations. Although WHD has a memorandum of understanding stating that it will share information with IRS, WHD officials said they are concerned about referring cases to IRS because they fear that employers would be reluctant to cooperate with the division if they knew that it refers cases to IRS. However, in these cases, WHD could obtain a subpoena to compel the employer to provide WHD with records. Similarly, WHD depends on complaints from workers to drive much of its workload and locate employers that are in violation of the laws under its purview. According to WHD officials, if workers who were not paying taxes properly knew that WHD shared information with IRS about its investigations, they might be less likely to file complaints or cooperate during investigations. When WHD refers a case involving misclassification to states or other federal agencies, or to other divisions within DOL, it does not track these referrals centrally. Therefore, officials do not know how often or to whom cases are referred. In addition, officials are not able to ensure that cases are referred consistently across offices. Some district offices, however, keep track of the forms used to make such referrals.
The referrals are usually made by the district offices, which maintain records of the referrals in their files and send the originals to the agencies to which WHD has referred the cases. OSHA may uncover misclassification during its inspections of potential health and safety violations but generally does not refer these cases to WHD or IRS. OSHA officials told us that although they have a number of memorandums of understanding with other agencies and divisions within DOL, these pertain to issues such as child labor and migrant workers and not to misclassification. However, we found that OSHA has a memorandum of understanding with WHD dating from 1990 that states that, in order to secure the highest level of compliance with labor laws, the agencies will exchange information and referrals where appropriate. This agreement also states that both agencies will report the results of any referrals to the other agency and will establish a system to monitor the progress of actions taken on referrals. However, while OSHA tracks referrals and results in its database, WHD has not established such a system.

ETA, which oversees unemployment insurance, collects only summary data from states on the number of employees they have found to be misclassified during unemployment insurance audits. While DOL funds the administration of state unemployment insurance programs, states are responsible for all tax collection, benefit payment, and investigations and audits. Therefore, officials told us that detailed employer- or employee-specific information is available only at the state level, and ETA is unable to refer potential misclassification cases to WHD. Moreover, since state agencies are administrators of their own programs, officials told us that ETA does not investigate instances of misclassification that occur in state unemployment insurance programs.
Other federal agencies with jurisdiction over laws affected by misclassification told us that they do not work with DOL or track cases involving misclassification. Officials from the National Labor Relations Board, which enforces the right of employees to bargain collectively, told us that the agency does not work with DOL. Equal Employment Opportunity Commission officials said that they have not worked with DOL in any substantial way, although they do have a memorandum of understanding with DOL. According to officials, IRS does not share misclassification-related information with DOL and shares only limited information with other federal agencies. In general, IRS is prohibited from sharing taxpayer information with other agencies per section 6103 of the Internal Revenue Code. IRS and the Social Security Administration have memorandums of understanding in place to facilitate information sharing on employment tax cases and issues, but they do not regularly share information on misclassification, according to IRS employment tax officials. However, the officials told us that the two agencies are creating a joint employment tax task team, and noted that the Social Security Administration can use IRS employment tax information to ensure that misclassified workers are given Social Security credit for wages earned. Contracting officers from several federal agencies we interviewed said that they saw relatively high volumes of potential misclassification among workers on federal construction contracts, and that the payroll information they collect could be of value to IRS. However, many of these agencies did not have information sharing relationships with IRS. Less than 25 percent of states collaborate with DOL to identify employee misclassification. In responding to our survey, 12 states said that they have some type of collaborative arrangement with DOL in this area. 
These arrangements may include sending information to DOL, receiving information from DOL, and conducting joint investigations with DOL of cases involving potential misclassification. Approximately 56 percent of states we surveyed said that they collect data on misclassification beyond the summary unemployment insurance audit data they are required to report to DOL’s ETA on a quarterly basis. Although this information could be useful to DOL in pursuing potential FLSA violations stemming from misclassification, state officials we interviewed said that they are not required to report it to DOL. For example, officials told us that they do not report information on employees who were misclassified but paid in cash and whose wages were not reported to IRS or state revenue agencies. DOL could use information on these employees to target investigations of possible FLSA violations, such as improper payment of overtime.

IRS and state workforce agencies share information on misclassification as part of QETP. IRS, DOL, and state workforce agencies collaborated to create QETP in September 2005. In its first year, 5 states participated, and additional states have been added over time. Currently, IRS and workforce agencies from 34 states share information on audits involving misclassification as part of QETP. IRS employment tax officials remarked that QETP sends an important message to employers and workers that IRS and states are working together on compliance issues. According to the IRS officials, the state agencies audit employers to determine whether they have classified workers correctly and paid state unemployment taxes as appropriate. We surveyed participating state agencies, and most respondents reported that audit information IRS provided was helpful. In addition to sharing audit reports for employers that were found to have misclassified their employees, IRS also shares other types of misclassification-related data with some states.
Nineteen of the state workforce agencies we surveyed reported that they receive Form 1099-MISC data from IRS. The state agencies may use these data to identify potential cases of misclassification. According to IRS employment tax officials, IRS also shares the worker classification determinations it makes through its SS-8 program with some state agencies; IRS issues these determinations following employers’ or workers’ requests for determinations of employment status. Fourteen of the state workforce agencies we surveyed reported receiving this information from IRS. Some state workforce agencies surveyed noted that IRS’s QETP information sharing and communication practices could be improved. For example, two states commented that the information they receive from IRS is somewhat dated. Some states that participated in our survey reported frustration over not receiving requested information from IRS or difficulty contacting IRS officials. IRS officials with whom we spoke were aware that some states were not receiving QETP referrals, and stated that IRS was in the process of centralizing its QETP administration in order to rectify the problem. They also said that IRS is in the process of clearing out a backlog of referrals from states. According to IRS employment tax officials, IRS has completed the centralization of QETP administration and has taken steps to clear the backlog of referrals from states. Finally, some states we surveyed also reported several key barriers to effectively using information provided by IRS. These included resource limitations within their own agencies, data system incompatibilities, and difficulties complying with IRS’s legal requirements for safeguarding taxpayer data.

Some states have made efforts to address misclassification and have reported successful collaboration among their own agencies. States are particularly concerned because of misclassification’s impact on workers’ compensation programs and unemployment tax revenue, among other programs.
In addition, states may incur additional costs, such as the costs of providing health care to uninsured workers, as a result of misclassification. Some states have passed legislation related to misclassification. For example, Massachusetts passed legislation that standardizes the definition of an employee and penalizes employers for misclassification, regardless of whether it was intentional. The statute authorizes the state Attorney General to impose substantial civil and criminal penalties and, in certain circumstances, to ban violators from obtaining state public works contracts.

Several states have recently created interagency initiatives or joint task forces aimed at detecting misclassification, often by executive order of states’ governors. These task forces share information across revenue, labor, and enforcement agencies. For example, the New York State Joint Enforcement Task Force on Employee Misclassification, which was formed in September 2007, is led by the New York Department of Labor and includes revenue agencies, other enforcement agencies, and the Attorney General’s office. Since its inception, the task force has engaged in joint enforcement sweeps, coordinated assignments, and systematic referrals and data sharing between state agencies. New York state officials told us that they now consider it customary to use a multiagency approach and cross-agency coordination to deal with misclassification. However, some of these state task forces have encountered challenges, particularly in coordination among state agencies. The agencies must overcome or ease restrictions on sharing information outside their jurisdictions, which may require state legislative action.
State officials we interviewed cited other challenges, such as the fact that the lead agency does not have oversight authority over task force members, which makes it difficult to direct their efforts; the limited resources of many state agencies; and the added layers of bureaucracy involved in tracking cases and enforcing compliance together. While these task forces are relatively recent innovations, state officials told us that they have already been effective in uncovering misclassification. New York state officials told us that the state uncovers many more misclassified employees through task force activities than solely through the unemployment insurance audits required by DOL. The state estimated that in just over a year’s time, its misclassification task force uncovered 12,300 instances of employee misclassification and, as noted earlier, $157 million in unreported wages. The task force’s enforcement activities also resulted in over $12 million in workers’ back wages being assessed against employers.

As far back as 1977, we have analyzed options for addressing tax noncompliance arising from employee misclassification. In 1977, we recommended a specific definition to clarify who should be considered an independent contractor, and in 1979, we concluded that some form of tax withholding could be warranted to reduce tax noncompliance among self-employed workers. In 1992, we offered options to improve independent contractor tax compliance, such as ensuring that their taxpayer identification numbers (TIN) are valid, informing them of their classification status and tax obligations, and closing gaps in the payments that are required to be reported on Form 1099-MISC. For this report, we explored current options to address the challenges raised by employee misclassification, some of which are similar to the options we analyzed in these prior reports.
We identified 19 options to address the challenges raised by employee misclassification by reviewing literature and speaking with various groups, including those representing (1) labor and advocacy, (2) independent contractors and small businesses, and (3) tax professionals. These options would require either legislative or administrative actions. Table 4 lists the 19 options. The list is not ranked in any order, but rather is grouped in seven broad categories. We asked 11 external stakeholders to provide input on these 19 options, including (1) the extent to which they supported or opposed each option and (2) the benefits and drawbacks of each option (see app. II for a summary of these benefits and drawbacks for each option). These stakeholders included 4 groups that represent the views of small businesses, independent contractors, and those who hire them (i.e., independent contractor groups); 4 groups that represent the views of organized labor (i.e., labor groups); 2 groups that represent the tax preparation and advice community; and 1 federal agency that uses contractors. We received responses from 9 of these groups. Stakeholders did not unanimously support or oppose any of the 19 options. Although views were mixed, stakeholders generally expressed support for the options more frequently than they expressed opposition. For example, at least seven of the nine responding stakeholders supported three options (see table 5). In contrast, five of nine stakeholders opposed one option—narrowing the definition of “a long-standing recognized practice of a significant segment of the industry” under section 530 of the Revenue Act (option 5). While all three independent contractor groups opposed this idea on the grounds that the protection was important, two labor groups that opposed the option did so because it only narrowed rather than eliminated this protection. 
In general, labor groups, a group representing tax preparers, and a federal agency that hires contractors tended to be more supportive of the 19 options than independent contractor groups. We analyzed whether the majority of stakeholders in each group—that is, over half of them—stated that they supported, opposed, or were neutral on the 19 options. Table 6 shows that a majority of the labor group respondents (i.e., at least 3 of the 4) supported 9 options and opposed none. Similarly, the tax professional group and the federal agency both supported 10 options and opposed none. In contrast, a majority of the independent contractor respondents (i.e., at least 2 of the 3) supported 7 options and opposed 8. A blank cell in the table indicates that the stakeholders for the group lacked a majority view on the option. We asked stakeholders what they perceived to be the benefits and drawbacks of each option. We did not follow up on these responses to clarify and understand the basis for the stakeholders’ perceptions on benefits and drawbacks. As a result, absent other relevant data, these responses did not allow us to uniformly assess whether the benefits outweighed the drawbacks for each option, or vice versa. Table 7 lists examples of types of benefits and drawbacks identified across all the options. We found that some of the stakeholders had different perceptions of whether an outcome for an option would be beneficial. For example, some respondents said that creating an online classification system could help reduce confusion over classification rules and unintentional misclassification. However, other respondents stated that such a system would produce inconsistent determinations and could be manipulated to achieve desired classification determinations. Similarly, some stakeholders said that requiring a separate TIN for independent contractors could increase voluntary tax compliance or help facilitate IRS compliance and enforcement efforts. 
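The majority-view analysis described above (a stance counts as a group’s view only when more than half of that group’s respondents hold it) can be expressed compactly in code. The sketch below uses made-up responses for illustration, not our actual survey data.

```python
# Illustrative tally of majority views per stakeholder group, mirroring the
# analysis described above: a stance is reported only if more than half of a
# group's respondents share it. Group names and responses are hypothetical.

from collections import Counter


def majority_stance(responses):
    """Return the stance held by more than half of respondents, or None."""
    if not responses:
        return None
    stance, n = Counter(responses).most_common(1)[0]
    return stance if n > len(responses) / 2 else None


# e.g., respondents from three groups rating a single option
labor = ["support", "support", "support", "neutral"]   # 3 of 4 -> majority
contractor = ["oppose", "oppose", "support"]           # 2 of 3 -> majority
tax_pro = ["support", "neutral"]                       # 1 of 2 -> no majority

print(majority_stance(labor))       # support
print(majority_stance(contractor))  # oppose
print(majority_stance(tax_pro))     # None
```

The “no majority” case corresponds to the blank cells mentioned above, where a group’s respondents lacked a majority view on an option.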
However, others expressed the opinion that a separate TIN could be conducive to tax fraud or manipulation of the classification system. Finally, some perceived that expanding CSP to include employers that volunteer to disclose their misclassified employees would benefit such employers by reducing their financial exposure, while others viewed this same outcome as allowing them to escape financial sanctions for misclassifying. (See app. II for summaries of the types of benefits and drawbacks for each option.)

We also asked IRS officials to share their insights on the benefits and drawbacks of the options from a tax administration perspective. Some of their insights included the following:

- Expanding CSP to include employers that voluntarily ask to participate could help reduce employee misclassification, although allowing voluntary participation raises issues of equity and may create a safe harbor from examination. For example, this expansion could bring into compliance employers that voluntarily disclose that they have misclassified employees but would reduce the financial sanctions they face for having done so. IRS employment tax officials said that they recently created a team to explore these and other issues related to such an expansion and that they hope to start soliciting comments on a proposal from across IRS starting in summer 2009.

- “Soft” (i.e., nonenforcement) notices to educate employers that appear to be misclassifying employees and to encourage them to correct their classifications might not be effective unless IRS is able to follow up with employers that do not change their classification behavior. Notices also are more effective if they are sent strategically rather than using a “shotgun” approach. Furthermore, sending notices to employers in certain industries without sufficient justification for targeting them likely would create a backlash that IRS would have to manage.
Expanded information sharing with other federal agencies generally can help IRS to be more effective at enforcing proper worker classification. However, section 6103 protections against improper disclosure of tax data generally hamper such sharing and one-way information sharing can create resentment among other agencies. Creating standardized documents on worker rights and tax obligations can impose burdens on businesses, although such burdens could be reduced by requiring employers to provide such documents only to newly hired or retained workers rather than to all workers. Also, IRS may not currently have the authority to require employers to provide such documents to workers. Requiring a separate TIN for each independent contractor could help compliance but would impose some costs on businesses and IRS to reprogram its computers. Requiring Forms SS-8 for all newly retained independent contractors would create tremendous costs for IRS, and it may not be able to review the forms quickly enough to affect some independent contractors who employers retain on a short-term basis. An online classification system that uses factors like those that IRS uses to make Form SS-8 determinations could provide guidance to those unsure about classifying workers. However, the system should not be used to make classification determinations because those entering the data could manipulate their entries to receive a desired outcome. Some of the identified options relate to goals, objectives, and strategies in IRS’s Strategic Plan for 2009-2013. For example, IRS’s plan envisions placing more emphasis on providing more targeted and timely guidance and outreach on how to voluntarily comply and creating opportunities for taxpayers to proactively resolve tax disputes as soon as possible as part of its goal to improve service to make voluntary compliance easier. 
To enforce the law to ensure that everyone meets their tax obligations, IRS plans to strengthen its partnerships with other government agencies to leverage resources in a way that allows quick identification and pursuit of emerging tax schemes through education as well as enforcement. IRS also seeks to expand its enforcement approaches by allowing for alternative treatment of potential noncompliance. These approaches include expanding the use of soft notices to educate taxpayers and to encourage them to self-correct to avoid traditional enforcement contacts, such as examinations, as well as expanding incentives and opportunities for taxpayers to voluntarily self-correct noncompliant behavior.

Misclassification can have a significant impact on federal and state programs, businesses, and misclassified employees. It can reduce revenue that supports such programs as Social Security, Medicare, unemployment insurance, and workers’ compensation. Further, employers with responsible business practices may be undercut by competitors who misclassify employees to reduce their costs, for example, by not paying payroll taxes or providing benefits to workers. Employers may also exploit vulnerable workers, including low-wage workers and immigrants, who are unfamiliar with laws pertaining to employment relationships, including laws designed to protect workers. For example, misclassified workers may not be paid properly for overtime or may not know that their employers are not paying workers’ compensation premiums.

Although misclassification is a predictor of labor law violations, and although state examples show that targeting misclassification is an effective way to uncover violations, DOL is not taking advantage of this opportunity by looking for misclassification in its targeted investigations. As a result, employers may continue to misclassify employees without consequences and workers may remain unprotected by labor laws and not receive benefits to which they are entitled.
Furthermore, because DOL conducts limited education and outreach on misclassification, many workers have insufficient information on employment relationships and may not understand their employment status and rights. In addition, vulnerable populations, including low-wage workers and immigrants, may not know they are misclassified and, as a result, may not receive the protections and benefits to which they are entitled. By not regularly sharing information on cases involving misclassification, federal and state agencies are also losing opportunities to protect workers and to make the most effective use of their resources. Also, because DOL is not working with states active in this area to identify misclassification, it is missing an opportunity to use its resources most effectively by establishing a collaborative effort between federal and state agencies to address misclassification.

Many of the IRS-related options we analyzed were generally perceived to have merit as means to address misclassification, but all have some drawbacks, according to those stakeholders we surveyed. Although several options had support from many of those who provided input, we had no reliable measure of the extent of misclassification and did not have sufficient information to weigh the benefits against the drawbacks of the options given the scope of our work. Even so, qualitative information provided by the stakeholders can help policymakers and tax administrators judge whether any of the options merit pursuit. Likewise, some actions have the potential to address misclassification in a cost-effective manner while also adhering to IRS’s strategic vision for the next few years. For example, IRS and DOL can do more to educate employers and workers. Given that most complaints come from workers, further educating them about the consequences of misclassification may be especially useful.
Developing a standard document on classification rights and related tax obligations that all new workers would either be given by employers or referred to on agencies’ Web sites would be particularly well targeted. Similarly, IRS could build on its existing state contacts to resolve current concerns with the QETP initiative, which mutually benefits both federal and state parties. Regularly collaborating with participating states can help ensure that issues are addressed by both IRS and states in a timely manner. Finally, expanding CSP to allow for voluntary self-correction of classification decisions could prompt compliance among employers that IRS is unlikely to pursue through enforcement because of limited resources. Soft notices targeted to employers that appear to be misclassifying would give them a chance to self-correct before IRS decides whether to examine them and should be tested to determine their effectiveness.

We are making six recommendations to the Secretary of Labor and the Commissioner of Internal Revenue to assist in preventing and responding to employee misclassification.

To increase its detection of FLSA and other labor law violations, we recommend that the Secretary of Labor direct the WHD Administrator to increase the division’s focus on misclassification of employees as independent contractors during targeted investigations.

To enhance efforts to protect workers and make the most effective use of their resources, we recommend that the Secretary of Labor direct the WHD Administrator and the Assistant Secretary for OSHA to ensure that information on cases involving the misclassification of employees as independent contractors is shared between the two entities and that cases outside their jurisdiction are referred to states and other relevant agencies, as required.
To identify promising practices in addressing misclassification and use agency resources most effectively, we recommend that the Secretary of Labor and the Commissioner of Internal Revenue establish a joint interagency effort with other federal and state agencies to address the misclassification of employees as independent contractors. Because tax data may provide useful leads on noncompliance, the task force should determine to what extent tax information would assist other agencies and, if it would be sufficiently helpful, seek a legislative change through the Department of the Treasury to allow for sharing of tax information with appropriate privacy protections.

To enhance understanding of classification issues by workers—especially those in low-wage industries—we recommend that the Secretary of Labor collaborate with the Commissioner of Internal Revenue to offer education and outreach to workers on classification rules and implications and related tax obligations. Such collaboration should include developing a standardized document on classification that DOL would require employers to provide to new workers.

To maximize the effectiveness of the relatively new QETP initiative, we recommend that the Commissioner of Internal Revenue create a forum for regularly collaborating with participating states to identify and address data sharing issues, such as ensuring clear points of contact within IRS for states and expeditious sharing of data.

To increase proper worker classification, we recommend that the Commissioner of Internal Revenue extend the CSP to include employers that volunteer to prospectively reclassify their misclassified employees and, as part of this extension, test whether sending notices describing the program to potentially noncompliant employers would be cost effective.
Employers to which IRS would send notices could include those referred for examination but that may not be examined because of higher priorities, resource limitations, or other reasons.

In their comments on a draft of this report, both DOL and IRS generally agreed with our recommendations and either agreed to implement them or to take steps consistent with them, such as exploring their implementation. WHD, OSHA, and IRS provided written comments on the draft, which are reprinted in their entirety in appendixes III (DOL comments from WHD and OSHA) and IV (IRS comments). In addition, ETA provided technical comments, which we incorporated.

DOL agreed with our recommendation to increase WHD’s focus on misclassification of employees as independent contractors during targeted investigations. WHD commented that it would reexamine its training documents and field guidance to ensure that employee classification was addressed during all stages of an investigation. In addition, WHD agreed to focus on increasing compliance for workers in industries where misclassification is prevalent. DOL also agreed that there is value in sharing information on cases involving the misclassification of employees as independent contractors between WHD and OSHA and with state agencies. WHD and OSHA stated that they are both committed to working closely together to exchange information and improve protections afforded workers. In addition, WHD said that it would assess its current referral processes to ensure that they adequately provided for referrals to other agencies in cases related to employee misclassification.

In their comments, the agencies expressed support for our recommendation to establish a joint interagency effort to address misclassification. DOL stated that a joint effort between DOL and IRS may prove useful in WHD’s efforts to enforce wage and hour laws, and that WHD would participate in any such interdepartmental effort.
Similarly, IRS stated that coordination between departments and agencies at the federal and state levels is an effective way to encourage voluntary compliance and agreed to work with the Secretary of Labor to explore developing a joint effort, subject to disclosure rules under section 6103 of the Internal Revenue Code and other privacy rules. In addition, DOL and IRS agreed to explore opportunities to collaborate to offer education and outreach to workers on the topic of worker classification, including developing a standardized document that DOL would require employers to provide to new workers. WHD agreed to reach out to IRS to explore opportunities for joint outreach to workers, and IRS agreed to collaborate with the Secretary of Labor, make education and outreach materials available to DOL, and work with the Secretary of Labor to explore developing a standardized document on classification for DOL to provide to new workers. Finally, IRS agreed to work with state workforce agencies participating in QETP to establish a forum to identify and address data sharing and IRS points of contact issues using its Enterprise Wide Employment Tax Program. IRS also said it would consider expanding the CSP to employers not under examination and commented that if it decides to expand the program, it will consider all options, including issuing notices and soft letters and soliciting volunteers through outreach and education. We appreciate that IRS will consider these actions and continue to believe that extending the CSP to include employers that volunteer to prospectively reclassify their misclassified employees would be an effective way to increase proper worker classification and that it would be useful to test whether sending notices would be a cost-effective feature of an expanded program. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. 
At that time, we will send copies of this report to the Secretary of Labor, the Commissioner of Internal Revenue, and relevant congressional committees. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. Please contact Andrew Sherrill at (202) 512-7215 or sherrilla@gao.gov or Michael Brostek at (202) 512-9110 or brostekm@gao.gov if you or your staffs have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

To determine what is known about the extent of the misclassification of employees as independent contractors and its associated tax and labor implications, we reviewed studies on misclassification conducted by the Internal Revenue Service (IRS), the Department of Labor (DOL), and others. We reviewed IRS’s estimate of the extent of misclassification and the associated revenue loss for tax year 1984. We also interviewed IRS officials responsible for planning an update to that estimate. From DOL, we reviewed a study it commissioned in 2000 on the extent of misclassification. We also analyzed the information states report to DOL regarding their findings of misclassification during their audits of employers. We analyzed summary data that the states reported for the years 2000 to 2007. These data included the number of employers in each state, the number of audits completed, and the number of misclassified employees identified during these audits. We also reviewed misclassification studies conducted by states, universities, and research institutes. Finally, we interviewed officials from federal and state agencies to obtain their views on misclassification and its consequences for workers.
To describe actions taken by DOL to address employee misclassification, we examined DOL policies and documentation, including DOL’s Wage and Hour Division’s (WHD) Field Operations Handbook and the Occupational Safety and Health Administration’s Field Operations Manual. We interviewed agency officials at the national and district levels, as well as several investigators from WHD, and spoke with employer and labor advocates to obtain their perspectives on DOL’s efforts. In some cases, we relied on interviews conducted for a previous closely related GAO testimony, issued in July 2008. We also obtained and analyzed WHD data on cases involving misclassification concluded during fiscal year 2008. We could not obtain data for other time periods because WHD did not flag cases to indicate whether they involved misclassification before fiscal year 2008. We assessed the reliability of the data and determined them to be sufficiently reliable for the purposes of this report. However, because DOL only flagged cases as involving misclassification when it was the primary reason for Fair Labor Standards Act (FLSA) violations, and because WHD officials told us that not all investigators understood how to properly flag these cases, this information may be incomplete. In total, we examined data for 131 cases involving 1,619 misclassified employees who were denied payment for overtime or were paid less than minimum wage. Using these data from the WHD database, we judgmentally selected 26 case files to review. We selected cases based on factors such as the number of employees misclassified, the total amount of back wages computed, whether a single employee was owed over $10,000 in back wages, whether civil money penalties were assessed, and whether the case resulted from a complaint or was directed by the agency. We conducted reviews of 13 case files in the WHD New Orleans and Boston offices and requested copies of the remaining selected case files from WHD. 
Because we judgmentally selected these files, our findings from the reviews of case files are not projectable to all WHD cases.

To obtain information on state coordination with DOL and IRS, state perspectives on DOL’s education and outreach efforts, and whether states collect data on cases involving misclassification, we conducted a Web-based survey of unemployment insurance directors in all states, the District of Columbia, and Puerto Rico. We administered two versions of this survey: one for states participating in the Questionable Employment Tax Practices (QETP) program and one for states that do not participate in the QETP program. After we drafted the questionnaire, we asked for comments from a knowledgeable official at the National Association of State Workforce Agencies as well as from an independent GAO survey professional. We conducted two pretests of the survey, one with a state participating in the QETP program and one with a state that does not participate in the QETP program, to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. We received responses from all 32 states on the survey for QETP participants, for a response rate of 100 percent. We did not receive a response from 1 state on the survey for states that do not participate in QETP, for a response rate of 95 percent. We were unable to contact the official in Puerto Rico within the study’s time period. Finally, we interviewed officials in 4 states to obtain more information about their efforts to address misclassification and, where applicable, reviewed documentation on these efforts.
To describe actions IRS takes to address employee misclassification, we interviewed officials from the employment tax group within IRS’s Small Business/Self-Employed Division (SB/SE), which conducts the majority of IRS misclassification-related examinations. We also obtained data on SB/SE examinations of worker misclassification for tax year 2008 generated from four sources: (1) the Determination of Worker Status (Form SS-8) program, (2) the Employment Tax Examination Program (ETEP), (3) QETP, and (4) general IRS employment tax examinations, including cases referred from other divisions within IRS. SB/SE conducted all IRS misclassification examinations generated by ETEP and QETP, over 97 percent of the examinations generated by the SS-8 program, and the majority of general examinations IRS conducted during fiscal year 2008. We also obtained data from IRS’s Classification Settlement Program. We assessed the reliability of these IRS data sources and found them to be sufficiently reliable for the purposes of this report. To obtain information on IRS’s education and outreach activities that address misclassification, we interviewed officials from the employment tax group within SB/SE, interviewed independent contractor and labor advocates, and reviewed educational materials on classification IRS makes available on its Web site. To understand how DOL and IRS cooperate with each other and with states and other relevant agencies, we examined agency policies and procedures for sharing information on misclassification and referring cases involving misclassification, and interviewed agency and state officials. We also reviewed information IRS provided on its arrangements with states through the QETP program. To describe options that could help address challenges in preventing and responding to misclassification, we reviewed GAO and other federal agency reports and recommendations and other organizations’ studies on misclassification of employees.
We also interviewed 19 relevant stakeholders representing various groups, including (1) labor and advocacy groups, (2) groups that represent small businesses and independent contractors, (3) groups that represent tax professionals, (4) authors who have published on misclassification issues, and (5) federal agencies, to help identify options and summarize any associated trade-offs. Based on those discussions, we identified 19 options to include in this report. We originally identified over 100 options but reduced the list to 19 options that directly addressed misclassification challenges and issues, were not already being implemented, and were distinct from each other. In addition, we did not include other options that we have recently analyzed or recommended in prior reports on misclassification or that are indirectly related to worker misclassification, such as information reporting on payments made to independent contractors. We surveyed 11 stakeholders for their views on the 19 options we identified, asking them to state their level of support or opposition to the options and what they perceived to be the benefits and drawbacks of each option. These stakeholders included 4 groups that represent the views of small businesses, independent contractors, and those who hire them (i.e., independent contractor groups); 4 groups that represent the views of organized labor (i.e., labor groups); 2 groups that represent the tax preparation and advice community; and 1 federal agency that uses contractors. We received responses from 9 of these groups. We analyzed the responses we received in order to present summary information in the report. We conducted this performance audit from August 2008 through July 2009 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

We identified 19 options to address challenges involved with preventing and responding to worker misclassification by reviewing related literature and interviewing knowledgeable persons about misclassification. As we identified these options, we asked these stakeholders for their views on the options, including what they considered to be the benefits and drawbacks of each. These stakeholders included IRS officials and representatives of organizations representing workers, independent contractors, tax professionals, and a federal agency that hires contractors. The following is a summary of the options and their perceived associated benefits and drawbacks. Neither the list of options nor the list of their perceived associated benefits and drawbacks is exhaustive. Some of the options are concepts rather than fully developed proposals with details of how they would be implemented. Additional detail could bring more benefits and drawbacks to light. The benefits and drawbacks are not weighted and are not listed in order of importance or by frequency of mention. Options should not be judged by the number of benefits and drawbacks. Some of the options overlap, covering more than one problem, while other options deal only with specific aspects of a problem.

A. Clarify the employee/independent contractor definition and expand worker rights

1. Clarify the distinction between employees and independent contractors under federal law by unifying multiple definitions, limiting the number of factors used to make determinations, and making the factors more conclusive

Perceived benefits:
- Could reduce manipulation of classification rules
- Could improve equity and efficiency of classification rules
- Could improve worker protection if an expansive definition is adopted
- Could improve objectivity of rules/reduce confusion

Perceived drawbacks:
- Lobbying and political compromises could weaken the definition
- Lobbying and political compromises could lead to a more restrictive definition
- Could lead to increased litigation if a new definition has no history or precedent
- Could create transitional costs and upheavals in working relationships
- Could deter use of independent contractors
- A “one-size-fits-all” approach may cause imbalances and more problems than it solves in certain industries
- IRS and government agencies could incur costs to administer a new definition
- Could sidetrack key anti-abuse reforms
- No need to harmonize definitions since courts work well in doing so
- Could encourage more employers to engage in fraud

2. Allow workers to challenge classification determinations in U.S. Tax Court

Perceived benefits:
- Could increase equity and protections for workers
- Could reduce incentives for misclassification

Perceived drawbacks:
- Could result in more or unnecessary litigation
- Would be unfair to businesses
- Could deter use of independent contractors
- Too narrow to limit challenges to just Tax Court and just workers

3. Ensure that workers have adequate legal protection from retaliation for filing a Form SS-8

Perceived benefits:
- Could help reduce misclassification/improve misclassification
- Could help improve worker protection and justice

Perceived drawbacks:
- Could result in more litigation
- Limits ability of employers to end contractual relationships as needed
- Could reduce use of independent contractors
- Not necessary because retaliation is rare and independent contractors can protect themselves through a contract
- Does not include worker protection for other actions to challenge classification

4. Define misclassification as a violation under FLSA

Perceived benefits:
- Could help increase voluntary compliance
- Would allow federal agencies, including DOL, to take greater enforcement action

Perceived drawbacks:
- Could increase costly lawsuits for businesses
- Could deter use of independent contractors
- Unfair to penalize businesses and contractors for confusing and subjective regulations

B. Revise section 530 of the Revenue Act of 1978

5. Narrow the definition of “a long-standing recognized practice of a significant segment of the industry” so that fewer firms qualify under this reasonable basis for the section 530 safe harbor

Perceived benefits:
- Could reduce incentive to misclassify and increase voluntary compliance
- Could help reduce tax gap related to misclassification

Perceived drawbacks:
- Opens the door to eroding the protection of section 530
- Could create inequities among those who use independent contractors
- Could lead to economic disruption or upheaval in some industries
- Ignores unique issues that some industries possess
- Unnecessary because current definition can be hard to meet
- Only narrows rather than eliminates “industry practice”

6. Lift the ban on IRS/Treasury regulations or revenue rulings clarifying the employment status of individuals for purposes of employment taxes

Perceived benefits:
- Could reduce requests for individual classification determinations
- More consistent application of the rules
- Could allow IRS to more effectively prevent misclassification
- Could improve understanding and reduce confusion over classification

Perceived drawbacks:
- No need because existing case law is sufficient
- Could erode section 530 protection
- Could increase litigation and lobbying costs
- IRS cannot fix the classification problem without congressional guidance
- A national standard would not affect state definitions
- Political influences could slant the new guidance

C. Provide additional education and outreach

7. Require service recipients to provide standardized documents to workers that explain their classification rights and tax obligations

Perceived benefits:
- Could help reduce misclassification by reducing errors
- Could help educate workers about classification

Perceived drawbacks:
- Could discriminate against some independent contractors
- Relies on employers instead of IRS to inform workers
- Could be ineffective if workers cannot understand the documents
- Employers would incur costs and burdens

8. Expand IRS outreach to service recipient, worker, and tax advisor groups to educate them about classification rules and related tax obligations, targeting groups IRS deems to be “at risk”

Perceived benefits:
- Could improve uniformity of classifications
- Could reduce misclassification by reducing errors

Perceived drawbacks:
- Could deter use of independent contractors
- Could divert IRS resources from enforcement
- Does not target tax advisors who facilitate misclassification
- Could lead to unfair targeting of business groups
- Could lead to independent contractors suing their clients

9. Create an online classification system, using factors similar to those used in the SS-8 determination process, to guide service recipients and workers on classification determinations

Perceived benefits:
- Uses electronic instead of paper-based processes
- Could minimize the need for SS-8 determinations
- Could provide more information to workers and service recipients
- Could streamline decision making on classifications
- Could reduce confusion and unintentional misclassification

Perceived drawbacks:
- IRS would incur costs to develop system
- Still relies on subjective weighting of evidence and is likely to produce inconsistent determinations
- Not all workers have access to computers
- Could be manipulated by employers to attain a desired classification

10. Increase the use of IRS notices to service recipients in industries with a potentially high incidence of misclassification to educate them about the classification rules and ask them to review their classification practices

Perceived benefits:
- Could improve understanding of correct classification

Perceived drawbacks:
- IRS would incur costs to develop and mail notices
- Could be ineffective if not combined with IRS enforcement
- Could expose employers to more litigation
- Could create adversarial relationships between employers and workers
- Could be unfair to targeted industries

D. Withhold taxes for independent contractors

11. Require service recipients to withhold taxes, with rates at an adequate level to induce compliance, for independent contractors whose taxpayer identification numbers (TIN) cannot be verified or if notified by IRS during the TIN verification process that the contractors are not fully tax compliant

Perceived benefits:
- Could help improve voluntary filing and tax compliance

Perceived drawbacks:
- Would impose costs and burdens on employers
- Does not hold employers financially accountable for misclassification
- TIN verification is not effective
- Could result in withholding errors

12. Require universal tax withholding for payments made to independent contractors using tax rates that are relatively low (e.g., 1 percent to 5 percent of payment amounts)

In addition to the contacts named above, Revae Moran, Acting Director; Tom Short, Assistant Director; Amy Sweet, Analyst-in-Charge; Jeff Arkin, Analyst-in-Charge; Susan Bernstein; Jessica Bryant-Bertail; Scott Charlton; Doreen Feldman; Jennifer Gravelle; Maura Hardy; David Perkins; Ellen Phelps Ranen; Albert Sim; Andrew J. Stephens; and Gregory Wilmoth made key contributions to this report.
When employers improperly classify workers as independent contractors instead of employees, those workers do not receive protections and benefits to which they are entitled, and the employers may fail to pay some taxes they would otherwise be required to pay. The Department of Labor (DOL) and Internal Revenue Service (IRS) are to ensure that employers comply with several labor and tax laws related to worker classification. GAO was asked to examine the extent of misclassification; actions DOL and IRS have taken to address misclassification, including the extent to which they collaborate with each other, states, and other agencies; and options that could help address misclassification. To meet its objectives, GAO reviewed DOL, IRS, and other studies on misclassification and DOL and IRS policies and activities related to classification; interviewed officials from these agencies as well as other stakeholders; analyzed data from DOL investigations involving misclassification; and surveyed states. The national extent of employee misclassification is unknown; however, earlier and more recent, though not as comprehensive, studies suggest that it could be a significant problem with adverse consequences. For example, for tax year 1984, IRS estimated that U.S. employers misclassified a total of 3.4 million employees, resulting in an estimated revenue loss of $1.6 billion (in 1984 dollars). DOL commissioned a study in 2000 that found that 10 percent to 30 percent of firms audited in 9 states misclassified at least some employees. Although employee misclassification itself is not a violation of law, it is often associated with labor and tax law violations. DOL's detection of misclassification generally results from its investigations of alleged violations of federal labor law, particularly complaints involving nonpayment of overtime or minimum wages. 
Although outreach to workers could help reduce the incidence of misclassification, DOL's work in this area is limited, and the agency rarely uses penalties in cases of misclassification. IRS enforces worker classification compliance primarily through examinations of employers but also offers settlements through which eligible employers under examination can reduce taxes they might owe if they maintain proper classification of their workers in the future. IRS provides general information on classification through its publications and fact sheets available on its Web site and targets outreach efforts to tax and payroll professionals, but generally not to workers. IRS faces challenges with these compliance efforts because of resource constraints and limits that the tax law places on IRS's classification enforcement and education activities. DOL and IRS typically do not exchange the information they collect on misclassification, in part because of certain restrictions in the tax code on IRS's ability to share tax information with federal agencies. Also, DOL agencies do not share information internally on misclassification. Few states collaborate with DOL to address misclassification. However, IRS and 34 states share information on misclassification-related audits, as permitted under the tax code. Generally, IRS and states have found collaboration to be helpful, although some states believe information-sharing practices could be improved. Some states have reported successful collaboration among their own agencies, including through task forces or joint interagency initiatives to detect misclassification. Although these initiatives are relatively recent, state officials told us that they have been effective in uncovering misclassification. GAO identified various options that could help address the misclassification of employees as independent contractors. 
Stakeholders GAO surveyed, including labor and employer groups, did not unanimously support or oppose any of these options. However, some options received more support, including enhancing coordination between federal and state agencies, expanding outreach to workers on classification, and allowing employers to voluntarily enter IRS's settlement program.
The Olympic Games take place every 4 years, with the Winter and Summer Games alternating on a 2-year cycle. The Ted Stevens Olympic and Amateur Sports Act (Amateur Sports Act), 36 U.S.C. § 220501 et seq., which was originally enacted in 1978 as the Amateur Sports Act, gives the U.S. Olympic Committee (USOC) exclusive jurisdiction over all matters pertaining to the participation of the United States in the Olympic Games, including the representation of the United States in such Games and the organization of the Games when held in the United States. The Amateur Sports Act was amended in 1998 to incorporate the Paralympic Games under the umbrella of USOC. Although organized separately, the Paralympic Games were held in conjunction with the Olympic Games in the United States for the first time at the 1996 Summer Games. On May 27, 1997, the Salt Lake City Organizing Committee (SLOC) was awarded the rights to host the 2002 Winter Paralympic Games. Lake Placid, NY, a small village with a population of 3,500 at the time, served as the host city for the Winter Olympic Games in 1980. According to information compiled by a USOC official, 1,072 athletes from 37 countries participated in 38 skiing, skating, and sledding events at 6 venue locations for an audience of 517,000 people. At that time, the Paralympic Games were not held in conjunction with the Olympic Games. According to a Department of Commerce report following the Games, and as shown in figure 1, about $363 million was spent on planning and staging these Games. The Lake Placid Organizing Committee (LPOC) funded about $121 million (33 percent) of the total cost. The State of New York provided about $63 million (17 percent) for building and constructing venues, such as the alpine, cross-country, and biathlon skiing facilities in the Lake Placid area. In addition, the federal government provided about $179 million (50 percent) in funding and support. 
According to the Department of Commerce report, of the $179 million in federal funding and support, Congress specifically designated about $96 million for Olympic-related projects, and the remaining approximately $83 million was approved and provided through the normal funding procedures of the departments of Defense, Transportation, Commerce, Energy, the Interior, and Justice. Collectively, the departments of Commerce and the Interior provided about $102 million (57 percent) of the total federal expenditures. As shown in figure 2, about $127 million (71 percent) of the federal funding and support for the Games was used for venue construction (37 percent) and housing for the athletes (34 percent). Specifically, the federal government helped finance the ski jumps, speed skating oval, skating arena, winter sports arena, luge run, parking facilities, dressing rooms, and storage facilities. The federal government also provided housing and infrastructure support projects for the athletes, trainers, and coaches, such as the temporary and permanent buildings erected on a 55-acre site near Lake Placid used to house the athletes participating in the Games. These facilities were also used for security operations. The remaining direct federal funding and support were used for safety- and security-related activities, which accounted for about $23 million (13 percent); transportation projects, such as highway, airport, and railway improvements, which accounted for about $16 million (9 percent); and staging-and-operations activities during the Olympic events, which accounted for about $13 million (7 percent). Appendix II lists the specific federally sponsored Olympic-related projects and activities and the amounts of federal funding and support for each. In February 2002, according to SLOC officials, Salt Lake City will become the largest city to host the Winter Olympic Games and will also become the host of the largest Winter Olympic Games held to date. 
SLOC officials expect that this city, with a population of approximately 1.5 million people, will host 3,500 Olympic athletes participating in 70 sporting events at 10 venues. Additionally, SLOC officials expect 1,100 Paralympic athletes to participate in 34 sporting events at 10 venues. As of July 31, 2001, the total direct cost for projects and activities related to planning and staging the 2002 Winter Olympic Games in Salt Lake City is estimated at $1.9 billion. As shown in figure 3, SLOC plans to fund about $1.3 billion (70 percent). Additionally, Utah state officials working with SLOC report that Utah state agencies and institutions are planning to provide about $150 million (8 percent) and the Salt Lake City local government is planning to provide about $75 million (4 percent) for such projects as roads and bus systems directly related to supporting the Games. Finally, of the $1.9 billion, it is estimated that the federal government will provide approximately $342 million (18 percent) of the total direct cost for planning and staging the Games. Specifically, 18 federal agencies reported that they have provided or plan to provide an estimated $342 million in funding and support for projects and activities directly related to the planned 2002 Games. Of the $342 million in federal funding and support provided or planned for the 2002 Games, Congress had specifically designated about $208 million (61 percent) for specific Olympic-related projects and activities. About $134 million (39 percent) was approved by the agencies and provided through their normal funding procedures. As shown in figure 4, the federal government’s involvement includes safety- and security-related activities, transportation, housing and infrastructure support, venue building and construction, and staging operations during the Games. 
In total, not including additional security costs that may be incurred as a result of the terrorist attacks of September 11, 2001, the federal government plans to spend about $185 million on safety- and security-related activities. Such activities range from venue perimeter security projects and activities during the Games themselves to heightened security-related activities of individual agencies necessitated by the Games. For example, the General Services Administration (GSA) plans to spend about $1.6 million to protect its facilities during the Games. These are funds that GSA would not have had to spend were it not for the Olympic Games. The next largest amount of federal funding and support is about $106 million for transportation projects. The Department of Transportation plans to spend this amount in part to provide a temporary spectator transportation system. This system will consist of Salt Lake City transit buses and drivers, transit buses and drivers borrowed from other cities across the United States, bus maintenance, the construction and operation of park-and-ride lots, and loading and unloading facilities. In addition, the Department of Transportation is planning to provide an additional $25 million, of the total $27 million allocated for venue construction, to support the building and construction of access roads to certain venues for the Games. An estimated $19 million in federal funds is also being provided to support staging-and-operations activities during the Games. The Department of Housing and Urban Development is providing an estimated $4 million for Salt Lake City redevelopment projects and temporary housing for the athletes participating in the Games. Appendix III lists the specific federally sponsored Olympic-related projects and activities and the amounts of federal funding and support for each. Los Angeles, then a metropolis of more than 11 million people, hosted the 1984 Olympic Games. 
According to information compiled by a USOC official, about 7,078 athletes from 140 nations participated in 221 sporting events at 27 venues for an audience of an estimated 8 million visitors to the Olympic Games. At that time, the Paralympic Games were not held along with the Olympic Games. As shown in figure 5, the reported total direct cost to plan and stage the 1984 Games was approximately $707 million. Of this amount, LAOOC reported providing about $629 million (89 percent) of the total direct cost for the Games. The remaining approximately $78 million of the total direct cost for planning and staging the Games, as we reported in September 2000, was provided by the federal government through the departments of Agriculture, Commerce, Defense, Justice, State, Transportation, Health and Human Services, the Treasury, and Veterans Affairs, as well as the Federal Communications Commission and the U.S. Information Agency. Although data on California and Los Angeles government funding and support for the 1984 Games were not available, according to the former LAOOC Vice-President for Government Relations for the 1984 Games, state and local funding was minimal. According to this official, Los Angeles voters passed a charter amendment in November 1978 prohibiting any capital expenditure by the city on the Olympic Games that would not, by binding commitment, be reimbursed. As noted in our September 2000 report, Los Angeles city officials believed that the host cities for Olympic Games held before 1984 often overextended themselves by trying to complete state-of-the-art Olympic venues and related capital improvement projects. Such actions, in their view, pushed those host cities into debt that remained long after the Games. 
As a result, city officials decided that they (1) would not undertake any new construction or capital improvements specifically for the Olympic Games and (2) would encourage spectators to use the transit or bus systems in place at the time or simply to drive their cars to the events. Figure 5 also shows that the approximately $78 million in federal funding and support represented about 11 percent of the total cost for projects and activities related to the Games. As shown in figure 6, about $74 million of the federal expenditures was used to support safety- and security-related activities for the Games. The remaining $4 million was used for staging-and-operations activities during the events. Of the $78 million total, Congress specifically designated about $76 million for mostly security-related projects and activities, and $2 million (3 percent) was approved by the federal agencies and provided through their normal funding procedures. Appendix IV lists the specific federally sponsored Olympic-related projects and activities and the amounts of federal funding and support for each. Atlanta, GA, is a large metropolitan area that had a population of more than 3.4 million in 1996 when it served as the host city for the Summer Olympic Games. According to information compiled by a USOC official, about 10,332 Olympic athletes from 197 countries participated in 271 sporting events at 29 venues, for an audience estimated at 8.3 million people. Also, 3,310 Paralympic athletes from 104 countries participated in sporting events at 16 venues. As shown in figure 7, the total direct cost for planning and staging these Games was about $2.4 billion. According to the information compiled by SLOC officials, of the $2.4 billion, the Atlanta Committee for Olympic Games (ACOG) and the Atlanta Paralympic Organizing Committee (APOC) contributed nearly $2 billion (82 percent) for the 1996 Games. 
ACOG- and APOC-funded projects and activities included transportation, safety and security, Paralympic operations, temporary and permanent facilities, and telecommunications. According to information from SLOC, local governments where the various venues were located during the Games contributed about $234 million (10 percent), which was used to help construct some of the facilities used to support the Games. The federal government’s share of the total cost to plan and stage the event, as we reported in our September 2000 report, was about $193 million, or 8 percent of the approximately $2.4 billion in total direct costs. Of the approximately $193 million provided by the federal government, $86 million (45 percent) was specifically designated by Congress for Olympic-related projects and activities and $106 million (55 percent) was approved by the agencies and provided through their normal funding procedures. Similar to previous Olympic Games, ensuring adequate safety and security was a primary concern of federal officials at the Games in Atlanta. As shown in figure 8, safety- and security-related projects related to the Games represented about $101 million (52 percent) of the federal government’s total direct cost. The federal agencies providing safety- and security-related funding and support included the departments of Agriculture, Defense, Health and Human Services, the Interior, Justice, State, Transportation, the Treasury, and Veterans Affairs. Funding and support were also provided by the Corporation for National and Community Service, the Federal Emergency Management Agency, the Federal Executive Board, and the Environmental Protection Agency. About $68 million (36 percent) of the $193 million in federal expenditures was used for venue construction and staging operations during the Olympic events. 
For example, approximately $18 million was used to construct the Whitewater Rapids Venue, and approximately $5 million was used for the pre-trial and Olympic Whitewater Rapids events operations during the Games. Transportation represented about $21 million (11 percent) of the federal funds expended on the Games, housing and infrastructure projects represented $2 million (1 percent), and venue construction represented about $36 million (19 percent). We provided copies of a draft of this report to the heads of OMB, SLOC, and USOC and to former officials of LAOOC for their review and comments. Additionally, for their review and comments, we provided to each of the federal agencies listed in Appendix III copies of a draft of their reported figures regarding (1) the amount of federal funding and support and (2) the applicable projects and activities for the planned 2002 Olympic and Paralympic Games at Salt Lake City, UT. We received oral comments from agency-designated officials or audit liaisons at OMB and most of the federal agencies, and from the former LAOOC Vice-President for Government Relations. Generally these officials had no comments, or they provided technical changes—to correct the reported amounts of federal funding and support provided for the Olympic Games, or to improve clarity—which were made where appropriate. We also received written comments from the president and chief executive officer of SLOC, which generally agreed with our report. Briefly, he stated that our report accurately reflected the growth of the Olympic and Paralympic Games during the past 20 years and pointed out that the increase in the federal government’s share of the cost occurred in traditional areas of government functions, security and transportation, while federal expenditure on nongovernment functions, such as venue construction, had significantly decreased. 
He also cited two significant factors, outside the scope of our work, that contributed to the growth in cost: the technological advances in measuring and in broadcasting the results of the competitions. Finally, he explained that one of his top priorities is to help reverse the trend of the Games to be “bigger and better” than those before, and that he plans to make a series of recommendations to the International Olympic Committee president on reducing the scope and controlling the growth in cost for future Olympic Games. Unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 15 days from the date of this report. We will then send copies of this report to Senator Robert C. Byrd, Chairman of the Senate Committee on Appropriations; Ted Stevens, Ranking Minority Member, Senate Committee on Appropriations; and C. W. Bill Young, Chairman, and David R. Obey, Ranking Minority Member, of the House Committee on Appropriations. We are also sending copies to Senators Fritz Hollings, Chairman, and John McCain, Ranking Minority Member, of the Senate Committee on Commerce, Science and Transportation; and Representatives W. J. Billy Tauzin, Chairman, and John D. Dingell, Ranking Minority Member, of the House Committee on Energy and Commerce. We are also sending copies of this report to Senator Orrin Hatch and Representatives James V. Hansen, Jim Matheson, and Christopher Cannon, of Utah. Copies of this report will also be sent to the Director of OMB; the secretaries of the departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, the Interior, Housing and Urban Development, Labor, State, Transportation, the Treasury, and Veterans Affairs; and the Attorney General. 
We are also sending copies to the heads of the Environmental Protection Agency, Federal Communications Commission, Federal Emergency Management Agency, General Services Administration, and Tennessee Valley Authority; and to the Postmaster General. We will also make copies available to others upon request. Major contributors to this report included Sherrill Johnson, Michael Rives, Frederick Lyles, Melvin Horne, and Michael Yacura. If you have any questions, please contact me at (202) 512-8387 or ungarb@gao.gov. As discussed in this report, the objectives of this assignment were to determine: the total direct cost of planning and staging the Winter Olympic Games held in 1980 at Lake Placid, NY; and the Winter Olympic Games and Paralympic Games planned for 2002 at Salt Lake City, UT; the Summer Olympic Games held in 1984 at Los Angeles, CA; and the Summer Olympic Games and Paralympic Games held in 1996 at Atlanta, GA; the total direct government funding and support at the local, state, and federal levels, where available, for each of these Games; how the federal funding and support were used; a complete roster of all the reported projects and activities for each of the Games; and the amount of federal funds and support specifically designated by Congress for Olympic-related purposes, and the amount of federal funding and support approved by the agencies and provided through their normal funding procedures for each of these Games. To respond to the first objective, because there is no central source for the needed information, we obtained data on the costs incurred by the principal parties that funded and supported these events. These parties included (1) applicable Olympic Games Organizing Committees that are private organizations established by the host cities to plan and stage the Games; (2) state and local governments associated with the designated host cities for the Games; and (3) federal government agencies. 
Our primary sources of information included: Salt Lake City Organizing Committee (SLOC) officials, who are currently responsible for planning and staging the Games planned for 2002 at Salt Lake City, and Olympic organizing committee reports following the 1984 Olympics and the 1996 Olympic Games and Paralympic Games, which provided financial statements showing the total amount of the private-sector costs for each of the Games; Utah and California state and local government officials cognizant of their respective state and local governments’ funding and support for the Games held in Salt Lake City and in Los Angeles; a Department of Commerce report published in 1982 after the 1980 Winter Olympic Games; our previously published report regarding federal funding and support for the 1984 Olympics and the 1996 Summer Olympic and Paralympic Games; and OMB and 22 federal organizations, including the U.S. Postal Service, for the planned 2002 Winter Olympic and Paralympic Games at Salt Lake City. The federal agencies included the Department of Agriculture, Department of Commerce, Department of Defense, Department of Education, Department of Energy, Department of Health and Human Services, Department of Housing and Urban Development, Department of the Interior, Department of Justice, Department of Labor, Department of State, Department of Transportation, Department of the Treasury, Department of Veterans Affairs, Environmental Protection Agency, Federal Communications Commission, Federal Emergency Management Agency, Federal Executive Board, General Services Administration, National Aeronautics and Space Administration, Social Security Administration, and U.S. Postal Service. To respond to the second objective, we also contacted state and local government officials associated with the designated host city for the Games, and we made inquiries of the 22 federal organizations listed above. 
The effort to identify federal funding and support was aided considerably by OMB’s implementation of our past recommendation to require a consolidated reporting of federal agency funding and support for the Olympic Games. Specifically, the President’s 2002 Budget listed for the first time all federal Olympic spending in one table, which identified the federal agencies and the amounts spent or planned to be spent for the 2002 Games in Salt Lake City. We began with these listed agencies and obtained the necessary supporting information to verify or update their reported figures. We relied upon the agency officials’ reports of (1) funding and support, and (2) projects and activities directly related to planning and staging the Olympic Games or Paralympic Games. To respond to the third objective, we relied upon information we and other agencies previously reported pertaining to the amount of congressionally designated and agency-approved federal funding and support for the Games held in 1980 at Lake Placid, NY; in 1984 at Los Angeles, CA; and in 1996 at Atlanta, GA. We depended upon the agencies to update the amount of congressionally designated and agency-approved federal funding and support, and to report this information to us for the planned 2002 Games at Salt Lake City, UT. At each agency we obtained, to the extent possible, supporting information for (1) the agency’s reported federal funding and support, and (2) the agency’s identification and description of its Olympic-related projects and activities. The figures reported by the agencies for the planned 2002 Games at Salt Lake City, UT, included all funding and support as of July 31, 2001. We did not independently verify the data but relied upon each agency to make its own determination as to (1) the funding and support, and (2) the project or activity’s direct relationship to planning and staging the Olympic and Paralympic Games. 
We conducted our review from August 2001 to October 2001, in accordance with generally accepted government auditing standards. Totals may not add due to rounding.

Description of project or activity:
- Training exercises, travel, vehicle lease, utilities, etc.
- Assist with implementing a master safety plan
- Executive Office of U.S. Attorneys: salary and other costs for staff
- Grant to UCAN to upgrade security and communication
- Diplomatic security: Department of State will assist in providing protective security details to foreign dignitaries below the Head of State level, as well as establishing a diplomatic security presence in Salt Lake City.

Description of project or activity:
- Olympic Transportation Planning ($1.4 million of these funds were used for two temporary park-and-ride lots)
- Temporary RTRs: communications facilities in venue areas east of mountains
- Aviation Security Operations Center (joint operations with UOPSC, Customs, USSS)
- Provo ASR: temporary radar
- Automation upgrades at Salt Lake TRACON
- Telecommunications support: additional circuits required
- Temporary air traffic control towers at outlying airports
- Physical security upgrades to FAA facilities
- ALSF 2: Salt Lake Int'l Airport approach lighting system
- Olympic Aviation System Plan; grant to Wasatch Front Regional Council for development of Olympic Planning Study (airports)

Description of project or activity:
- Aviation systems and standards (flight procedures)
- Establish Olympic Transportation Working Group (OTWG) and complete several Venue Transportation Integration Plans (VTRIPS)
- Construct additional storage tracks at Light Rail Vehicle Storage Facility and purchase/install automatic electric switch on the North/South LRT Line at 100 S. Main Street.
- Construct Silver Creek Jct. park-and-ride; purchase venue load and unload equipment; and construct Silver Creek Jct. bus garage (supplemented by UT-0-0039)

Description of project or activity:
- Construct busway at Snowbasin, bus garage at Silver Creek Jct., and Olympic Park park-and-ride lot (supplemental to UT-03-0040)

Personnel costs are generally not included in these amounts. Totals may not add due to rounding. Congress appropriated $76,170,000, and DOD spent $48,750,000. The unused funding authority was returned to the U.S. Treasury.

Description of project or activity:
- Donated excess supplies for Paralympics
- Safety- and security-related services for Paralympic events
- Olympic venue bike path construction
- Paralympics: loan of EPA employees
- Salary for safety- and security-related services (federal employees)
Since 1980, the Winter and Summer Olympic and Paralympic Games hosted in the United States have increased in size and magnitude, as have the total direct costs to plan and stage them. The reported direct costs to plan and stage the games discussed in this report ranged from $363 million to more than $2.4 billion. Although the total dollar amount of federal funding and support has increased, the total federal share of the reported total direct costs to plan and stage the games has decreased. Since 1980, the amount of funding and support provided by state and local governments has increased. Generally, federal funding and support for the total direct costs of each of these games was either specifically designated by Congress or approved by the federal agencies.
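The cost-share trend summarized above can be sanity-checked with a short calculation. The sketch below uses the total direct costs and federal amounts reported in the preceding sections (rounded, in millions of dollars) and recomputes the federal share for each Games rather than quoting the report's percentages.

```python
# Total direct costs and federal funding for the four U.S.-hosted Games
# discussed in the report (rounded figures, millions of dollars).
games = {
    "Lake Placid 1980 (Winter)": {"total": 363, "federal": 179},
    "Los Angeles 1984 (Summer)": {"total": 707, "federal": 78},
    "Atlanta 1996 (Summer)": {"total": 2400, "federal": 193},
    "Salt Lake City 2002 (Winter)": {"total": 1900, "federal": 342},
}

for name, g in games.items():
    # Federal share of the total direct cost, as a rounded percentage.
    share = 100 * g["federal"] / g["total"]
    print(f"{name}: federal share {share:.0f}% of ${g['total']}M total")
```

The recomputed shares (about 49, 11, 8, and 18 percent) match the report's rounded figures and illustrate the summary's point: federal dollars grew from $179 million at Lake Placid to $342 million at Salt Lake City, while the federal share of total direct costs fell from roughly half to under a fifth.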
A severe drought in many Western states set the stage for an early and intense fire season. By mid-June, several major fires were burning, including the Rodeo-Chediski Fire in Arizona and the Hayman Fire in Colorado. These fires siphoned both aerial and ground firefighting resources from the Pacific Northwest, including helicopters, air tankers, agency and contract fire engines, smoke jumpers, highly trained agency crews (called “hot shot” crews), and contract firefighting crews. By June 21, the National Interagency Fire Center (NIFC) in Boise, Idaho, was reporting a preparedness level of 5, the highest level, indicating that the nation had the potential to exhaust all agency firefighting resources. When lightning storms passed through California and Oregon on July 12 and 13, igniting hundreds of fires, including the Biscuit Fire, more than 30 large fires were already burning across the nation and firefighting resources available for initial attack were limited. The Biscuit Fire began as five separate fires in the Siskiyou National Forest in southwest Oregon. The Siskiyou Forest, encompassing more than 1 million acres, contains diverse topography, including the Siskiyou Mountains, the Klamath Mountains, the Coast Ranges, and the 180,000-acre Kalmiopsis Wilderness. Steep terrain, together with many roadless areas, presented accessibility and logistical challenges for managers directing fire suppression efforts at the Biscuit Fire. To complicate the situation, the fires were also located almost 30 miles apart. As the fires rapidly grew during late July and early August, the southern fire burned south and crossed the state border into the Six Rivers National Forest in Northern California. While the Biscuit Fire burned primarily federal forestland, by early August, it threatened a number of communities in Oregon and California. Figure 2 shows Biscuit 1 burning on a steep hill on July 14, 2002. 
To understand the response to the Biscuit Fire, it is important to understand the phases of fire suppression efforts and the nature of interagency wildland firefighting. On a large wildland fire, such as the Biscuit Fire, fire suppression efforts generally fall into two phases. The initial attack phase is defined as efforts to control a fire during the first operational period, usually within 24 hours. Local fire managers direct these initial firefighting efforts. In fiscal year 2002, firefighters were successful in suppressing about 99 percent of wildland fires in federal, state, and local jurisdictions during the initial attack phase. If a fire has not been contained or will not be contained during this period or additional firefighting resources are ordered, firefighting efforts move into the extended attack phase. In this phase, key fire management officials prepare a Wildland Fire Situation Analysis that describes the situation and objectives, and compares multiple strategic wildland fire management alternatives. Additional management and firefighting resources may be requested. Figure 3 shows an example of a firefighting organization involved in an extended attack, although the specific positions filled depend on the complexity of the fire. The Forest Service and its interagency firefighting partners employ an incident management system that is designed to provide the appropriate management and leadership team capabilities for firefighting efforts. The complexity of the fire determines the type of leadership team and firefighting resources assigned. There are five types of incidents—type 1 being the most complex (see table 1). For example, to manage a type 5 incident, the incident commander may be a local district employee with adequate experience to direct initial attack efforts on a small fire with two to six firefighters. 
In contrast, for a type 1 incident, such as the Biscuit Fire, the incident commander is just one member of a highly qualified management team. While both type 1 and type 2 incident management teams have a standard composition of 28 members, type 1 team members receive additional training and experience in handling the most complex incidents. Incident management teams manage a variety of firefighting resources. These include highly trained “hot shot” crews, agency and contracted crews, air tankers, helicopters, fire engines, and bulldozers. Federal agencies, such as the Forest Service, provide a large number of the personnel that work on fires. These federal agencies rely on a “militia” strategy to fight wildland fires whereby personnel within each agency are trained to serve in fire suppression or support roles, when needed and requested, in addition to performing their normal day-to-day work responsibilities. However, many factors, including past downsizing within the federal government, have reduced the pool of employees qualified to work on fires. Increasingly, private contractors provide crews and firefighting equipment, including engines and helicopters. National policies and procedures were in place and provided the framework to guide personnel in the local dispatch center in Grants Pass, Oregon, who were responsible for acquiring firefighting resources for the Biscuit Fire. Guided by these policies and procedures, dispatch centers use a three-tiered dispatching system—local, regional, and national—to locate and send resources to wildland fires. During the initial attack phase of a fire, these policies also permit dispatch centers to contact neighboring dispatch centers directly for resources, including resources in adjacent regions, before elevating resource requests to a higher level. 
For the Biscuit Fire, the Grants Pass dispatch center did not have sufficient resources available and took steps to locate needed resources to fight what began as five separate fires in the Siskiyou National Forest. Grants Pass dispatchers contacted their regional dispatch center in Portland about the availability of resources, including helicopters, on the first day of the Biscuit Fire. In making resource inquiries, Grants Pass personnel did not request resources from the Fortuna dispatch center, a neighboring center located in the adjoining dispatch region in Northern California. Grants Pass personnel believed, based on daily fire situation reports, that Fortuna had no available resources because Northern California was also fighting numerous fires. Concerns were later expressed by state and local officials in California that a helicopter, under the control of the Fortuna dispatch center, was fighting fires in Northern California, just across the border from the first of the five Biscuit fires, and could have been provided to fight it. Forest Service and state dispatchers working in the Fortuna dispatch center expressed differing viewpoints on whether they could have provided a helicopter for the Biscuit Fire, had Grants Pass requested it. The National Interagency Mobilization Guide includes policies and procedures to help ensure the timely and cost-effective mobilization of firefighting resources. Federal, state, tribal, and local firefighting agencies share their firefighting personnel, equipment, and supplies, following a standardized process to coordinate responses to fires and mobilize resources. When local dispatch center personnel are notified of a fire, they send available firefighting resources based on a preplanned response. If fire managers need additional resources, they send a request to the local dispatch center identifying the type and amount of resources needed. 
If the dispatch center personnel cannot fill a request locally, they can forward the request to the responsible regional dispatch center. If the regional center cannot fill the request with resources from within the region, the request is sent to the National Interagency Coordination Center in Boise, Idaho, the primary support center for coordinating and mobilizing wildland firefighting resources nationally. When requests exceed available resources, the fires are prioritized, with those threatening lives and property receiving higher priority for resources. To facilitate the swift suppression of new fires—called the “initial attack” phase of a fire—local dispatch center personnel can first contact neighboring dispatch centers directly, including those in adjacent regions, before elevating resource requests to the regional or national level. For resource sharing between neighboring dispatch centers in adjoining regions, a formalized agreement, such as a mutual aid agreement and local operating plan, is needed. Existing policies and procedures encourage the sharing of resources between local dispatch centers. The national guidance states that local dispatch centers should use mutual aid agreements whenever possible to obtain resources directly from neighboring units. In the case of the Biscuit Fire, a regional mutual aid agreement between the state of California and federal agencies in California, Nevada, and Oregon establishes the protocols for interagency coordination and cooperation for wildland fire protection in California, which includes the areas along the Nevada and Oregon borders. Local, state, and federal agencies jointly develop local operating plans that identify the specific resources that can be shared under the mutual aid agreement and the provisions for cost sharing. 
One of these plans allows the Grants Pass dispatch center in Oregon and the Fortuna dispatch center, located in the neighboring region in Northern California, to request resources directly from each other. (See fig. 4.) The Grants Pass dispatch center operates under a contract between the Siskiyou National Forest and the Oregon Department of Forestry (ODF). ODF operates and staffs the center, and the Forest Service reimburses the department for a portion of the center’s operating costs, according to a Siskiyou official. The Fortuna dispatch center is operated by the Six Rivers National Forest and the California Department of Forestry and Fire Protection (CDF) and is staffed by personnel from both agencies. When the first two fires were found on the afternoon of July 13, 2002, the Grants Pass dispatch center did not have the firefighting resources needed locally to fight the fires. Many resources, including the helicopter normally stationed at Grants Pass, had been sent to other higher priority fires that were threatening lives and property. The fires, located in the Siskiyou National Forest, were initially small—two trees and 1 acre. Biscuit 1 was a few miles north of the California-Oregon border, and the Carter Fire was about 12 miles north of Biscuit 1. Figure 5 provides information about the initial attack on the fires. Biscuit 1 was the first fire found. At 3:17 p.m. on July 13, a Siskiyou Forest Service aircraft being used by ODF personnel to perform reconnaissance spotted Biscuit 1. The aerial observer reported the fire to Grants Pass dispatch. At 3:53 p.m., air reconnaissance spotted the Carter Fire 12 miles north of Biscuit 1. Soon after Grants Pass and Siskiyou officials became aware of the first fire, firefighting personnel in California also spotted the fires. At 3:51 p.m., a CDF reconnaissance airplane, assisting the Six Rivers National Forest, spotted smoke columns to the north while circling a fire in Northern California. 
The airplane was directing the activity of a CDF helicopter and crew of six firefighters assigned to a fire in the Six Rivers National Forest in Northern California, just south of the California-Oregon border. At the request of a Six Rivers National Forest official, the CDF airplane flew north to investigate the smoke in Oregon. Reconnaissance personnel reported Biscuit 1 and the Carter Fire to the Six Rivers official and to the Fortuna dispatch center—an interagency center staffed by CDF and Six Rivers National Forest personnel. Since the helicopter and crew were close to finishing their assignment in California, the CDF reconnaissance personnel suggested to Fortuna dispatch that the helicopter and crew could next take action on the fire in Oregon. The Forest Service dispatcher at Fortuna instructed the reconnaissance airplane to continue patrolling while, in accordance with normal dispatching protocol, Fortuna notified the Grants Pass dispatch center about the fire. Grants Pass told Fortuna that it already had reconnaissance aircraft in the area. Because of the lack of communications between the CDF and Oregon aircraft, the Fortuna dispatch center advised the CDF airplane to leave the area to ensure air safety. At 4:15 p.m., CDF air reconnaissance reported another three lightning-caused fires in Northern California, and at 4:36 p.m., the CDF helicopter and crew were sent to fight these fires. Figure 6 shows the new fires found in southern Oregon and Northern California on July 13 and 14. At the request of Siskiyou National Forest officials, Grants Pass dispatch personnel began to try to locate needed firefighting resources. At 4:30 p.m., Grants Pass dispatch personnel requested a helicopter (with a bucket for water drops) from the dispatch center in Portland, Oregon. Shortly after 5 p.m., Siskiyou officials asked the Grants Pass dispatch center to check on the availability of smoke jumpers, rappellers, helicopters, and air tankers. 
Dispatchers checked with the regional dispatch center in Portland and were told that no helicopters or air tankers were available. Dispatchers contacted the Central Oregon dispatch center and were told that no smoke jumpers or rappellers were available for the Biscuit Fire for 48 to 72 hours because of higher priority fires elsewhere. Grants Pass personnel relayed this information to Siskiyou officials. By the next morning, July 14, the fires had grown. Shortly after 10:10 a.m., Siskiyou Forest officials directing firefighting efforts departed on a reconnaissance flight. They flew over the Carter Fire and decided to staff this fire as soon as possible because of its proximity to a trail that would allow access to the fire and because there were natural safety zones for firefighters. A type 2 crew began to hike to the Carter Fire later that afternoon. Siskiyou officials next flew over Biscuit 1 and found it was about 7 acres. They also spotted a third fire, named Biscuit 2, which was about 20 acres and located about one-half mile from Biscuit 1. Siskiyou Forest officials believed that the Biscuit 1 and 2 fires would burn together in the afternoon and had a high probability of getting significantly larger. Due to safety concerns, limited accessibility, wind and fire behavior, and insufficient firefighting resources, forest officials decided not to staff Biscuit 1 and Biscuit 2 at that time. Siskiyou officials requested that Grants Pass dispatch personnel order additional firefighting resources, including a type 2 incident management team, air tankers, and other equipment for the fires. However, due to higher priority fires elsewhere, many of these resource orders could not be filled by the regional dispatch center in Portland for several days or longer, and the request for air tankers was never filled. Shortly before noon, a CDF helicopter and crew were on duty in Northern California performing reconnaissance and responding to reported fires. 
A Six Rivers Forest official helping to direct the helicopter crew's activities requested that the crew check the fire in Northern California they had worked the previous day to ensure it was out. She also requested that the crew, on the way, fly by a campground near the Biscuit 1 and Biscuit 2 fires to ensure no campers were there. None were seen. The helicopter then landed near the site of the fire they had worked the previous day in Northern California, and the crew hiked to the fire to ensure it was extinguished. At 2:17 p.m., the CDF helicopter reported the fire was cold, and the helicopter was assigned to another incident in California. At 6:40 p.m., in response to the July 13 request for a helicopter, a regional dispatch official in Portland, working with officials in the Northern California regional dispatch center in Redding, the Fortuna dispatch center, and the Grants Pass dispatch center, arranged for a CDF helicopter with a water bucket to respond to the Biscuit fires, as allowed under the provisions of the mutual aid agreement. The helicopter had been working on fires in Northern California. However, a few minutes later, as the helicopter was en route to Oregon, Siskiyou officials canceled the request because the fires had by then spread to more than 300 acres, and officials stated that the helicopter would have been of limited use on a fire that size. Officials explained that without ground resources available to fight the fire, water drops alone are usually of limited value. On July 15, the last two fires that would make up the Biscuit Fire—Sourdough and Florence—were discovered. The Sourdough Fire was found near Biscuit 1 and Biscuit 2. The Florence Fire was located almost 30 miles north of these fires. Siskiyou officials requested that the Grants Pass dispatch center order numerous resources on July 15, including helicopters, engines, and crews. Most of these requests could not be filled for several days or longer. 
By July 16, the Northwest’s Multi-Agency Coordination Group in Portland, Oregon, which is responsible for prioritizing fires and allocating firefighting resources in the region, ranked the Biscuit Fires as priority 12 out of 18 large fires in the region. The Florence Fire went on to burn almost 250,000 acres before merging with the other fires on August 7. Concerns were later expressed by state and local officials in California that a CDF helicopter, fighting fires in Six Rivers National Forest on July 13, just across the state border from Biscuit 1, could have been provided earlier to assist on the Biscuit Fire. Grants Pass personnel explained that they did not request assistance from the Fortuna dispatch center on July 13 because, based on the daily fire situation reports, they believed no resources would be available due to the fires in Northern California. California was also fighting numerous fires ignited by the same lightning storm that passed through Oregon. When we asked the Fortuna dispatch center about this issue, the Forest Service and state of California dispatchers working there expressed differing viewpoints on whether they could have provided a helicopter on the first day of the Biscuit Fire if such a request had been made. A CDF dispatcher working at the Fortuna dispatch center said that if the Grants Pass dispatcher had requested the helicopter at that time to launch an initial attack on the Biscuit Fire, he believed he would have provided it to them. However, a Forest Service official also working at Fortuna to dispatch firefighting resources had a differing opinion, saying that even if Fortuna had sent the helicopter to Oregon, he believes that it likely would have been diverted back to California to suppress other higher priority fires in Fortuna’s direct protection area. 
Because Grants Pass dispatch did not request assistance from Fortuna on the first day of the Biscuit Fire, there was no discussion at that time about whether this would have been the best use of the helicopter. In the final analysis, it is unclear what the outcome of such a request would have been. Following the initial attack on the Biscuit Fire, delays in obtaining needed personnel hampered firefighting efforts in three key ways. First, neither a management team with adequate experience to strategically plan and manage firefighting efforts nor sufficient highly trained crews to carry out the plans were initially available for the Biscuit Fire because they were needed on higher priority fires. By the time a highly experienced management team became available and was assigned to the Biscuit Fire in late July, the fire had increased from a few hundred acres to almost 200,000 acres. Second, key supervisors needed to direct the tactical efforts of firefighting crews and equipment were unavailable at critical points in July and August as the fire was growing in size and intensity. As a result, the ability to implement aggressive fire suppression tactics was compromised by concerns about the safety of fire crews. Finally, some fire support positions, such as contracting technical representatives, which play a key role in monitoring contracted crews, also went unfilled. Siskiyou Forest officials directing the firefighting effort had difficulty obtaining both a highly trained incident management team with the necessary level of experience to plan and direct firefighting efforts on the Biscuit Fire and the crews needed to carry out such plans. Within one day after the initial fires were discovered, Siskiyou district fire managers determined that the fire would likely grow larger and require a more experienced incident management team than was then available at the forest to direct the firefighting effort. 
They requested a type 2 team and numerous type 1 crews and other firefighting resources during the initial days. However, because higher priority fires were burning in both the Pacific Northwest and other Western states, no type 2 incident management teams or highly experienced crews were immediately available for assignment to the Biscuit Fire. Siskiyou officials' request for a team was not filled for 7 days, by which time the acres burned had grown from about 700 to more than 5,000. When a type 2 management team assumed command of the Biscuit Fire on July 21, the team quickly realized that the fire had grown beyond the scope of a type 2 team and that a more experienced type 1 team was needed to handle the increasingly complex situation. A type 1 team was ordered on July 22, but on that day the Northwest's Multi-Agency Coordination Group in Portland ranked the Biscuit Fire as priority 6 of 15 fires burning in the region, largely because the Biscuit fires were not threatening lives and property. As a result, most requests for management, crews, and equipment for the Biscuit Fire went unfilled. In the case of the management team, rather than assigning the type 1 team requested, two other type 2 management teams were assigned in late July to assist in managing firefighting efforts on the southern fire, including the portion that had crossed into California. As these teams tried to direct the firefighting efforts of crews and equipment, the fires—especially the Florence Fire in the north—grew rapidly. Winds and low humidity contributed to the fires' intensity. Between July 27 and August 1, the Florence Fire grew from 18,000 acres to 164,000 acres, and the southern fire—the Sour Biscuit Fire—grew from 7,000 acres to 38,000 acres. Finally, on July 31, 9 days after first requested, two type 1 management teams arrived and assumed command of fighting the Biscuit Fire. 
Type 1 firefighting crews and many other resources continued to be listed as critical resource needs throughout August. Figure 7 shows the dates management teams were requested or assumed command, the personnel and equipment assigned to the fire, and the growth of the fires. The first type 2 incident commander assigned to the Biscuit Fire said that not having a type 1 management team and other needed resources slowed the progress of the firefighting effort. He said that while he understood the Portland group’s decision not to assign a type 1 team at that time, it nonetheless was difficult to effectively fight fires located almost 30 miles apart with the limited resources available. The available management and other resources were split between the fires. He added that the type 2 team’s firefighting strategies were the same as those of a type 1 team— initially to improve access to the site of the fires. However, without needed resources, tactics had to be modified, and progress was slower. His operations section chief said that the complexity of the fire, coupled with the lack of a type 1 management team, type 1 crews, and equipment, meant that firefighters could not attack the fire directly and had to use more indirect methods, such as using bulldozers to build a firebreak, in an effort to slow the spread of the fire. The incident commander and operations section chief said that generally, when a fire is a high priority and qualifies for a type 1 management team, it is also more likely to be assigned other needed firefighting resources. In the case of the Biscuit Fire, a type 1 team and additional type 1 crews and other resources might have improved the chances of stopping the fire before it burned southeast to an area called Oak Flat, according to the incident commander. However, Forest Service headquarters officials said that in a severe fire season such as 2002, difficult decisions have to be made about where to assign limited resources. 
Fires are prioritized, and those presenting the greatest threat to life and property receive firefighting resources first. Even as top management teams and increasing numbers of crews and equipment were assigned to the Biscuit Fire, some critical supervisory positions were not filled as quickly as needed or remained unfilled at the end of the fire. In all, over 200 requests for supervisory positions were never filled. The primary cause of the reduced availability of supervisory personnel was the demand for these staff on more than 30 other large fires throughout the nation, including other fires in Oregon. Fire managers and a Forest Service review of the Biscuit Fire stated that delays in obtaining needed supervisors affected their ability to implement aggressive fire suppression tactics or use equipment until sufficient supervision became available. Federal officials, however, did take some action to mitigate these problems, including obtaining personnel from Australia and New Zealand to fill certain supervisory positions. Officials emphasized that the difficulty in obtaining personnel to serve as supervisors was not unique to the Biscuit Fire and that such issues have affected numerous fires in recent years. Biscuit Fire managers identified a number of key supervisory positions that went unfilled for a period in July or August, when the fire was rapidly growing, and that were critical for effective fire suppression efforts. These included government managers of contracted helicopters and bulldozers (known as helicopter managers and dozer bosses); government supervisors directing tactical operations for a division or segment of crews (known as strike team leaders); and government supervisors (known as division supervisors) directing the activities of strike team leaders. 
Although it was not possible to measure the specific effect of unfilled supervisory positions on fire suppression efforts, such as the number of additional acres burned, Biscuit Fire managers provided a number of examples to illustrate the difficulty they faced in carrying out plans without sufficient supervisors for aircraft or for equipment and firefighting personnel. For example, an incident commander and an incident business advisor working at the fire said that some bulldozers sat idle for a few days and could not be used on fire suppression efforts because of the lack of a dozer boss to manage and direct the equipment's use. Interagency requirements state that one dozer boss is required to safely manage the operations of each dozer. However, dozers and dozer bosses are ordered separately and may arrive at a fire at different times. If a dozer arrives first, it may sit idle until a dozer boss is available to supervise its operation. According to an interagency Fire and Aviation Safety Team review, it was appropriate not to use all available resources, including dozers, if safety would have been compromised by insufficient supervision. In the case of helicopters, fire officials told us that for one or two days several helicopters may have sat idle due to insufficient helicopter managers. However, fire records indicate, and agency officials agreed, that the major reason helicopters did not fly was poor visibility as a result of weather or smoke. To minimize the impact of helicopter manager shortages, fire officials used a waiver system so that, under certain circumstances, one helicopter manager could manage two type 1 or type 2 helicopters rather than the one normally permitted by interagency policy. Using this waiver process, six waivers were granted for helicopter managers working at the Biscuit Fire. 
In addition, National Interagency Fire Center officials requested and received numerous supervisors from Australia and New Zealand, including eight helicopter managers. The inability to fill government strike team leader positions also delayed fire suppression actions, according to a Biscuit Fire operations manager. In one effort to mitigate this shortage, three qualified staff were transferred from a hot shot crew to work as strike team leaders supervising contracted crews, according to the division supervisor. The supervisor said, however, that this move lowered the firefighting effectiveness of the hot shot crew. In another case, a shortage of division supervisors meant that adequate supervision could not be provided in two of the four fire zones for about one week, according to an operations section chief. Without needed supervision, crews could not be used to carry out planned actions, and fire suppression progress was delayed. Our findings on the reduced availability of personnel to fill critical staff positions were confirmed by an internal Forest Service review of the Biscuit Fire as well as Forest Service reviews of other wildland firefighting efforts. The Forest Service review of the Biscuit Fire concluded that opportunities to halt the spread of the fire had to be abandoned because of limited resources, and as a result, the fire grew larger and threatened more communities on both the western and eastern perimeters. The Forest Service's January 2000 report, An Agency Strategy for Fire Management, highlighted the shortage of federal staff for both fire suppression and fire support positions. Also, during July 2002, the Northwest Multi-Agency Coordination Group in Portland, Oregon, reviewed ongoing fires in the Pacific Northwest, including the Biscuit Fire, and the effects of the reduced availability of personnel to fill critical supervisory positions for fire suppression. 
The group noted that some crews and equipment had been suspended from fire suppression efforts because of a lack of appropriate supervision. Contracted resources played a key role in the Biscuit Fire—at its peak, over 1,600 contracted firefighters and over 400 pieces of contracted equipment and helicopters were assigned to the fire. Interagency fire managers acknowledged, however, that there was little, if any, monitoring of private contractors to ensure that contracted crews assigned to the Biscuit Fire met established training and experience requirements. Instead, fire managers generally relied on contractors to certify that their crews met these requirements, as stated in their contracts. Despite contractors' assurances that their crews met all requirements, Biscuit Fire officials told us that some insufficiently trained or inexperienced contracted crews negatively affected firefighting efforts because these crews were not always able to carry out planned operations. In contrast, contracted engines and other equipment had fewer problems. Fire managers participating on the Biscuit Fire said that poorly trained and inexperienced contracted crews presented significant operational concerns. They cited examples of contracted crews that were unable to carry out planned firefighting operations. Managers said that they postponed or changed some tactical firefighting operations because it was not safe to use these crews in more aggressive fire operations. Crews that could not be used as planned were assigned minimal firefighting responsibilities, such as "mop up" activities at a considerable distance from intensive fire activity. Although the limitations on how the crews could safely be used likely affected firefighting progress, the actual effect cannot be measured. Communication difficulties with and among contracted crews also adversely affected their use on the Biscuit Fire. 
There were instances where crew and squad bosses for contracted crews were unable to communicate in English with government supervisors, as required in the interagency crew agreement. The lack of fluency in English caused safety concerns and resulted in crews being assigned to far less technical tasks than planned. Fire managers told us that, even when assigned minimal fire tasks, some private crews required above normal supervision, which in turn resulted in supervisors having less time available to plan and implement higher priority fire suppression tasks. Under a cooperative arrangement between the federal government and the states of Oregon and Washington, ODF has oversight responsibility for private crew contractors in the Northwest. Typically, the monitoring of crew qualifications should take place before the start of the fire season. An ODF official, however, said that insufficient funding and personnel have resulted in few, if any, evaluations of crews’ qualifications prior to the start of the fire season. Alternatively, interagency support personnel, such as contracting officers or their technical representatives, can perform contract crew qualification assessments. We found that during the Biscuit Fire, however, these key support positions were identified as a critical, but unfilled, resource need. According to federal firefighting managers, about 90 individuals have been trained as technical representatives to work with firefighting management teams, but at any given time during recent fire seasons, only about 10 percent of these trained technical representatives were available to serve on incident management teams. The ODF official having oversight responsibility for contracted crews in the Northwest concluded that because of these shortages and the significant numbers of contracted crews, it is likely that there was minimal monitoring of contract crews at the fire. 
Finally, we noted that these shortcomings in the monitoring of contracted crews were not limited to the Biscuit Fire. The importance of monitoring crew training and experience was also cited in an interagency fire and aviation safety report issued in 2002. The review stated that deficiencies in the physical fitness and job skills of crews raised concerns about the validity of qualifications of some contracted resources. There are some differences in certification standards for personnel between state and federal wildland firefighting agencies, but these differences did not appear to have affected efforts to respond to the Biscuit Fire. In 1993, the National Wildfire Coordinating Group (NWCG)—an interagency group comprising federal and state representatives— established minimum training and experience standards for personnel assigned to fight interagency wildland fires outside their home region. These standards, which were updated in 2000, have been adopted by five federal land management agencies, including the Forest Service. Five of the seven states that we contacted in and around the Northwest Region have also adopted these standards as the minimum requirements for all their firefighting personnel. The Oregon Department of Forestry (ODF) meets these standards for personnel on interagency wildland fires outside the Northwest Region. For fires under state management and for interagency fires within the region, ODF has maintained its own certification standards. These standards are nearly identical to the 1993 version of NWCG standards. In 2000, NWCG added some course and experience requirements. ODF officials are currently working to incorporate many, but not all, of these changes into state standards. For example, ODF requirements for many positions rated type 2 or below will meet NWCG standards. For type 1 positions, including incident commander, some of the most advanced courses will not be required. 
An ODF official explained that he believed these additional courses were not necessary for state-managed fires. The California Department of Forestry and Fire Protection (CDF) has maintained its own firefighting certification system for its personnel. CDF shares many of the same standards as those established by NWCG, including the combination of coursework and experience requirements for firefighting certification, but requires state-specific courses for some positions. Under an agreement with federal land management agencies, California state personnel assigned to supervisory roles on interagency fires within the state are required to be certified to a level equivalent to NWCG standards. For national mobilization, NWCG requires that participating agencies certify that their personnel meet the established interagency qualification standards. In the case of California, CDF officials stated that state certification requirements meet or exceed the standards established by NWCG. In addition, officials at the National Interagency Fire Center (NIFC) said they have no concerns about the adequacy of the standards used by CDF. The different agency firefighting certification standards had no apparent impact on the response to the Biscuit Fire. As with other interagency fires, personnel dispatched to interagency fires outside their home region were required to meet these standards. Within the Northwest Region, ODF maintains its own standards for state fires and interagency fires, although only NWCG-qualified personnel were dispatched to the Biscuit Fire, according to an ODF official. And while CDF uses an independent set of requirements, NIFC officials had no concerns about the adequacy of its certification system. 
In addition, for the portion of the Biscuit Fire located in California, CDF supervisory personnel assigned to the fire were required by agreement to be certified to a level equivalent to NWCG standards. Finally, our review of relevant documents and discussions with knowledgeable federal, state, and local officials did not identify any evidence that the differences between these systems created difficulties during the response to the Biscuit Fire. The cornerstone of wildland fire policy is interagency cooperation and coordination among federal, state, tribal, and local firefighting agencies. Central to that cooperation and coordination is a system that includes managers and personnel from many different agencies and that crosses jurisdictional boundaries. Such a system is facilitated by good communication among personnel at all levels to help ensure clarity of firefighting goals, strategies, and tactics. Communication is also important for those working in various dispatch centers to obtain firefighting resources. These personnel must communicate their resource needs for new or ongoing fires in their area to other dispatch centers in a timely—sometimes immediate—fashion. In the case of the Biscuit Fire, Grants Pass dispatch personnel did communicate resource needs to their regional dispatch center in Portland, but no resources were immediately available due to other higher priority fires in the region. However, personnel did not communicate the need for initial attack resources to a neighboring dispatch center in Fortuna, California, although this was an option available to Grants Pass personnel. Whether this would have resulted in any resources being provided for the initial attack of the Biscuit Fire is unclear because personnel in the Fortuna dispatch center disagree on whether any resources could have been spared, given that fires were also burning in Northern California at the time. 
Since no request was made, the priority of the Biscuit Fire relative to other ongoing fires within the Fortuna dispatch center's direct protection area was not discussed on the first day of the Biscuit Fire, and the outcome of such a request, had it been made, remains unclear. We provided a draft of this report to the Secretaries of Agriculture and of the Interior for review and comment. The Forest Service commented that the report appears to be accurate and the agency generally agrees with its contents. The Forest Service's comments are presented in appendix II. The Department of the Interior did not provide comments. As arranged with your offices, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to other interested congressional committees. We will also send copies to the Secretary of Agriculture; the Secretary of the Interior; the Chief of the Forest Service; the Directors of the Bureau of Land Management, the National Park Service, the Fish and Wildlife Service, and the Bureau of Indian Affairs; and other interested parties. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix III.

To determine whether policies and procedures were in place for acquiring needed firefighting resources during the initial days of the Biscuit Fire, and the extent to which these policies and procedures were followed when the fire was first identified, we reviewed national policies and procedures that included the National Interagency Standards for Fire and Fire Aviation Operations and the National Interagency Mobilization Guide. 
We reviewed the interagency mobilization guides in 9 of the 11 regions. We also reviewed the local mobilization guide covering the Grants Pass dispatch center and the mobilization guides for three other local dispatch centers. We reviewed the mutual aid agreements governing resource sharing for the Siskiyou National Forest. We spoke with officials at the National Interagency Fire Center (NIFC) in Boise, Idaho; Forest Service headquarters in Washington, D.C.; Forest Service Regions 5 and 6 regional offices; Bureau of Land Management, Oregon State Office and the Medford District Office; the Siskiyou and Six Rivers National Forests; the Oregon Department of Forestry (ODF); and the California Department of Forestry and Fire Protection (CDF). We visited three dispatch centers in Oregon (the Grants Pass Interagency Fire Center, the Medford Interagency Fire Center, and the Northwest Interagency Coordination Center in Portland) and one in California (the Fortuna Interagency Emergency Command Center) to discuss dispatch center operations. We also reviewed Biscuit Fire records stored at Siskiyou National Forest headquarters in Medford, Oregon, and records kept at Fortuna, including resource orders and transcripts of key radio transmissions during the initial days of the Biscuit Fire. The Forest Service provided the data used to generate the fire progression maps. We took steps to assess the reliability of the mapping data and determined that it was sufficiently accurate for our purposes. To determine what resource management issues, if any, affected the ability of firefighting personnel to effectively fight the Biscuit Fire, we reviewed a variety of information, including resource orders and daily incident reports showing firefighting resources requested and obtained, incident action plans showing firefighting strategies and tactics, close-out reports discussing firefighting progress and problems, and Forest Service reviews of the Biscuit Fire. 
We interviewed a number of federal and state personnel knowledgeable about the Biscuit Fire, including officials from the Siskiyou and Six Rivers National Forests, ODF, and CDF, and the management teams and other key support staff that were assigned to the Biscuit Fire. We discussed resource management issues, their effect on the fighting of the Biscuit Fire, and the reasons for these issues or problems. We also reviewed assessments of other wildland fires to determine if the issues identified were limited to the Biscuit Fire or were more widespread. To determine what differences, if any, existed in key personnel certification standards at federal and state agencies involved in fighting wildland fires— particularly in Oregon—we reviewed the interagency qualification standards established by NWCG. We also contacted officials from Oregon and California, where the Biscuit Fire burned, and five other states—Idaho, Montana, Nevada, Utah, and Washington—to discuss the certification standards they use, and whether they differ from those established by NWCG. In addition, we reviewed state firefighting standards for ODF and CDF and compared them with those established by NWCG. To determine what effect any differences may have had on the response to the Biscuit Fire, we spoke with federal officials with NIFC, the Forest Service, the Bureau of Land Management, and the National Park Service; officials with the National Association of State Foresters; and state and local officials in Oregon and California, including officials from ODF, CDF, and the California Office of Emergency Services. We conducted our work from April 2003 through February 2004 in accordance with generally accepted government auditing standards. Andrea W. Brown, John Delicath, Cliff Fowler (retired), Janet Frisch, Molly Laster, Paul E. Staley, Stanley G. Stenersen, Amy Webbink, and Arvin Wu made key contributions to this report. 
The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
In 2002, the United States experienced one of the worst wildland fire seasons in the past 50 years—almost 7 million acres burned. These fires included the largest and costliest fire in Oregon in the past century—the Biscuit Fire. Following a lightning storm, five fires were discovered in the Siskiyou National Forest over a 3-day period beginning July 13. These fires eventually burned together to form the Biscuit Fire, which burned nearly 500,000 acres in southern Oregon and Northern California and cost over $150 million to extinguish. GAO evaluated (1) whether policies and procedures were in place for acquiring needed firefighting resources during the initial days of the Biscuit Fire, and the extent to which these policies and procedures were followed when the fire was first identified; (2) what resource management issues, if any, affected the ability of personnel to fight the fire; and (3) what differences, if any, existed in key certification standards for personnel among federal and state agencies and whether these differences affected efforts to respond to the fire. In commenting on a draft of this report, the Forest Service stated that the report appears to be accurate and the agency generally agrees with its contents. The Department of the Interior did not provide comments. National policies and procedures were in place and provided the framework to guide personnel in the local interagency dispatch center in Grants Pass, Oregon, who were responsible for acquiring resources to fight the Biscuit Fire. These policies and procedures provide for a multilevel dispatching system where, if sufficient firefighting personnel and equipment are not available locally, resource requests can be elevated to other dispatch centers at the regional and, if necessary, national level. 
To facilitate the swift suppression of new fires, local dispatch center personnel can contact neighboring centers directly, including those in adjacent regions, before elevating resource requests. When the first two fires were found on July 13, the Grants Pass dispatch center did not have sufficient firefighting resources available locally. Grants Pass personnel requested resources from the responsible regional center in Portland, as well as from a dispatch center in central Oregon, but no resources were immediately available in the region due to other higher priority fires that were threatening lives and property. Grants Pass personnel did not request resources from a neighboring interagency dispatch center in Fortuna, California, located in an adjoining dispatch region, because they believed the center had no available resources due to fire activity there. State officials working at the Fortuna dispatch center later said that a Fortuna-based helicopter fighting fires in Northern California near the first of the five Biscuit fires could have been made available to suppress this fire. However, Forest Service officials working with Fortuna personnel disagreed, saying that the helicopter had been needed to fight fires in California. Because no request was made, there was no discussion on that first day about whether the Biscuit Fire would have been the best use of the helicopter, and it is unclear, in any case, what the outcome of such a request would have been. Following the initial days of the Biscuit Fire, delays in obtaining needed personnel hampered efforts to fight the rapidly growing fire. Specifically, officials faced problems obtaining (1) highly experienced management teams to direct suppression strategies and crews to carry the strategies out, (2) supervisors to manage crews and equipment, and (3) support staff to monitor the training and experience of contracted crews. 
An unusually severe fire season, with many other higher priority fires, affected the availability of personnel needed to fight the Biscuit Fire. Finally, while some differences exist in certification standards for personnel between federal and state agencies responsible for fighting wildland fires, these differences did not appear to affect efforts to respond to the Biscuit Fire.
DI and SSI provide cash benefits to people with long-term disabilities. While the definition of disability and the process for determining disability are the same for both programs, the programs were initially designed to serve different populations. The DI program, enacted in 1954, provides monthly cash benefits to disabled workers—and their dependents or survivors—whose employment history qualifies them for disability insurance. These benefits are financed through payroll taxes paid by workers and their employers and by the self-employed. In fiscal year 2001, more than 6 million individuals received more than $59 billion in DI benefits. SSI, on the other hand, was enacted in 1972 as an income assistance program for aged, blind, or disabled individuals whose income and resources fall below a certain threshold. SSI payments are financed from general tax revenues, and SSI beneficiaries are usually poorer than DI beneficiaries. In 2001, more than 6 million individuals received almost $28 billion in SSI benefits. The process to obtain SSA disability benefits is complex and fragmented; multiple organizations are involved in determining whether a claimant is eligible for benefits. The current process consists of an initial decision and up to three levels of administrative appeals if the claimant is dissatisfied with SSA's decision. Each level of appeal involves multistep procedures for evidence collection, review, and decision-making. Figure 1 shows the process, parts of which are required by law. The disability claims process begins when a claimant applies for disability benefits, generally at one of SSA's 1,300 field offices across the country, where a claims representative determines whether the claimant meets financial and other program eligibility criteria and obtains information about the claimant's impairments, including sources of medical and vocational information. 
If the claimant meets the financial and other program eligibility criteria, the claims representative forwards the claim to the federally funded but state-administered DDS in the state where the claimant lives. DDS staff obtain evidence about the claimant’s impairment, and a team consisting of a specially trained disability examiner and an agency medical consultant reviews the medical and vocational evidence and determines whether the claimant is disabled. The claimant is notified of the medical decision, and the claim is returned to the field office for payment processing or file retention. This completes the initial claims process. Claimants who are initially denied benefits can ask to have the DDS reconsider its initial denial. If the decision at this reconsideration level remains unfavorable, the claimant can request a hearing before a federal ALJ at an SSA hearings office, and, if still dissatisfied, the claimant can request a review by SSA’s Appeals Council. Upon exhausting these administrative remedies, the individual may file a complaint in federal district court. Given its complexity, the disability claims process can be confusing, frustrating, and lengthy for claimants. Many individuals who appeal SSA’s initial decision will wait a year or longer for a final decision on their benefit claims. The claims process can also result in inconsistent assessments of whether claimants are disabled; specifically, the DDS may deny a claim that is later allowed upon appeal. Over the years, as many as three-fourths of all claimants denied at the DDS reconsideration level filed an appeal and, of these, about two-thirds or more received favorable decisions at the hearings level. Program rules—such as claimants’ ability to submit additional evidence and to allege new impairments upon appeal—and the worsening of some claimants’ condition over time can explain some but not all of the overturned cases. In some cases, the inconsistency may be due to inaccurate decisions. 
SSA believes that DDSs generally make more errors on denials than on awards, while ALJs generally make more errors on awards than on denials. To address these concerns, SSA in 1994 set forth an ambitious plan to redesign the disability claims process. The overall purpose of the redesign was to ensure that decisions are made quickly, ensure that the disability claims process is efficient, award legitimate claims as early in the process as possible, make the process user friendly for claimants and those who assist them, and provide employees with a satisfying work environment. The 1994 plan represented SSA's first effort to significantly revise its procedures for deciding disability claims since the DI program began in the 1950s. In April 1994, we testified that the redesign proposal was SSA's first valid attempt to address major fundamental changes needed to realistically cope with the disability claims workload. We cautioned SSA, however, that many difficult implementation problems would need to be addressed. These included new staffing and training demands, development and installation of technology enhancements, and confrontation with entrenched cultural barriers to change. Since 1994, SSA has made several adjustments to its redesign plan, some of them in response to concerns we expressed over the years about SSA's lack of progress. In 1996, we reported that SSA's original 6-year plan was overly ambitious. At that time, SSA had made little progress toward meeting its goals, lacked demonstrable results, and faced difficulties obtaining and keeping the support of some stakeholders, including federal employees and state DDS managers and employees. SSA then issued a scaled-back redesign plan in 1997 focusing on testing and implementing eight key initiatives—each representing a major change to the system—within 9 years instead of the original 6 years. 
In 1999, we again reported that SSA had made little progress; despite being scaled back, the effort proved too large to keep on track. We recommended that the agency further focus its efforts on the most promising initiatives, including those that would improve the quality and consistency of its disability decisions and test promising concepts at only a few sites before moving to large-scale testing or implementation. SSA again revised its plans in 1999 and 2001. These plans reflect the agency's commitment to (1) further test ways to streamline the claims process, (2) take additional steps to enhance the quality and consistency of decisions, and (3) introduce new initiatives that focus on the appeals process. This report focuses on five initiatives found in SSA's latest revisions. During this same period, the Social Security Advisory Board also raised concerns about some of SSA's proposed process changes and about the amount of time and resources the agency had invested in changes that resulted in minimal gains. More importantly, the Board raised concerns about certain systemic problems that can undermine the overall effectiveness of SSA's claims process, which by extension can also undermine the effectiveness of SSA's redesign efforts. The Board found that SSA's fragmented disability administrative structure, created nearly 50 years ago, is ill-equipped to handle today's workload. The Board focused on a number of areas, including the lack of clarity in SSA's relationship with the states and the resulting variation among states in areas such as salary, hiring requirements, and the quality of decisions, and an outdated hearing process fraught with tension and poor communication between SSA and the ALJs. 
The Board recommended, among other things, that SSA (1) work to strengthen the current federal-state relationship in the near term and revisit its overall relationship with the states, (2) assert its authority to require states to follow specific federal guidelines, (3) take steps to improve SSA's relationship with its ALJs while also clarifying its authority to improve the timeliness and consistency of ALJ disability decisions, (4) consider whether the agency should be represented at disability hearings (it currently is not), (5) consider closing the case record after the ALJ hearing, and (6) revisit the need for changes in the current provisions for judicial review by federal courts. Most of these changes are linked to significant structural reforms or the need to clarify management's authority, and some may require legislative changes. The Board's recommendations differ from the largely procedural or process changes that often typify SSA's redesign efforts. SSA tested the Disability Claim Manager position in 36 locations in 15 states from November 1999 through November 2000. In June 2001, SSA ended the initiative, concluding that the test results were not compelling enough to support implementing the disability claim manager position. While the test resulted in several benefits, such as improved customer and employee satisfaction and quicker claims processing, the increased costs of the initiative and other concerns convinced SSA not to proceed. The Disability Claim Manager initiative was designed to make the claims process more user friendly and efficient by eliminating steps resulting from numerous employees handling discrete parts of the claim. It did so by having one person—the disability claim manager—serve as the primary point of contact for claimants until initial decisions were made on their claims. 
The managers were responsible for explaining the disability process and program requirements to the claimants and for processing both the medical and nonmedical aspects of their claims, responsibilities normally divided between SSA's field office claims representatives and state DDS disability examiners. Both SSA and DDS employees served as disability claim managers during the test, and each manager performed both claims representative and disability examiner functions. In October 2001, SSA issued its final report evaluating the initiative. SSA found the results of the initiative to be mixed. On the positive side, SSA concluded that those SSA and DDS employees who participated in the test could master the expanded responsibilities required of the disability claim manager position, and the initiative appears to have met its goal of making the claims process more user friendly and efficient without compromising the accuracy of decisions. Specifically, SSA found that the initiative resulted in the following benefits:

Greater customer satisfaction. Claimants served by disability claim managers reported greater satisfaction than claimants served under the traditional process. While customer satisfaction was comparable among awarded claimants—94 percent served by disability claim managers reported they were satisfied with SSA's service, compared with 91 percent of those served under the traditional process—the difference in customer satisfaction was greater for denied claimants. More than two-thirds (68 percent) of denied claimants served by disability claim managers reported overall satisfaction with SSA's service, compared with just over half (55 percent) of denied claimants served under the traditional process.

Faster claims processing. Disability claim managers processed DI claims an average of 10 days faster and SSI claims an average of 6 days faster than similar claims processed under the traditional process.

Comparable accuracy. The test showed that the accuracy of decisions made by disability claim managers was comparable to the accuracy of decisions made by others on similar claims.

Improved employee satisfaction. Serving as a disability claim manager improved the job satisfaction of more than 80 percent of employees serving in that role. Employees cited several factors for their job satisfaction, namely, their increased control over the claim, their greater interaction with the claimant, their enhanced job knowledge, and their ability to provide better customer service. Federal employees also cited their increased pay as a factor in their increased job satisfaction.

The Disability Claim Manager initiative provided additional benefits as well, such as improving understanding between SSA and DDS employees, according to SSA's evaluation of the initiative. Training each organization's staff in the other's functions not only helped to identify training needs, but it also improved communication between the two organizations and increased their awareness of, and appreciation for, the other. SSA also assessed the initiative's impact on the percentage of claimants awarded benefits, productivity, and costs. While the test results on award rates and productivity were inconclusive, the test results on costs showed that the Disability Claim Manager initiative substantially raised costs. Specifically, SSA found the initiative had the following results:

Higher claims processing costs. SSA estimated that claims processing costs were 7 percent to 21 percent higher under the Disability Claim Manager initiative than under the traditional process. The costs for salaries and for obtaining medical evidence, including consultative examinations performed by DDS-paid physicians or psychologists, were higher under the Disability Claim Manager initiative than under the traditional process. 
Because of these higher costs, SSA concluded that claims processing costs would continue to be higher under the initiative even if productivity—the number of claims processed per staff year—improved.

Substantial start-up and maintenance costs. In addition to the higher claims processing costs, SSA experienced substantial start-up costs to train SSA and DDS employees to function as disability claim managers and to develop an infrastructure to support the new claims process. SSA also determined that it would cost more to maintain the staff skills and the infrastructure required by the Disability Claim Manager initiative. SSA did not quantify the initiative's start-up and extra maintenance costs.

SSA's evaluation concluded that the benefits of implementing the Disability Claim Manager initiative were not compelling enough to warrant its implementation. The primary consideration in reaching this conclusion was that the initiative would require major resource investments in higher operational costs, training, and infrastructure. But other factors also played a part. For example, SSA officials were concerned about the initiative's effect on the long-standing relationship between SSA and the DDSs. Implementing the Disability Claim Manager initiative beyond the test would require legislation and regulatory changes to permit federal employees to determine medical eligibility and to permit state employees to determine nonmedical eligibility. The significant pay disparities between the federal and state employees performing the same functions as disability claim managers also would need to be addressed. Because SSA employees who served as disability claim managers received temporary promotions, they were generally paid at a higher rate than their DDS counterparts, only some of whom received promotions during the test. 
SSA officials were also concerned about the agency's lack of progress in developing an automated disability claims process, which was expected to support the Disability Claim Manager initiative. According to SSA, such a system is still years away. The Prototype was implemented in October 1999 in DDSs in 10 states and will continue to operate in these states in its current form until no later than June 2002. The participating DDSs process 25 percent of all initial disability claims. Preliminary results, which are based on DDS decisions, indicate that claimants receive benefits earlier from DDSs operating under the Prototype; DDSs operating under the Prototype award as many claimants at the initial level as other DDSs operating under the traditional process award at the initial and reconsideration levels combined, without compromising the overall accuracy of their decisions. In addition, because the Prototype eliminates the reconsideration step of the appeals process, appeals of claims denied under the Prototype reach hearing offices more quickly than claims denied under the traditional process. However, according to SSA, many more denied claimants would appeal to ALJs under the Prototype than under the traditional process. More appeals would result in additional claimants waiting significantly longer for final agency decisions on their claims and would increase workload pressures on SSA hearings offices, which are already experiencing considerable case backlogs. It would also result in higher administrative costs under the Prototype than under the traditional process. More appeals would also result in more awards from ALJs and higher overall benefit costs under the Prototype than under the traditional process. Because of this, SSA acknowledged in December 2001 that it would not extend the Prototype to additional states in its current form. 
During the next several months, SSA plans to re-examine the Prototype to determine what revisions are necessary to decrease overall processing time and reduce its impact on costs before proceeding further. The Prototype's objective is to improve the disability claims process by ensuring that legitimate claims are awarded as early in the decision process as possible, thereby improving the fairness, consistency, and timeliness of SSA's disability claims process. Toward that end, the Prototype initiative changes the way DDSs process disability claims, with the expectation that the changes would reduce the number of awards made at the ALJ level. The Prototype makes the following changes in the way DDSs determine disability:

Grants greater decision-making authority to disability examiners. The disability examiner has the authority to decide when and how to use medical consultants' expertise in some cases. The disability examiner is allowed to independently decide claimants' eligibility for benefits without the medical consultant certifying the decision unless the law mandates otherwise. This change contrasts with the traditional process, in which the medical consultant signs off on all decisions. The new process is intended to maximize agency resources by focusing the attention of medical consultants on those claims for which their professional training and expertise is most needed.

Requires enhanced documentation and explanation of decisions in the claims file. The disability examiner is required to develop evidence on claims more thoroughly and to better explain how the disability decision was made. This improvement is intended to enhance the quality of DDS decisions. It is also intended to enhance the consistency between DDS and ALJ decisions by making the DDS explanation more useful to ALJs when claimants appeal DDS decisions to deny benefits.

Adds a claimant conference. If the existing evidence in the claimant's file would not support a fully favorable decision, the disability decision-maker is required to offer the claimant an opportunity to submit additional evidence and to have a personal interview with the decision-maker before a decision is made. Claimant conferences are not offered in cases where the claimant has moved and cannot be located, refuses to cooperate, or in other similar situations.

Eliminates DDS reconsideration. The reconsideration step in the administrative appeal process is eliminated. This streamlines the disability claims process by allowing dissatisfied claimants the opportunity to appeal directly to an ALJ.

The sample of Prototype claims was selected from applications filed through March 2000; the sample of comparison group claims was selected from applications filed from December 1999 through February 2000. In July 2001, SSA issued an interim report describing preliminary results as of May 18, 2001. As of that date, initial DDS decisions had been completed on virtually all Prototype and comparison group claims; reconsideration decisions had been completed on 95 percent of comparison group claims for which reconsideration had been requested so far; and ALJ hearing decisions had been completed for less than half of the Prototype and comparison group claims appealed so far. More requests for reconsideration were still expected, as were more requests for hearings, especially for the comparison group. SSA cautions that the claims that have completed processing do not have the same characteristics as those that take longer to be processed; therefore, final results cannot be fairly projected. Also, because these results are preliminary, SSA has not yet completed its analysis to determine whether the differences between the Prototype DDSs and comparison group DDSs are statistically significant. Thus, it is too early to reach final conclusions about the impact of the Prototype. However, as shown in the following section, preliminary results are somewhat promising. 
Claims awarded earlier in the process. Under the Prototype, DDSs are awarding more claims earlier than under the traditional process. DDSs operating under the Prototype awarded benefits to 40.4 percent of initial claimants, while DDSs operating under the traditional process awarded benefits to 35.8 percent of initial claimants and to 39.8 percent of claimants at the initial and reconsideration levels combined. Thus, the Prototype awarded benefits to slightly more claimants in one step than the traditional process awarded in two. SSA estimates that under the Prototype, claimants received awards about 135 days sooner than claimants awarded benefits at reconsideration under the traditional process. Comparable accuracy. The accuracy of decisions made on initial claims by DDSs operating under the Prototype was comparable to the accuracy of decisions made by the comparison DDSs operating under the traditional process, despite the fact that only DDSs operating under the Prototype had to learn new procedures. While the accuracy rate on awarded claims was slightly lower in DDSs operating under the Prototype than in the comparison group of DDSs operating under the traditional process (96.6 percent vs. 97.1 percent), the accuracy rate on denied claims—on which DDSs have historically made more errors than on awards—was slightly higher under the Prototype (92.4 percent vs. 91.9 percent). The overall accuracy rate (awards and denials combined) was also slightly higher under the Prototype (94.1 percent vs. 93.8 percent). Initial claim decisions take longer; some final decisions may be quicker. As shown in table 1, overall it takes an average of 14 days longer for DDSs to process an initial claim decision under the Prototype (100 days vs. 86 days) than under the traditional process. Most of this increase appears due to the addition of the claimant conference under the Prototype, which is not part of the traditional process. 
This is evidenced by the fact that processing time for initial claims was about the same for awards under the traditional process and under the Prototype when no claimant conference was held (79 days vs. 80 days). Adding the claimant conference to the initial DDS decision process affords claimants who would otherwise be denied benefits an opportunity to present additional evidence and to have a personal interview with the decision-maker before a decision is made on their initial claims. The information presented during the conference can convince the DDS to award benefits or to reaffirm the denial. Moreover, the conference can help to improve the quality and quantity of evidence contained in the file, which can be useful if the case is appealed to an ALJ. Table 1 compares the number of days it takes DDSs to process initial claims under the Prototype vs. the traditional process. While initial claim decisions take longer under the Prototype, final decisions on appealed claims may take less time. Specifically, when the claimant conference results in a decision to deny benefits, eliminating reconsideration should enable claimants who appeal their denials under the Prototype to receive quicker decisions on their appeals than those claimants who appeal their denials under the traditional process. Even though it takes about 20 days longer to process initial decisions on denied claims under the Prototype (110 days vs. 90 days), eliminating the DDS reconsideration step of the appeals process results in appeals reaching ALJs about 70 days quicker than they would under the traditional process, according to SSA. When the claimant conference results in a decision to award benefits, claimants receive benefits sooner than they would have under the traditional process. 
As table 1 shows, when a claimant conference is held, DDSs operating under the Prototype take 55 days longer than comparison DDSs operating under the traditional process to make initial award decisions (134 days minus 79 days). However, under the traditional process—with no claimant conference—these claimants would have been denied benefits; the earliest they could receive an award decision under the traditional process would be after reconsideration. Because the reconsideration decision would take about 135 days, according to SSA, the claimant receives an award decision and his or her benefits about 80 days quicker under the Prototype (the 135 days saved by forgoing reconsideration minus the 55 days added for processing claims when a claimant conference is held). Under the Prototype, about 3 out of 100 claimant conferences result in awards, according to SSA. Despite these promising results, the Prototype’s impact on customer service and costs has become a major concern to SSA. Since the interim report was issued, more claims have been processed through the ALJ level, and these results have convinced SSA that both administrative and benefit costs would be substantially higher under the Prototype if the initiative were expanded to other states in its current form. Although the rate of awards at the ALJ level is lower under the Prototype than under the traditional process, SSA estimates that about 100,000 more denied claimants would appeal to the ALJ level under the Prototype. Because of this, additional claimants would wait significantly longer for final agency decisions on their claims. This would further increase workload pressures on SSA hearings offices, which are already experiencing considerable case backlogs. The additional appeals are also expected to result in more awards from ALJs, and more awards overall, under the Prototype than under the traditional process. 
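The award-timing comparison above reduces to simple arithmetic. As an illustration only, a short sketch using the day counts exactly as SSA reported them (134, 79, and 135 days) confirms the 55-day and 80-day figures; the variable names are ours, not SSA's:

```python
# Figures as reported by SSA (in days); used here only to illustrate the arithmetic.
initial_award_with_conference = 134  # Prototype: initial award decision when a claimant conference is held
initial_award_no_conference = 79     # traditional process: initial award decision
reconsideration_step = 135           # additional days a reconsideration decision would take, per SSA

# Days added at the initial level by holding a claimant conference
added_by_conference = initial_award_with_conference - initial_award_no_conference  # 55

# Net days saved for a claimant who, under the traditional process,
# would be awarded benefits only after reconsideration
net_savings = reconsideration_step - added_by_conference  # 80

print(f"conference adds {added_by_conference} days; net savings about {net_savings} days")
```

The sketch simply makes explicit that the 80-day savings is the 135-day reconsideration step forgone, less the 55 days the conference adds up front.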
SSA told us in December 2001 that the agency would not expand the Prototype to additional states in its current form. Instead, it published a notice in the Federal Register on December 28, 2001, extending the Prototype in the existing 10 states for no longer than 6 months. During the upcoming months, SSA will determine what revisions it can make to the Prototype to decrease overall processing time and to reduce its impact on costs before proceeding further. The Hearings Process Improvement initiative has been implemented and is currently operating in all 138 hearing offices. The initiative was implemented in hearing offices in phases, without a test, and was operational nationwide by November 2000. The initiative has not reduced the time required to process a claim; rather, processing has slowed considerably. In addition, the backlog of cases waiting to be processed has increased and is rapidly approaching crisis levels. The Hearings Process Improvement initiative was intended to improve customer service by reducing the time it takes to get a decision on an appealed claim. To reach this end, the initiative introduced changes designed to ensure efficient case processing. This was to be accomplished by increasing the level of analysis and screening done on a case before it is scheduled for a hearing with an ALJ. In addition, the initiative reorganized hearing office staff into small groups, called “processing groups,” to ensure better accountability and control in the handling of each claim. Finally, SSA was to launch automated functions that would facilitate the monitoring of cases through the hearings process. These changes were expected to reduce the time it takes to process cases. In addition, the changes were expected to improve employee job satisfaction and foster a cooperative work environment. 
SSA intended to split its 138 hearing offices into three groups to implement the initiative in one group at a time so that the required changes did not occur in all hearing offices simultaneously. Phase one included over one-quarter of all hearing offices; these offices fully implemented the initiative between January and April 2000. Phases two and three, comprising the remaining hearing offices, were scheduled to begin in October 2000 and January 2001, respectively. However, phase three was implemented early, in anticipation of expected workload increases, at the same time as phase two in October 2000. As a result, all hearing offices had implemented the initiative by November 2000. The results of the Hearings Process Improvement initiative have been disappointing for SSA. The initiative has not reduced the time it takes to approve or deny an appealed case. Rather, the initiative has added 18 days to the time required for a decision on an appealed claim. In September 2001, after the initiative was implemented, processing time in hearings offices was 336 days, up from 318 days in September 1999. As a result of this increase, the initiative failed to achieve its fiscal year 2001 processing time goal of 208 days. Processing time in phase one hearing offices is no better than in phase two and three hearing offices. In addition, the number of appealed cases processed has decreased since the initiative’s implementation. In fiscal year 1999, 597,000 cases were decided; in fiscal year 2001, this number had decreased 22.1 percent to 465,228 cases. Fewer cases being decided has led to a growth in the backlog of cases pending a decision. In September 1999, before the initiative was implemented, 311,958 cases were pending a decision. Two years later, in September 2001, the number of appealed cases pending a decision had increased 39.7 percent to 435,904. During this time, the number of cases received by hearing offices had increased by only 5.7 percent. 
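The throughput and backlog percentages cited above follow directly from the case counts in the text. A quick arithmetic check, using only the numbers as reported (the variable names are ours):

```python
# Case counts as reported in the text.
decided_fy1999 = 597_000     # appealed cases decided, fiscal year 1999
decided_fy2001 = 465_228     # appealed cases decided, fiscal year 2001
pending_sep1999 = 311_958    # cases pending a decision, September 1999
pending_sep2001 = 435_904    # cases pending a decision, September 2001

# Percentage decline in cases decided and percentage growth in the pending backlog
decline_pct = (decided_fy1999 - decided_fy2001) / decided_fy1999 * 100  # ~22.1
growth_pct = (pending_sep2001 - pending_sep1999) / pending_sep1999 * 100  # ~39.7

print(f"cases decided fell {decline_pct:.1f}%; backlog grew {growth_pct:.1f}%")
```

The contrast between the 39.7 percent backlog growth and the 5.7 percent rise in cases received is what supports the conclusion that workload growth alone cannot explain the backlog.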
Therefore, increased workload could be, at most, only a small part of the explanation for the growth in backlog. The failure of the Hearings Process Improvement initiative is, in part, the result of attempts to implement large-scale changes too quickly without resolving known problems. Problems—process delays, poorly timed and insufficient staff training, and the absence of important automated functions—that surfaced during phase one of implementation were not resolved before additional phases were implemented. Instead, the pace of implementation was accelerated when phases two and three were implemented simultaneously. The Hearings Process Improvement initiative experienced the first problem, process delays, during phase one of implementation. The organization of case evidence (referred to as “case pulling”) slowed and as a result reduced the number of case files ready for ALJ review. A decrease in the number of case files for ALJs to review consequently reduced the number of cases that could be scheduled for a hearing and decided upon. This case-pulling backlog was due to changes in staff responsibilities and promotions that resulted from the initiative. These changes left a shortage of experienced staff to organize and prepare case files for ALJ review. Managers in hearing offices that implemented the initiative during phase one recommended to phase two and three hearing offices that they prepare extra cases for ALJs prior to implementing the initiative. Despite this feedback, SSA management did not ensure that extra cases were prepared for ALJs. Consequently, ALJs in phases two and three hearing offices also had too few cases prepared for their review when the initiative was implemented. A second problem, poorly timed and insufficient staff training, contributed to process delays. 
While over 2,000 individuals were trained for new responsibilities given to them as a result of the Hearings Process Improvement initiative, much of this training was poorly timed, provided either too early or too late. For example, some employees waited up to 5 months after the initiative was implemented to receive training. In addition, many employees indicated that the training was ineffective and did not prepare them for their new responsibilities, according to SSA’s Office of Workforce Analysis. These training-related problems were not resolved before implementation continued. Finally, problems encountered during the initiative’s implementation were exacerbated by the fact that the automated functions necessary to support initiative changes never materialized. Enhanced automated functions could have facilitated the tracking and monitoring of cases and the transfer of case-related data. However, these functions, which would have facilitated faster processing of cases, were not available as designed, although they had been included in the initiative’s plan. Again, SSA management failed to resolve this problem before continuing to implement the initiative. Hearing offices’ performance may also have been affected by a poor relationship between SSA and the ALJs. In January 2001, the Social Security Advisory Board recommended that SSA improve its relationship with the ALJs by changing it from one of confrontation to one of cooperation. A poor relationship between SSA and the ALJs may have contributed to a lack of stakeholder support for the Hearings Process Improvement initiative. Among ALJs there was mixed support for the initiative. Many ALJs indicated that the ALJ union was organized in 1999 in response to the perception that SSA had excluded them from the formation of the Hearings Process Improvement initiative. However, SSA officials disagreed with this assertion and said that ALJs were included during the formation of the initiative. 
Finally, the difficulties SSA is experiencing under the Hearings Process Improvement initiative may also have been made worse by a freeze on ALJ hiring. Since April 1999, this hiring freeze has prevented SSA from hiring new ALJs to replace those who have retired. However, the hiring freeze was temporarily lifted, allowing SSA to hire 126 ALJs in September 2001. The freeze is still in effect and may affect hearing offices’ future performance. In an attempt to address its problems in implementing the Hearings Process Improvement initiative, SSA management in March 2001 allowed hearing offices to modify elements of the initiative in hopes of facilitating and speeding case processing. For example, instead of cases being handled exclusively within the smaller processing group, SSA allowed them to be handled by individuals outside of the group. This undercut the rationale behind the processing groups, which was to heighten accountability. In addition, with the intention of allowing more cases to reach ALJs, hearing offices were allowed to reduce the level of screening and analysis prescribed by the initiative before cases go to the ALJs. These modifications contradict some of the original objectives of the initiative. In addition, these modifications make it difficult to tell whether the concepts in the initiative as designed can ever be effective, because the initiative has not been implemented as intended. SSA is currently evaluating the Hearings Process Improvement initiative to determine what lessons can be learned and what changes need to be made. Despite these modifications, case processing has slowed and contributed to the backlog. SSA’s current backlog is reminiscent of a crisis-level backlog in the mid-1990’s, which led to the introduction of 19 temporary initiatives designed to reduce OHA’s backlog of appealed cases. These temporary initiatives introduced new procedures and reallocated staff. 
Among the most long-standing of these initiatives was the Senior Attorney Program. Under this program, selected attorneys reviewed claims to identify those cases in which the evidence already in the case file supported a fully favorable decision. Senior Attorneys had the authority to approve these claims without ALJ involvement. The Senior Attorney Program took effect in fiscal year 1995 and was phased out in 2000. During its existence, the program succeeded in reducing the backlog of pending disability cases at the hearing level by issuing some 200,000 hearing-level decisions. However, findings on the accuracy of Senior Attorney decisions are mixed. One study concluded that the quality of decisions made by Senior Attorneys generally increased over the period of the initiative, though falling short of the quality of decisions made by the ALJs. A second study indicates that the quality of decisions made by Senior Attorneys is comparable to those made by the ALJs. SSA management has expressed concern that the Senior Attorney Program is a poor allocation of resources as it diverts attorneys from processing more difficult cases in order to process the easier cases. Finally, SSA faces several challenges that may exacerbate the current backlog problem. First, recent legislative changes may increase workloads, according to SSA officials. Certain Medicare coverage revisions may increase hearing office workloads by introducing a new type of case for ALJs to review. This new type of case requires ALJs to review determinations of whether or not a particular item or service will be covered by Medicare. SSA officials said that this new workload presents many challenges for OHA because ALJs will be reviewing policy instead of individual cases and conducting adversarial hearings. Originally expected to take effect in October 2001, review of this new type of case has been delayed until regulations are issued. 
SSA officials hope to isolate the impact of this new caseload to a separate hearing office unit. Second, future revisions to the Medicare appeals process may also increase hearing offices’ workload by broadening the circumstances under which Medicare cases can be appealed, as well as decreasing the amount of time OHA has to make a decision, according to SSA officials. These revisions to the Medicare appeals process will take effect October 2002. Finally, and perhaps most significantly, SSA is facing a workload increase as the baby boom generation reaches its disability prone years, making it all the more vital to resolve this backlog of appealed cases awaiting a decision. The Appeals Council Process Improvement initiative was implemented in fiscal year 2000. The initiative introduced new strategies for processing cases at the Appeals Council with the intent of improving customer service by reducing processing times and pending caseloads. SSA developed six new strategies by which to accomplish this, only two of which are permanent. The four temporary strategies included efforts to add staff resources from other units. However, the focus of the initiative is currently on the two permanent strategies. These two new strategies require staff members to screen for cases eligible for quick action and encourage staff members to discuss difficult cases with adjudicators before preparing more time-consuming written analyses. The Appeals Council Process Improvement initiative has reduced both the time required to process a case and the backlog of cases awaiting review. However, the results on both fall short of goals. Processing time in the Appeals Council was reduced from 458 days (fiscal year 1999) to 447 days (fiscal year 2001), still falling short of the fiscal year 2001 goal of 285 days. The backlog of cases awaiting review was reduced from 144,500 (fiscal year 1999) to 95,400 (fiscal year 2001) but falls short of the fiscal year 2001 goal of 51,100 cases. 
According to SSA officials, the impact of the initiative was limited by a number of factors. First, the initiative originally included the temporary addition of outside staff to help process cases. This additional support, however, did not fulfill expectations and has been discontinued. In addition, SSA officials indicated that the initiative’s impact was limited by automation problems and policy changes. For example, data storage and retrieval problems, as well as an inefficient and error-prone case tracking system, caused process delays. Also, recent policy changes modified how appealed cases are processed when the claimant has filed a subsequent application. According to SSA officials, these policy changes raise complicated adjudicative issues that require more time to resolve. However, SSA management has taken action to resolve these problems, which SSA officials believe should enhance future progress. SSA’s original plan to redesign the disability claims process issued in 1994 called for SSA to undertake a parallel effort to revamp its existing quality assurance system. Progress to date, however, has been limited to a contractor’s assessment of SSA’s existing quality assurance practices. This assessment was completed in March 2001. SSA subsequently established an executive work group to consider what action to take in response to the contractor report. Accurate disability decisions are an essential element of good public service, and SSA has in place several quality review systems to measure the accuracy of disability decisions made by DDSs and ALJs. At the same time, SSA has long recognized the limitations of its existing quality assurance processes and expressed the desire to improve these processes. In its several revisions to the 1994 redesign plan, SSA continued to voice the need to develop a more comprehensive quality assurance system focused on building in quality as disability decisions are made and improving quality reviews after decisions are made. 
In its latest disability management plan, issued in January 2001, SSA stated that its quality assurance system needed to more effectively promote uniform and consistent disability decisions across all geographic and adjudicative levels. We have also recognized that these systems are limited and need to be improved. Yet, SSA has made very little progress in developing such a system, at least in part due to considerable disagreement among internal and external stakeholders on how to accomplish this difficult objective. As a first step, SSA contracted with an independent consulting firm with expertise in designing and developing effective quality assurance systems to assess SSA’s quality assurance practices used in the disability claims process. In March 2001, the consulting firm issued its final report. The consulting firm’s report concluded that SSA could only achieve its quality objectives for the disability program by adopting a broad, modern view of quality management. While SSA’s existing quality assurance practices focus on identifying errors, the broader concept of quality management encompasses all of the efforts of an organization to produce quality products. The consulting firm outlined seven requirements of a “best-practice” quality management system and concluded that SSA’s existing system is “substantially deficient” in the extent to which it satisfies each of the requirements. A best-practice quality management system for SSA’s disability claims process would:

- develop a clear operational definition of quality with multiple dimensions, such as accuracy, timeliness, efficiency, customer service, and due process;
- develop and support performance measures that are closely tied to the definition of quality;
- support a quality-focused culture, in which employees and management, rather than just the designated quality department, are responsible for quality and managers in every component champion the common quality objective;
- provide information that can be used to improve the disability decision-making process and disability policy;
- provide employees with the resources to produce quality outcomes and service, and value employees for their contribution to success;
- ensure that the disability programs are national programs, including a measurement system that can identify variation and a systematic effort to address variation when it is identified; and
- support statutory and regulatory requirements, going beyond measuring performance as required by statute to providing information that can address congressional concerns, assist in the analysis of proposed legislation, and support the monitoring and evaluation of its implementation.

SSA agreed that it is appropriate and necessary for the agency to go forward toward transforming the existing quality assurance program into a broader quality management model. The agency established an executive work group to decide a future course of action. Since 1994, SSA has introduced a wide range of initiatives in an effort to redesign its disability claims process. In spite of the significant resources SSA has dedicated to improving the disability claims process, the overall results—including the results from the five initiatives that are the subject of this report—have been disappointing. We recognize that implementing sweeping changes such as those envisioned by these initiatives can be difficult to accomplish successfully, given the challenge of overcoming an organization’s natural resistance to change. But the factors that led SSA to attempt the redesign—increasing disability workloads in the face of resource constraints—continue to exist today and will likely worsen when SSA experiences a surge in applications as more baby boomers reach their disability-prone years. Today, SSA management faces crucial decisions on how to proceed on a number of these initiatives. 
We agree that SSA should not implement the Disability Claim Manager at this time, given its high costs and other practical barriers to implementation. We also agree that the Appeals Council Process Improvement initiative should continue, but with increased management focus and commitment to achieve the initiative’s performance goals. Deciding the future course of action on each of the remaining three initiatives presents a challenge to SSA. For example, in the next several months, SSA will face a decision on how to proceed with the Prototype initiative. Preliminary results indicate that this initiative has the potential to achieve its objective of significantly reducing the time it takes for claimants to receive final decisions from SSA on their claims— first, by awarding more legitimate claims at the initial DDS level and second, by moving denied claims to the ALJ more quickly. However, if the Prototype is expanded nationwide in its current form, both benefit and administrative costs will increase. SSA faces the challenge of finding a way to retain the Prototype’s most positive elements while also reducing its impact on costs. We are most concerned about the failure of the Hearings Process Improvement initiative to achieve its goals. Hearing office backlogs are fast approaching the crisis levels of the mid-1990’s. At that time, SSA took a series of actions that, at least in the short term, reduced the backlog. However, SSA has yet to take actions to successfully address the current problem on either a short-term or long-term basis. As a result, the problem will likely worsen. We also are concerned about SSA’s lack of progress in developing a comprehensive quality assurance system. SSA’s progress has been slow, despite the agency’s long-standing recognition that such a system is needed. Without such a system, it is difficult for SSA to ensure the integrity of its disability claims process. 
Finally, given the limited overall success that SSA has experienced in implementing initiatives to improve its disability claims process over the last 7 years, it may be time for the agency to step back and reassess the scope of its basic approach. SSA’s past and current focus on changing the steps and procedures of the process and adjusting the duties of its decision-makers has not been effective to date. A new analysis of the fundamental issues impeding progress may help SSA identify areas for future action. Such an analysis might include careful consideration of the areas previously identified by the Social Security Advisory Board, such as the fragmentation and structural problems in SSA’s overall disability service delivery system. To best ensure that SSA’s disability decision-making process initiatives improve customer service by providing more timely and accurate processing of claims, we recommend that SSA take the following actions: Implement short-term strategies to immediately reduce the backlog of appealed cases in the Office of Hearings and Appeals. These strategies could be based on those that were successfully employed to address similar problems in the mid-1990’s. Develop a long-range strategy for a more permanent solution to the backlog and efficiency problems at the Office of Hearings and Appeals. This strategy should include lessons learned from the Hearings Process Improvement initiative, the use of limited pilot tests before implementing additional changes nationwide, and consideration of some of the fundamental, structural problems as identified by the Social Security Advisory Board. Develop an action plan for implementing a more comprehensive and sophisticated Quality Assurance Program. This plan should include among other things implementation milestones and estimated resource needs. SSA agreed with our report’s observations and recommendations. 
The agency commented that our recommendations support programmatic changes under discussion and provide SSA with the necessary latitude to implement them. With regard to specific recommendations, SSA agreed that it is critical for SSA to reduce the backlogs at OHA and stated that it plans to examine its past experiences with prior initiatives and activities to help develop both short-term and long-term strategies to address the problem. A major focus of its long-term strategy will be to redirect significant resources, within budget limitations, to developing and enhancing technology to support the disability case process at OHA and the Appeals Council. While we agree with SSA’s efforts to improve its technological support of the disability case process, we believe that technology improvements alone will not sufficiently address the problems at OHA. The agency will also need to focus on addressing the more fundamental management issues and structural problems that contributed to the backlog of appeals at OHA and the Appeals Council. SSA also agreed with our recommendation that it should develop an action plan for implementing a more comprehensive and sophisticated Quality Assurance Program. The Commissioner charged the executive workgroup with defining the components of quality performance and developing specific pilots that would test several of the Quality Assurance redesign options being considered. SSA stated that action plans, implementation milestones, and resource needs for these pilots are currently being drafted. In addition to its comments on our recommendations, SSA also made technical comments on our draft report, which we have incorporated when appropriate. One particular technical comment made by SSA that we did not incorporate warrants explanation. We compare the results on the accuracy of decisions made under the Prototype with those made by the comparison group operating under the traditional process. 
SSA suggested that we also compare performance over time--that is, before and after implementation. While adding this comparison would slightly alter the relative difference between the Prototype and comparison groups of DDSs, the end result as described in our report remains the same. Prototype DDSs performed better overall and on denied claims but less well on awards. We are sending copies of this report to the Commissioner of the Social Security Administration and other interested parties. We will also make copies available to others on request. If you or your staff have any questions about this report, please contact me on (202) 512-7215 or Kay Brown at (202) 512-3674. Key contributors to this report were Ellen Habenicht, Angela Miles, and Corinna Nicolaou.
The number of people applying for benefits from the Social Security Administration's (SSA) two disability programs grew dramatically during the 1990s. As a result, the Disability Insurance and Supplemental Security Income programs began to experience huge backlogs of undecided claims. SSA has spent $39 million during the past seven years on various initiatives to help it better manage its caseloads and ensure high-quality service. SSA spent another $71 million to develop an automated disability claims process. This report reviews the status and outcomes of five initiatives intended to improve SSA's disability claims process. GAO found that the results of the initiatives have been disappointing.
In October 1990, the Federal Accounting Standards Advisory Board (FASAB) was established by the Secretary of the Treasury, the Director of the Office of Management and Budget (OMB), and the Comptroller General of the United States to consider and recommend accounting standards to address the financial and budgetary information needs of the Congress, executive agencies, and other users of federal financial information. Using a due-process and consensus-building approach, the nine-member Board, which has since its formation included a member from DOD, recommends accounting standards for the federal government. Once FASAB recommends accounting standards, the Secretary of the Treasury, the Director of OMB, and the Comptroller General decide whether to adopt the recommended standards. If they are adopted, the standards are published as Statements of Federal Financial Accounting Standards (SFFAS) by OMB and by GAO. In addition, the Federal Financial Management Improvement Act of 1996, as well as the Federal Managers’ Financial Integrity Act, requires federal agencies to implement and maintain financial management systems that will permit the preparation of financial statements that substantially comply with applicable federal accounting standards. Issued in December 1995 and effective beginning with fiscal year 1997, SFFAS No. 5, Accounting for Liabilities of the Federal Government, requires the recognition of a liability for any probable and measurable future outflow of resources arising from past transactions. The statement defines probable as that which is likely to occur based on current facts and circumstances. It also states that a future outflow is measurable if it can be reasonably estimated. The statement recognizes that this estimate may not be precise and in such cases, it provides for recording the lowest estimate and disclosing in the financial statements the full range of estimated outflows that are likely to occur. SFFAS No.
6, Accounting for Property, Plant, and Equipment, which is effective beginning in fiscal year 1998, deals with various accounting issues pertaining to PP&E. This statement establishes several new accounting categories of PP&E, collectively called stewardship PP&E. Other PP&E is referred to as general PP&E. One of the new stewardship categories—federal mission PP&E—is defined as tangible items owned by a federal government entity, principally DOD, that have no expected nongovernmental use, are held for use in the event of emergency, war, or natural disaster, and have an unpredictable useful life. Federal mission PP&E, which includes ships, submarines, aircraft, and combat vehicles, is a major part of DOD’s total PP&E. SFFAS No. 6 also provides information on how SFFAS No. 5’s standard on liabilities should be applied to PP&E. Specifically, SFFAS No. 6 discusses how to recognize the liability for the cleanup of hazardous waste in PP&E. While this statement modifies SFFAS No. 5 with respect to the timing of liability recognition for general PP&E, it has no effect on accounting for liabilities related to aircraft and other federal mission PP&E. We undertook this review to assist DOD in its efforts to meet the new federal accounting standard, SFFAS No. 5, and because of our responsibility to audit the federal government’s consolidated financial statements beginning with fiscal year 1997. Our objectives were to determine (1) the status of DOD’s efforts to implement the new federal accounting standard for disclosure of liabilities, such as aircraft disposal, and (2) whether an estimate of the minimum disposal liability for aircraft, including the removal and disposal of hazardous materials, could be made. To accomplish our objectives, we did the following. To assess the status of DOD’s efforts to implement SFFAS No. 5, we reviewed DOD regulations and interviewed officials from the DOD Comptroller’s office.
To gain an understanding of the procedures and the financial and logistical management information systems that can be used to accumulate and report on aircraft disposal costs, we (1) examined the management and financial reporting for these programs used by the services, (2) reviewed applicable DOD and service instructions and regulations, and (3) interviewed DOD, Air Force, Army, and Navy officials. To determine if the liability is reasonably estimable, we identified the financial and logistical management information systems and reporting mechanisms in place that contain information about the costs of aircraft disposal, including demilitarization and hazardous material disposal processes. We visited DOD’s designated aircraft storage, reclamation, and disposal facility at the Aerospace Maintenance and Regeneration Center (AMARC) where data were readily available for addressing removal and disposal costs of older, out of service aircraft systems. To determine if the liability could be estimated for newer aircraft, we selected five aircraft for review. The five aircraft selected were the Air Force’s F-16 and B-1B, the Navy’s F-14 and F-18, and the Army’s AH-64 Apache Helicopter. We chose these five aircraft because they represent the primary fighter or attack and bomber aircraft for each of the services, have the largest number in their class, and represent about 17 percent of the services’ combined active and inactive inventory. Because environmental costs are more variable and are likely to raise more complex estimation issues, we performed a more in-depth analysis of these costs. Using data initially obtained at AMARC, information in the DOD hazardous material disposal manual, and visits to maintenance depots, we prepared a list of hazardous materials associated with each of these aircraft. On a case-by-case basis, we then obtained depot level officials’ concurrence that these items represent the primary hazardous material on each of these aircraft. 
To compute the cost of removing hazardous materials from each aircraft, we reviewed documents that stated a standard or estimated removal time for each of the hazardous material items from the depot responsible for program depot maintenance on the applicable aircraft and AMARC’s hourly labor rate. We did not independently verify the data obtained from the inventory and financial systems or the reported removal times. We interviewed the services’ environmental engineers to determine which hazardous materials require disposal. To determine the costs of disposing of these materials, we reviewed disposal and shipping records at AMARC and the various depots. For those materials that were not scheduled for disposal, we interviewed various depot personnel to determine their methods for reusing and recycling them. We also discussed disposal procedures with various offices of the Defense Reutilization and Marketing Service. During our review, we contacted personnel and/or conducted work at various locations including the Army Aviation and Troop Command Headquarters, St. Louis, Missouri; Aerospace Maintenance and Regeneration Center, Davis-Monthan Air Force Base, Arizona; Air Logistics Centers at Hill Air Force Base, Utah, and Tinker Air Force Base, Oklahoma; Office of the Chief of Naval Operations; Naval Aviation Depots at Jacksonville, Florida, and North Island, California; Corpus Christi Army Depot, Corpus Christi, Texas; offices of the Defense Reutilization and Marketing Service, Battle Creek, Michigan; and applicable headquarters offices in the Washington, D.C., area. We conducted our review from July 1996 through June 1997 in accordance with generally accepted government auditing standards. We provided a draft of this report to the Department of Defense for review and comment. We received oral comments, which have been incorporated as appropriate. Although SFFAS No.
5 is effective beginning with fiscal year 1997, as of the end of the fiscal year on September 30, 1997, DOD had not established a policy to implement this federal accounting standard. On September 30, 1997, the DOD Comptroller’s office posted revisions to the electronic version of DOD’s Financial Management Regulation to include SFFAS Nos. 1 through 4, but SFFAS No. 5 was not included. In addition, the DOD Comptroller, who is responsible for developing and issuing guidance on accounting standards, and the Under Secretary of Defense (Acquisition and Technology), who is responsible for the operational activities associated with aircraft disposal, have not provided implementation guidance to the services to assist them in estimating the disposal costs for aircraft. Service officials stated that they are reluctant to estimate a liability for their aircraft until they receive DOD-wide guidance. Unless prompt action to implement this standard is taken, it is unlikely that DOD’s fiscal year 1997 financial statements will include an estimate of aircraft disposal costs as required. One of the key criteria cited in SFFAS No. 5 for a liability to be reported is that a future cost is probable—that is, the future outflow of resources is likely to occur. While the likelihood of a future outflow may be difficult to determine and an entity may have difficulty deciding whether to record a liability for certain events, DOD continually disposes of aircraft and has an amount for disposal costs in its annual budget. Thus, because it is known at the time of acquisition that costs will be incurred for the disposal of aircraft, the probability criterion for recording a liability is met. The Congress has also recognized that disposal costs will be incurred and has emphasized the importance of accumulating and considering this information. 
For example, the National Defense Authorization Act for Fiscal Year 1995 requires the Secretary of Defense to determine, as early in the acquisition process as feasible, the life-cycle costs for major defense acquisitions, including the materials to be used and methods of disposal. The life-cycle cost estimates are required before proceeding with the major acquisition. All aircraft are eventually disposed of using the same basic processes. Any estimate of the disposal liability must take into account these processes and use them as the basis for determining costs. The disposal process starts with the decision to remove an aircraft from service, referred to as retirement (Army), decommissioning (Air Force), and striking (Navy) of military aircraft. Aircraft disposal consideration begins when the services prepare an updated force structure plan. The plan shows the projected requirements for each type of aircraft and includes new procurement and various attrition factors including crashes, programmed retirements, airframe stress tests, parts reclamation needs, and foreign military sales. Active aircraft not needed to meet the services’ current and forecasted requirements are sent to AMARC, DOD’s designated storage and disposal facility for aircraft for temporary or long-term storage and eventual disposal. Aircraft arriving at AMARC are either placed in a flyable or temporary hold status, prepared for foreign military sales, salvaged for parts, or placed into long-term storage awaiting either eventual disposal or reuse determination. AMARC officials stated that, in general, planes that undergo the storage process are not recalled and are ultimately disposed of through sales or salvage. Once the military services have determined no further need exists for the aircraft, they are released for disposal. 
These aircraft and related parts are subjected to demilitarization processes to prevent further military use before they are transferred to the Defense Reutilization and Marketing Service (DRMS) for sale as scrap. Demilitarization may take place at the air base, at AMARC, or at the local DRMS field office. Part of the demilitarization process involves removing all remaining hazardous materials from the aircraft. Aircraft acquired by the services are, in general, considered mission assets. The Air Force’s Reliability and Maintainability Information System (REMIS), the Navy’s Aircraft Inventory Reporting System (AIRS), the Army’s Continuing Balance System-Expanded (CBS-X), and AMARC’s Aircraft Status Directory identify the number of active and inactive aircraft and are used by the services to keep track of their aircraft inventories. As shown in table 1, DOD reported about 18,000 active aircraft as of September 30, 1996, the most recent data available. The aircraft inventory serves as the basis for estimating the disposal liability, although factors such as foreign military sales would have to be considered in adjusting the number of aircraft. According to a March 1997 AMARC report, about 4 percent of AMARC’s inventory at any given time is scheduled for foreign military sales. Aircraft lost during operations, however, are generally replaced to maintain the inventory at certain levels. As a result, operational losses may not reduce the total liability for aircraft disposal. The second key criterion in SFFAS No. 5 for reporting of a liability is that an amount be reasonably estimable. Information is available to develop cost estimates for each of the major aircraft disposal processes described in the previous section—demilitarization, storage and maintenance, and hazardous materials removal and disposal. These processes account for most of the aircraft disposal costs.
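The adjustment described above, starting from the reported inventory and backing out aircraft expected to leave DOD hands through foreign military sales, can be sketched as follows. The inventory count and the 4 percent figure come from the text; the adjustment function itself is an illustrative assumption, not DOD's method.

```python
# Hypothetical sketch: adjust the reported aircraft inventory for units
# expected to leave DOD hands through foreign military sales (FMS) before
# costing a disposal liability. The 18,000 count (Sept. 30, 1996) and the
# 4 percent FMS share (March 1997 AMARC report) come from the text above;
# the adjustment logic is illustrative only.

ACTIVE_INVENTORY = 18_000   # DOD-reported active aircraft
FMS_SHARE = 0.04            # share of AMARC inventory scheduled for FMS

def aircraft_subject_to_disposal(inventory: int, fms_share: float) -> int:
    """Aircraft expected to require DOD-funded disposal."""
    return round(inventory * (1 - fms_share))

print(aircraft_subject_to_disposal(ACTIVE_INVENTORY, FMS_SHARE))  # 17280
```

As the report notes, operational losses are generally replaced, so they are not netted out here.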
Our review focused on five aircraft (the Air Force’s F-16 and B-1B, the Navy’s F-14 and F-18, and the Army’s Apache Helicopter). Although data were available for each of the disposal processes, we performed a more detailed analysis of the costs associated with the removal and disposal of hazardous materials because these costs are more variable and likely to present more complex estimation issues. The information in the following sections indicates the types and sources of information available for DOD to develop an aircraft disposal cost estimate. As stated in SFFAS No. 5, this process may result in a range of potential aggregate costs, the lowest of which should be recorded unless an amount within the range which is most likely to occur is estimable. Demilitarization includes removing weapons and other designated items from the aircraft and then taking the aircraft off line. Other demilitarization actions include removing equipment that has, directly or indirectly, a significant military utility or capacity, such as sensitive radar equipment. A salvage or residual value for the aircraft was deducted from the demilitarization costs, since historically the remains of aircraft are sold as scrap at the time of disposal. As shown in table 2, demilitarization costs varied considerably for the three aircraft in our review for which this information was readily available from program offices. Although the B-1B and the Apache Helicopter are newer aircraft for which demilitarization plans and costs have not yet been developed, disposal cost estimates could be based on cost experience for other aircraft with similar missions. AMARC officials stated that disposal tasks are generally similar among aircraft although the quantity and complexity of specific items may differ. 
For new weapons systems, including aircraft, the disposal costs, including demilitarization costs, are to be developed as part of the life-cycle costs required by the National Defense Authorization Act for Fiscal Year 1995. Jacksonville Naval Depot officials stated that the demilitarization cost for the F-14 was significantly more than the other two aircraft because of the complexity of the disposal work effort and the related costs. Aircraft are stored at AMARC’s long-term storage facility. All openings, cracks, and joints have to be sealed and delicate surfaces protected from the hot sun, wind, and sand. The preservation process is repeated every 4 to 5 years to ensure that each aircraft is adequately protected. According to AMARC’s costing system, the maintenance costs of aircraft in long-term storage are about $400 per aircraft per year. According to an AMARC official, aircraft, on average, are kept in long-term storage for 20 years. Such storage costs could result in a significant liability. For example, if the current active inventory of 18,000 aircraft were all maintained in storage for the average of 20 years and AMARC’s estimated maintenance cost of $400 per aircraft per year were used, the storage costs would be at least $140 million. All five aircraft types we reviewed contained hazardous materials that must be removed, and if necessary, disposed of when the aircraft are taken out of service. Some hazardous materials can be recycled and reused multiple times, but the materials may ultimately have to be disposed of appropriately. For the five aircraft, sufficient information was available in DOD’s and the services’ financial and management information systems to estimate a cost for the removal of hazardous materials contained in these aircraft.
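The storage-cost example above is simple multiplication and can be reproduced directly from the figures in the text (a minimal sketch; the "$140 million" in the text reflects the report's conservative rounding of the exact product):

```python
# Reproduce the report's storage-cost illustration: 18,000 active aircraft
# kept in long-term storage for an average of 20 years at AMARC's estimated
# maintenance cost of $400 per aircraft per year.
AIRCRAFT = 18_000
COST_PER_AIRCRAFT_PER_YEAR = 400   # dollars, per AMARC's costing system
YEARS_IN_STORAGE = 20              # average, per an AMARC official

total = AIRCRAFT * COST_PER_AIRCRAFT_PER_YEAR * YEARS_IN_STORAGE
print(f"${total:,}")  # $144,000,000, stated as "at least $140 million"
```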
Costs associated with disposal of these materials are currently insignificant, but will need to be considered based on assumptions of final disposal methods. There are numerous sources available to DOD for identifying which materials used in aircraft are considered hazardous and have to be cleaned up before aircraft disposal. DOD Manual 4160.21-M, known as the Property Disposal Manual, and 40 Code of Federal Regulations (C.F.R.) 261 identify which materials are considered hazardous. Environmental managers at the services’ program offices and at the depots responsible for the aircraft, as well as maintenance personnel, are knowledgeable about the hazardous materials unique to specific aircraft. In addition, environmental managers at various Defense Reutilization and Marketing Offices (DRMOs) are familiar with the hazardous materials on aircraft. The aircraft we reviewed contain various hazardous materials, as shown in table 3. See appendix I for definitions of these materials. Some hazardous materials in aircraft are not shown in the above table. For example, there are many items on the aircraft, such as cadmium-plated bolts and other small items, that are too numerous to separately remove and account for during the disposal process. Because the cadmium-plated items are sold for scrap and the specific items and quantities are not separately identified, they were not included in the sample aircraft analysis. However, DOD and the services would have to consider the significance of such items in the aggregate on a servicewide or DOD-wide basis. Information on the removal costs of hazardous materials in older aircraft is generally available at AMARC and is based on its experience in aircraft disposal. However, AMARC officials said they did not yet have significant experience in dismantling and disposing of the five aircraft in our review.
Therefore, the officials suggested that a reasonable estimation approach would be to use removal times that are reported by the cognizant depots that perform maintenance on these systems. The removal times for each hazardous material can then be multiplied by AMARC’s hourly labor rate. Using this estimation method, table 4 shows the estimated cost of removing hazardous materials from the five aircraft. The wide variance in hazardous material removal costs can be accounted for by differences in the size and complexity of the five aircraft. For example, the B-1B weighs about 190,000 pounds compared to the smaller F-16 which weighs about 18,000 pounds. Moreover, the B-1B has over 1,000 items associated with pyrotechnics that cost an estimated $92,000 to remove. The cost to remove pyrotechnics from the F-16 is only about $1,700 per aircraft. Similarly, it takes a significant number of staff hours, estimated at about $11,000, to remove the fuel from the B-1B tanks and fuel lines and to take the protective measures for the fuel system. For the F-16, the same procedures take just a few hours at an estimated cost of about $200. The fuel is removed from the aircraft because it can be reused. The Apache Helicopter hazardous material removal cost estimate is much less than for the other aircraft because it contains considerably less hazardous material. For example, it costs an estimated $68 per aircraft to remove pyrotechnics from the Apache compared to about $1,700 to remove pyrotechnics from an F-16. The F-16 cost estimate includes removing one or two ejection seats and canopies and related detonating cord devices, compared to removing only emergency escape explosive bolts and related material for the Apache’s crew doors. For some mission assets, such as nuclear submarines, the actual hazardous material disposal costs are significant. 
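The estimation method the AMARC officials suggested, per-item removal times reported by the cognizant depots multiplied by AMARC's hourly labor rate, can be sketched as below. The hours and labor rate shown are illustrative placeholders, not figures from the report.

```python
# Hypothetical sketch of the suggested removal-cost method: multiply
# depot-reported removal hours for each hazardous material by AMARC's
# hourly labor rate. The hours and rate below are assumed for
# illustration and are not data from the report.

AMARC_HOURLY_RATE = 50.0  # dollars/hour, assumed

removal_hours = {         # depot-reported removal times, illustrative
    "pyrotechnics": 34.0,
    "fuel system": 4.0,
    "batteries": 1.5,
}

def removal_cost(hours_by_item: dict, rate: float) -> float:
    """Estimated hazardous material removal cost for one aircraft."""
    return sum(hours_by_item.values()) * rate

print(f"${removal_cost(removal_hours, AMARC_HOURLY_RATE):,.2f}")
```

The wide variance shown in table 4 falls out of this method naturally: a B-1B simply has far more hours of pyrotechnic and fuel-system work than an F-16 or an Apache.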
However, unlike the removal costs, hazardous material disposal costs for the five aircraft in our review appear not to be significant because these materials are often reutilized, recycled, consumed (as is the case for fuel), or sold. Also, DOD does not track disposal costs by specific aircraft system since hazardous materials are disposed of in bulk. For example, AMARC transfers its nonrecyclable fuel to Davis-Monthan Air Force Base, which then disposes of it along with the base’s waste fuel through its bulk disposal contract. For the first 6 months of fiscal year 1996, AMARC paid Davis-Monthan less than $54,000 to dispose of all of its hazardous material. However, although recycling and reuse currently hold hazardous material disposal costs down, the possibility that reuse or recycling needs and capacity will change in the future must be considered in estimating the ultimate disposal costs for hazardous materials. DOD officials have pointed out that the total disposal cost estimate for aircraft will result in a significant liability—much of which would not require outlays in the current year. Thus, one way to provide a proper context for this reported liability and make it more meaningful to decisionmakers would be to provide, in a footnote to the financial statements, a breakdown of the liability based on the approximate time periods the aircraft are expected to be taken out of service. Table 5 is a simplified illustration of how the aircraft disposal liability could be reported by time period. For the purposes of this illustration, the following assumptions were used: (1) all aircraft had the same disposal costs, (2) 50 percent of the aircraft were currently awaiting disposal and the remaining aircraft were to be disposed of over the next 10 years, and (3) the total estimated liability was $500.
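Under the three stated assumptions, the table 5 breakdown follows mechanically: half the $500 liability is attributed to aircraft already awaiting disposal, and the rest is spread evenly over the next 10 years. A minimal sketch of that schedule:

```python
# Sketch of the report's simplified table 5 illustration: total liability
# of $500, 50 percent attributable to aircraft currently awaiting disposal,
# the remainder spread evenly over the next 10 years (assumptions 1-3).
TOTAL_LIABILITY = 500
CURRENT_SHARE = 0.50
FUTURE_YEARS = 10

current = TOTAL_LIABILITY * CURRENT_SHARE
per_year = (TOTAL_LIABILITY - current) / FUTURE_YEARS

schedule = {"current": current}
schedule.update({f"year {y}": per_year for y in range(1, FUTURE_YEARS + 1)})

print(schedule["current"], schedule["year 1"])  # 250.0 25.0
```

Grouping the yearly amounts into the same periods used in DOD's Future Years Defense Program would provide the budget-accounting link discussed in the text.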
This information could provide an important context for congressional and other budget decisionmakers on the total liability by showing the potential annual impact of the actions that have already occurred or are expected to occur during various budget periods including those outside the annually submitted Future Years Defense Program. Further, if the time periods used to present these data are consistent with budget justification documents, such as DOD’s Future Years Defense Program, this type of disclosure would provide a link between budgetary and accounting information, one of the key objectives of the CFO Act. As of September 30, 1997, DOD had not incorporated SFFAS No. 5 in its Financial Management Regulation. In addition, the DOD Comptroller and the Under Secretary of Defense (Acquisition and Technology) had not issued implementation guidance to the services to assist them in estimating aircraft disposal costs. Such costs are both probable and estimable and therefore meet the criteria stated in SFFAS No. 5 for reportable liabilities. DOD and the military services have information available to develop cost estimates on each of the major aircraft disposal processes. Development of the needed policy and implementing guidance is necessary to help ensure that an estimate of aircraft disposal costs is recorded in DOD’s fiscal year 1997 financial statements as required. Moreover, life-cycle cost estimates that include disposal costs will provide important information to the Congress and other decisionmakers on the true costs of aircraft as well as other weapon systems. We recommend that you ensure that the DOD Comptroller incorporate SFFAS No. 5 in DOD’s Financial Management Regulation, the DOD Comptroller and the Under Secretary of Defense (Acquisition and Technology) promptly issue joint implementing guidance for the services on the SFFAS No. 
5 requirements for recognition of a liability for aircraft disposal costs, and the DOD Comptroller include the estimated aircraft disposal liability in DOD’s fiscal year 1997 financial statements. In commenting on a draft of this report, Department of Defense officials concurred with our recommendations that SFFAS No. 5 be incorporated in DOD’s Financial Management Regulation and that joint implementing guidance be issued promptly on the SFFAS No. 5 requirements for recognition of a liability for aircraft disposal costs. In addition, DOD stated that current disposal cost estimates can be reasonably determined for aircraft that have been in the active inventory for some time. However, DOD stated that it would be necessary to delay the reporting of the aircraft disposal liability until fiscal year 1998 because the development and coordination of procedures and reporting guidance would take time to complete. DOD also stated that the cleanup cost provisions in SFFAS No. 6 must be considered. SFFAS No. 5 was issued almost 2 years ago to allow agencies ample time to develop implementing policies and procedures prior to its fiscal year 1997 effective date. As stated in the report, information is available on all of the major aircraft disposal processes to develop a reasonable estimate of these costs. Such an estimate need not be precise—SFFAS No. 5 permits the reporting of a range. Also, as noted in this report, the cleanup cost provisions of SFFAS No. 6 do not affect the reporting of the aircraft disposal liability. Accordingly, we believe that DOD, with a concentrated effort, can develop an estimate of aircraft disposal costs for its fiscal year 1997 financial statements. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations.
You should submit your statement to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight within 60 days of the date of this report. A written statement also must be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made over 60 days after the date of this report. We are sending copies of this report to the Chairmen and Ranking Minority Members of the House and Senate Committees on Appropriations, the House and Senate Committees on the Budget, the Senate Committee on Armed Services, the House Committee on National Security, the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight and its Subcommittee on Government Management, Information, and Technology, and the Director of the Office of Management and Budget. We are also sending copies to the Acting Under Secretary of Defense (Comptroller), the Air Force Assistant Secretary for Financial Management and Comptroller, the Army Assistant Secretary for Financial Management and Comptroller, the Navy Assistant Secretary for Financial Management and Comptroller, the Under Secretary of Defense (Acquisition and Technology), the Deputy Under Secretary of Defense for Environmental Security, and the Acting Director, Defense Finance and Accounting Service. Copies will be made available to others upon request. Please contact me at (202) 512-9095 if you have any questions concerning this report. Major contributors to this report are listed in appendix II. 
Hazardous material - Any waste that, because of its quantity, concentration or toxicity, corrosiveness, mutagenicity or flammability, or physical, chemical, or infectious characteristics may (1) cause, or significantly contribute to, an increase in mortality or an increase in serious irreversible or incapacitating reversible illness or (2) pose a substantial present or potential hazard to human health or the environment when improperly treated, stored, transported, disposed of, or otherwise managed. Some of the hazardous materials contained in the aircraft we reviewed are discussed below.

Batteries - Batteries consist of the following types: lead-acid, lithium-sulfur dioxide, magnesium, silver-bearing, mercury, and nickel cadmium. Unless batteries are to be recycled or reused, they must be turned in as hazardous material or hazardous waste.

Composites - Carbon composite fiber material made of long carbon fibers mixed with bonding and hardening agents, such as epoxy resins. The health hazards associated with composite fibers appear to be similar to the effects of fiberglass, including inhalation of the fibers, which can cause bronchial irritation.

Coolant - A fluid that circulates through a machine or over some of its parts in order to draw off heat. This includes chemical substances used in aircraft for cooling radar and related equipment. Certain forms of this material may be harmful if skin contact occurs.

Fire suppressant - Substances used to keep materials on aircraft, such as fuel, from igniting and burning. Halon, one such suppressant, is an ozone-depleting substance that is reclaimed or recovered.

Fuel cells - Fuel cells, which hold fuel, are not in themselves considered hazardous material, but because they are contaminated with fuel they can become hazardous. Aviation fuel contains benzene and toluene, both hazardous materials.
Hydrazine - Supplemental liquid propellant, found only on the F-16, used to power an emergency power unit in the event of main engine failure. Hydrazine is an extremely dangerous material if inhaled and has to be specially handled during transfer by teams dressed in protective gear.

Magnesium Thorium - Alloy of thorium and magnesium used to produce a strong, lightweight aircraft component. Thorium presents an internal and external radiation hazard.

Petroleum, oil, and lubricants - Includes jet fuel, hydraulic fluid, antifreeze products, and other lubricants found on aircraft. In some states, these products are not considered hazardous.

Pyrotechnics - Explosive devices used to jettison the canopy and activate the pilot’s ejection seat. On helicopters, these devices are used to shear off hinge pins on the fuselage doors to enable crew to extricate themselves in the event of a crash-landing.

Dieter M. Kiefer, Assistant Director
Marshall S. Picow, Auditor in Charge
Gary L. Nelson, Auditor
Darryl S. Meador, Evaluator

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a legislative requirement, GAO reviewed: (1) the status of the Department of Defense's (DOD) efforts to implement the new federal accounting standard for disclosure of liabilities, such as aircraft disposal; and (2) whether an estimate of the minimum disposal liability for aircraft, including the removal and disposal of hazardous materials, could be made. GAO noted that: (1) DOD has not implemented the federal accounting standard that requires recognizing and reporting liabilities such as those associated with aircraft disposal, nor has DOD provided guidance to the military services; (2) aircraft disposal is an ongoing process and the cost can be reasonably estimated; (3) accordingly, these activities meet the criteria for a reportable liability; (4) information on the three major disposal processes--demilitarization, storage and maintenance, and hazardous materials removal and disposal--is available to develop cost estimates; (5) Congress has recognized the importance of accumulating and considering disposal cost information; and (6) in the National Defense Authorization Act for Fiscal Year 1995, Congress required DOD to develop life-cycle environmental costs, including demilitarization and disposal costs, for major defense acquisition programs.
Artisanal and small-scale mining of gold in the DRC is a significant driver of the country's economy. ASM gold mining in the DRC employs a large number of people, constitutes a potential major source of tax revenue, and represents a potential engine of development for the country, according to a 2015 study by USAID. More than 1,000 ASM gold mine sites of varying size operate in the DRC, primarily in remote provinces in the eastern region, commonly employing groups of 30 to 300 miners (see app. II for a map showing DRC provinces). In parts of the DRC, artisanal and small-scale mining provides alternative employment opportunities in the absence of a viable agricultural sector. In addition, mining—including ASM gold mining—provides miners with cash on hand and requires little or no specialized knowledge. ASM gold miners work as diggers, rock crushers, sorters, and traders. Most ASM gold miners use shovels, picks, and other rudimentary tools, such as mining pans. Because of the scarcity of mechanical equipment in some areas, ASM gold miners grind the mined ore manually into a powder, often with a pair of rocks or a tire rim, and extract gold from this powder using a sluice, mining pan, and washing pool and sometimes using hazardous chemicals such as mercury (see fig. 1). According to a 2016 study by the International Peace Information Service (IPIS), ASM gold mining sites in the eastern DRC annually produced a combined total of about 12 metric tons of gold, with an estimated value of $437 million, in 2013 through 2015. This study found that a miner produced an average of 0.17 grams per day, worth about $6.07 on global markets, of which the miner retained only $1.84 to $2.75 per day. Moreover, significant portions of many miners' wages went toward paying off loans (i.e., prefinancing) that the miners incurred for food, tools, and other basic necessities. As a result, when production levels were low, miners could easily enter a cycle of indebtedness.
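The per-miner figures from the IPIS study imply a gold price of roughly $36 per gram and a retained share of roughly 30 to 45 percent. A minimal check of that derived arithmetic (the per-gram price and shares are computed here for illustration; they are not figures stated in the study):

```python
# Derived arithmetic from the 2016 IPIS study figures cited above.
# The implied per-gram price and the miners' retained share are
# illustrative calculations, not numbers stated in the study.
grams_per_day = 0.17
value_per_day = 6.07          # USD, value on global markets
retained = (1.84, 2.75)       # USD per day actually kept by miners

price_per_gram = value_per_day / grams_per_day
retained_share = [r / value_per_day for r in retained]

print(round(price_per_gram, 2))              # ~35.71 USD per gram
print([round(s, 2) for s in retained_share]) # roughly 30 to 45 percent
```

The implied price of about $35.71 per gram corresponds to roughly $1,100 per troy ounce, consistent with world gold prices in the 2013 through 2015 period the study covers.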
According to the government of the DRC, the average annual official ASM gold production in 2010 through 2016 was about 279 kilograms (0.28 metric tons) of gold, with an estimated value of $10.1 million per year. Total official ASM gold production was about 1,955 kilograms (1.95 metric tons), with an estimated value of about $71 million in 2010 through 2016 (see table 1). The reported official supply chain for ASM gold produced in the DRC involves multiple key actors authorized by the DRC government, according to reports we reviewed and stakeholders we interviewed. However, these and other sources indicate that the vast majority of DRC-sourced ASM gold is mined, traded, and exported unofficially, without authorization. Additionally, the majority of ASM gold miners reportedly work in the presence of elements of the Congolese army or illegal armed actors, according to a report and stakeholders. The official supply chain for ASM gold produced in the DRC involves multiple actors, including miners, local traders, and exporters, according to USAID and UNGoE reports we reviewed and stakeholders we interviewed. Those sources indicated that these key actors are required to obtain government authorization, such as official mining cards, or register with the provincial or national government to trade or export ASM gold in the DRC. However, according to these reports and stakeholders, almost all DRC-sourced ASM gold is produced and traded unofficially and smuggled from the country. Figure 2 illustrates the reported official and unofficial supply chains for DRC-sourced ASM gold. According to these sources, unregistered local traders or exporters generally sell DRC-sourced ASM gold to regional buyers in Uganda, Burundi, or Rwanda or global buyers in Dubai, United Arab Emirates.
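The official production totals cited above imply the reported annual averages; a minimal check, assuming a simple mean over the 7-year period:

```python
# Sanity check of the reported official ASM gold production figures
# for 2010 through 2016 (a 7-year span). The totals are from the
# report; the simple-mean averaging is an assumption.
total_kg = 1955        # total official production, kilograms
total_value_musd = 71  # total estimated value, millions of USD
years = 7              # 2010 through 2016, inclusive

print(round(total_kg / years))             # ~279 kg per year, as reported
print(round(total_value_musd / years, 1))  # ~10.1 million USD per year
```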
The USAID and UNGoE reports we reviewed and stakeholders we interviewed generally described the official supply chain for DRC-sourced ASM gold as follows:

Artisanal and small-scale miners purchase government mining cards and join cooperatives, which allows them to work in artisanal mining zones known as the zone d'exploitation artisanale (ZEA). According to a USAID report, ASM gold is considered legal under DRC law when it is produced by a registered cooperative working within a ZEA or a mining area that has been inspected and found to be "green"—that is, conflict free—within the past year by a government-accredited validation mission. Miners in provinces where most of the ASM gold is produced are subject to an average provincial production tax of about 8 percent, according to USAID.

Local traders, known as negociants, register with the government to purchase gold from villages near mine sites and sell it to larger traders or exporters. Generally, there are two types of local traders: petit negociants, who buy and sell small quantities (one-half gram or less), and grande negociants, who buy and sell larger quantities (1 to 50 kilograms) of gold. According to a USAID report, registered local traders in these provinces are subject to an average provincial tax of about 1 percent on sales volume.

National exporters, known as comptoirs, register with the national government to export gold purchased from the local traders. Registered exporters are subject to a 2 percent national export tax, according to a USAID report.

Global buyers, including international refiners, jewelers, and banks, buy ASM gold from registered exporters in the DRC for further processing for use in electronic components, jewelry, or gold bars.
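Taken together, the average taxes described above imply a combined official burden on legally traded gold. A rough illustration, assuming (as the sources do not specify) that the production, trading, and export taxes apply sequentially to the remaining value:

```python
# Rough illustration of the combined official tax burden along the
# supply chain described above. Applying the three taxes sequentially
# to the remaining value is an assumption; the report gives only the
# individual average rates.
value = 100.0               # starting value, arbitrary units
taxes = (0.08, 0.01, 0.02)  # production, trader, and export taxes
for rate in taxes:
    value *= 1 - rate
combined = 1 - value / 100.0
print(f"{combined:.1%}")    # roughly 10.7% combined
```

A burden on this order helps explain why, as discussed below, actors along the chain have an incentive to trade outside official channels.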
Despite the existence of an official supply chain, almost all ASM gold is smuggled out of the DRC and is therefore not reflected in official export statistics or subject to provincial or national taxes, according to reports we reviewed as well as stakeholders we interviewed. These sources indicate that smuggling activities often begin at the mine site and involve both registered and unregistered actors. For example, ASM gold miners mine for gold at undesignated sites and sell it to both registered and unregistered local traders. Unregistered traders purchase gold at or near mining areas and either sell the gold to larger registered or unregistered traders or exporters located in regional trading centers in the DRC—for example, in Bukavu, South Kivu Province, or Butembo, North Kivu Province—or smuggle the gold from the DRC themselves, according to USAID and UNGoE reports reflecting fieldwork completed in 2014 and 2016. Registered local traders also participate in smuggling by selling gold to unregistered exporters and to regional and global buyers outside the DRC. Additionally, registered exporters participate in smuggling by underreporting their exports of ASM gold to avoid taxes, according to reports reflecting fieldwork completed in 2014 through 2016. A UNGoE study estimated that unreported exports of ASM gold from the DRC and neighboring countries in the first 9 months of 2015 amounted to about $200 million. Some of the factors that contribute to smuggling include limited government control over the remote areas where ASM gold is primarily produced, inadequate infrastructure, and corruption, according to reports we reviewed and DRC government officials we interviewed. DRC government officials told us that smuggling is also a consequence of weak border enforcement. Reports we reviewed indicate that the smuggling of ASM gold from the DRC has resulted in a substantial loss of tax revenue. 
Most ASM gold produced in the DRC is smuggled through regional buyers in neighboring African countries and then to Dubai, UAE, according to several reports. UNGoE, OECD, and NGO officials told us that gold buyers located outside the DRC—for example, in Uganda—often rely on networks of traders who purchase gold at mines in the eastern DRC and smuggle it out of the country. These sometimes complex supply chain networks include traders and exporters who are often from other countries in the region or from China, India, the Middle East, or Europe, according to an OECD representative. Reports we reviewed and stakeholders we interviewed also noted that ASM gold smuggled from the DRC is typically transported from Dubai to other international markets such as India or Switzerland. Since 2012, the Dubai Multi Commodities Centre, a UAE government entity, has provided guidelines on responsible sourcing to gold refiners through its accreditation program. However, joining the program is voluntary, and the entity's jurisdiction does not include all refineries in the UAE, according to Dubai Multi Commodities Centre officials. In interviews, Dubai government officials and refiners and a representative of an auditing firm told us that accredited refiners generally do not purchase gold directly from the DRC or most adjoining countries (see app. II for a map showing the countries adjoining the DRC). Furthermore, these individuals noted that refiners take various actions to ensure that gold sourced from these countries does not enter their supply chains. However, the refiners also acknowledged having purchased gold in the local gold market in Dubai (known as the gold souk), despite the fact that, according to a UNGoE report, traders and jewelers operating in the souk may have purchased gold from the DRC. In interviews, traders and jewelers at the gold souk told us that they required minimal documentation and generally did not ask questions about country of origin when buying gold.
For example, one trader told us he was willing to purchase up to 50 kilograms of gold without any source-of-origin documentation. As a result, the extent to which gold from the souk is commingled with gold handled by refiners who follow responsible sourcing guidelines is unclear. In recent years, progress has been made in reducing the presence of armed groups at tantalum, tin, and tungsten mine sites, according to UNGoE, OECD, and IPIS reports. However, the widespread availability of gold in remote, difficult-to-access areas of the eastern DRC and the lack of a functioning traceability system allow armed groups to operate at gold mine sites with minimal government and international oversight. According to reports we reviewed and stakeholders we interviewed, interference by armed groups of state and nonstate actors occurs primarily at mine sites through, among other things, illegal taxation and control of mining areas, pillaging, and forced labor. For example, a 2016 IPIS study found that, as of 2015, an estimated 64 percent of ASM gold miners worked at mines with state and nonstate armed group interference. Armed groups have also been known to operate illegal road barriers, where they collect revenue from miners or traders transporting gold. Furthermore, according to the IPIS study and UNGoE officials, most instances of armed group interference at mining sites involve illegal taxation. As of 2016, among the conflict minerals, gold provided by far the most significant financial benefit to armed groups, according to UNGoE. Elements of the Armed Forces of the Democratic Republic of the Congo (FARDC) constitute the largest armed presence and source of interference at gold mine sites, according to the 2016 IPIS study. According to DRC government officials, although FARDC is present to maintain security at or around gold mine sites, some undisciplined FARDC elements have interfered at mining sites.
However, these officials noted that the military is working to bring such elements under control by taking legal action against FARDC officers and soldiers found to be in violation of the law. In addition, a report by the Congo Research Group, based on fieldwork conducted in 2015, indicates that fragmentation has greatly increased among illegal nonstate armed groups in eastern DRC. With the disappearance or weakening of armed groups such as the March 23 Movement and the Democratic Forces for the Liberation of Rwanda from the DRC, illegal armed groups are now smaller and more fragmented, tending to pillage mines rather than impose permanent control, according to IPIS representatives. IPIS representatives noted that FARDC elements, in contrast, tend to impose more permanent control and illegal taxation. The DRC government and USAID, as well as several other entities, have undertaken initiatives to encourage the sourcing of conflict-free ASM gold from the DRC. However, some of these initiatives face challenges, such as the limited number of validated mine sites, as well as ongoing security risks. To mitigate supply chain–related concerns, in 2015, the DRC government developed the Traceability Initiative for Artisanal Gold (ITOA) to establish conflict-free sources of ASM gold. ITOA relies on a system of tamper-proof envelopes and agents to track and certify the source and chain of custody of gold. The DRC government seeks to implement ITOA at mining sites that have been validated as "green" (i.e., conflict free, with no child labor) and are located in officially designated ZEAs. According to USAID documents, the DRC government aims to pilot ITOA at two mine sites in the Maniema and South Kivu provinces and to scale the system on the basis of the pilot. DRC officials told us that they expect the ITOA pilot to be launched in the summer of 2017.
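ITOA itself is a paper-based envelope-and-agent system, but the chain-of-custody idea it relies on can be sketched in software: each transfer record commits to a hash of the previous record, so tampering with any earlier entry is detectable. The record fields and holders below are illustrative assumptions, not ITOA's actual format:

```python
# Minimal sketch of a hash-linked chain-of-custody ledger, illustrating
# the traceability concept behind systems like ITOA. The record fields
# ("holder", "grams") and the holders used below are hypothetical;
# ITOA's actual paper-based format is not described in the sources.
import hashlib
import json

def add_transfer(chain, holder, grams):
    """Append a custody record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {"holder": holder, "grams": grams, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain):
    """Recompute every hash; any altered earlier record breaks the chain."""
    prev = "genesis"
    for rec in chain:
        body = {k: rec[k] for k in ("holder", "grams", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
for holder in ("mine site", "negociant", "comptoir"):
    add_transfer(chain, holder, 50.0)
print(verify(chain))          # True
chain[0]["holder"] = "other"  # tamper with the first record...
print(verify(chain))          # ...and verification fails: False
```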
However, the limited number of mine sites that have been validated and thus licensed to operate, as well as the relatively high provincial taxes in the mining sector in the DRC compared with taxes in neighboring countries, as reported by USAID and UNGoE, continue to limit incentives for sourcing conflict-free ASM gold. For example, as of April 2016, only 37 of more than 1,000 ASM gold mine sites had been validated as green by a DRC government-accredited joint validation mission. The government's ability to validate mines has been hindered by factors such as insecure conditions and lack of funding, according to a 2015 report by the Enough Project. Furthermore, relatively high government taxes discourage actors along the supply chain from selling gold through legal channels, according to reports we reviewed and stakeholders we interviewed. A USAID report found that regional tax rates in the DRC and neighboring countries had been largely equalized but that provincial taxes in the DRC remained high. For example, Support Service and Management of Small Scale Mining, or SAESSCAM—a provincial government entity—requires miners to pay a 10-percent production tax in addition to other fees, according to a USAID document. SAESSCAM is responsible for providing training in safe and effective mining techniques, among other things, but focuses primarily on collecting taxes from miners, according to USAID and OECD reports we reviewed. USAID is assisting the DRC government in its efforts to encourage the responsible sourcing of conflict-free ASM gold and provides training to mining cooperatives and government officials in the mining sector. Since 2015, USAID has partnered with the DRC government to assist with implementing ITOA, including helping to validate mine sites.
In addition, since 2016, USAID has worked with Tetra Tech through the Capacity Building for Responsible Minerals Trade Program and Partnership Africa Canada (PAC) through the Just Gold project to scale up pilot initiatives for the production and sale of ASM gold. These pilot initiatives are focused on increasing the volume of conflict-free gold and improving the integrity of traceability systems in the DRC. For example, miners are taught more-sustainable exploitation techniques and offered project equipment. As of March 2017, the Just Gold project had exported about 1,429 grams (1.4 kilograms) of ASM gold. Table 2 shows the two pilot projects USAID had initiated as of June 2017 to develop and expand existing traceability schemes for ASM gold produced in the DRC. USAID progress reports cite the limited number of buyers, including refiners, and security as key challenges affecting these pilot projects. According to USAID officials, some potential buyers are unwilling to purchase ASM gold from the DRC because of associated risks related to potential armed group interference. USAID officials also told us that the low volume of ASM gold available from the limited number of pilot mine sites poses an additional challenge to attracting buyers. In interviews, Dubai Multi Commodities Centre officials and refinery representatives in Dubai told us that, while they do not currently purchase any ASM gold produced in the DRC, they would be open to exploring options to buy such gold if they had some assurance from a partner such as USAID. USAID officials explained that their current focus is on identifying London Bullion Market Association (LBMA) refiners, who examine their supply chains more closely. Additionally, ongoing security risks affect the implementation of pilot projects, according to USAID progress reports. For example, in February 2017, one of the potential pilot sites at Matete was attacked, leading to suspension of on-site activities.
According to USAID documents, an armed group of approximately 30 men attacked several buildings in Matete, killing one military soldier and taking hostages. Contractor staff were immediately moved to another compound for security and were subsequently evacuated, unharmed. In 2016, USAID established a target for its pilot projects of developing traceability and due diligence schemes for ASM gold at 25 mine sites. USAID met this target in December 2016, having developed traceability and due diligence schemes at 26 mine sites, primarily through the Just Gold project. USAID officials told us that they are supporting the development of a traceability scheme for ASM gold so that gold from validated sites can comply with ICGLR standards. Other entities—OECD, ICGLR, and LBMA—have undertaken regional initiatives to encourage the responsible sourcing of gold.

OECD guidance. Since 2012, OECD has developed guidance on encouraging responsible supply chains for ASM gold. For example, the guidance notes that stakeholders should engage in legalizing and formalizing the artisanal mining communities to encourage conflict-free sourcing. In 2016, OECD reported that implementation of Section 1502 of the Dodd-Frank Act had increased awareness about the supply chain of conflict minerals in the region.

ICGLR regional certification mechanism. In 2010, ICGLR developed a regional certification mechanism to ensure that conflict minerals, including gold, are fully traceable. However, two reports we reviewed raised concerns about the validity of ICGLR certificates issued to comptoirs exporting ASM gold, given that traceability schemes for ASM gold are lacking. In addition, UNGoE, OECD, and DRC officials told us that the ICGLR's mechanism has not been fully implemented and is not adequately monitored owing to limited incentives for member states to accomplish regional goals.

LBMA accreditation.
In 2012, LBMA, which represents the global market for gold and silver, established its "Responsible Gold Guidance" to ensure that the gold refiners it accredits purchase only conflict-free gold. According to LBMA, compliance with this framework is mandatory for all refiners wishing to sell into the London bullion market. USAID officials told us that USAID is seeking to identify buyers from the LBMA refiners for its ASM gold pilot projects. Since we reported in August 2016, a USAID-funded, population-based study published in 2016 has provided additional data on sexual violence in the DRC. In addition, as we previously reported, population-based surveys on sexual violence are under way or planned in two adjoining countries, Burundi and Uganda. We also identified some new case-file data on sexual violence in the DRC and adjoining countries; however, as we reported previously, case-file data on sexual violence are not suitable for estimating an overall rate of sexual violence. Finally, a 2017 UN report indicates that the DRC government has made some progress in addressing sexual violence. We identified a USAID-funded, population-based study surveying the rate of sexual violence in the eastern DRC that had been published since August 2016. Published in September 2016, the study used data collected in June and July 2016 to estimate that 31.6 percent of women and 32.9 percent of men reported exposure to some form of sexual and gender-based violence in their lifetime. Among women who were exposed to sexual violence, 12.7 percent reported exposure to conflict-related sexual violence, while 87.4 percent reported exposure to community-based sexual violence. Among men who were exposed to sexual violence, 68.1 percent reported exposure to conflict-related sexual violence, while 31.9 percent reported exposure to community-based sexual violence.
Table 3 summarizes the results of this and other selected population-based surveys of the rate of gender-based sexual violence in the DRC that have been published since 2008. The surveys' results are not directly comparable because of variations in the periods of reported incidents, the genders and ages of survey participants, and the geographic areas covered, as well as the definitions of sexual violence used. For example, while the August 2010 survey estimated the rate of sexual violence over a lifetime, other surveys estimated the rate of sexual violence over both a lifetime and a 12-month period. Additionally, some surveys collected information only on women, while others surveyed both men and women. In addition to these studies of sexual violence in the eastern DRC, population-based surveys in Uganda and Burundi are under way or planned, as we previously reported. According to ICF International, fieldwork for the 2016 Uganda Demographic and Health Survey is now complete, and the final report is expected in October 2017; fieldwork for the 2016 Burundi survey is currently ongoing, and the final report is expected in December 2017. Figure 3 shows the anticipated publication dates for population-based surveys on sexual violence that are currently under way or planned in Uganda and Burundi. The figure also shows the publication dates for the population-based surveys, with data on rates of sexual violence in the eastern DRC, Rwanda, and Uganda, that have been published since we started reporting on sexual violence in the region in 2011. Since we reported in August 2016, State and UN entities have provided additional case-file information about instances of sexual violence in the DRC and adjoining countries. State's annual country reports on human rights practices provided the following case-file data pertaining to sexual violence in the DRC, Burundi, Rwanda, and Uganda:

DRC.
In 2016, the United Nations documented 267 adult victims and 171 child victims of sexual violence in conflict. This violence was perpetrated by illegal armed groups as well as state security forces and civilians and was concentrated in North Kivu Province, according to State.

Burundi. One organization, the Seruka Center, working with victims of sexual violence in Bujumbura reported 1,288 cases of sexual assault during 2016. According to State, the actual number of rapes was likely higher, given factors that prevent women and girls from seeking medical treatment. Another organization, the Humura Center, which is responsible for investigating cases of sexual violence and rape, received 160 cases of sexual and gender-based violence in 2016, according to State.

Rwanda. In 2016, Rwanda's National Public Prosecution Authority reported 190 cases of rape. According to State's report, domestic violence against women in 2016 was common, but most incidents were not reported or prosecuted.

Uganda. State's 2016 report reiterated that rape remained a serious problem throughout the country and that the government did not consistently enforce the law. As we noted previously, the police crime report through June 2015, the most recent available, registered 10,163 reported sexual offenses.

In addition, UN entities reported the following case-file data about sexual violence in the DRC and Burundi:

DRC. Data collected by the Congolese government with support from the UN Population Fund indicate that from January 2016 through March 2017, gender-based violence service providers responded to at least 24,364 incidents of gender-based violence. Women and girls were the victims in 97 percent of the reported cases in 2016.
In addition, in 2016, the UN Organization Stabilization Mission in the DRC, known as MONUSCO, verified 637 cases of conflict-related sexual violence, with illegal armed groups responsible for 74 percent of cases, and state security forces, mainly FARDC, responsible for the remaining 26 percent of cases.

Burundi. In 2016, UNHCR—the UN Refugee Agency—reported 2,250 gender-based violence incidents targeting refugees in neighboring countries, with 23 percent of incidents occurring in Burundi or en route from the country.

Since 2013, the DRC government has made some progress in addressing sexual violence in the eastern DRC, according to a 2017 UN report. The report notes improvements in the capacity of DRC state security forces to address sexual violence in the following respects: adoption of codes of conduct prohibiting sexual violence; investigation of alleged incidents in order to hold perpetrators accountable; and formation of specialized police units capable of addressing sexual violence. In addition, law enforcement measures such as arrests and prosecutions have increased, and training for the military has improved, according to an official from the UN Special Representative of the Secretary-General on Sexual Violence in Conflict. This official also noted that in 2014, the DRC government appointed a Personal Representative to the President on Sexual Violence and Child Recruitment to advise the President on sexual violence issues, ensuring that sexual violence remains on the government's agenda. More recently, the DRC government and the United Nations have expressed interest in exploring linkages between mining and sexual violence, according to this official.
However, the official told us that while reports suggest a link between mining and sexual violence in the region, the UN and the DRC government have not been able to prove such a linkage because of limited resources for travel to the areas where mining occurs and the limited availability of women with specialized knowledge to investigate these issues. We provided a draft of this report to the SEC, State, and USAID for comment. State and USAID provided technical comments, which we incorporated as appropriate. SEC did not provide comments. We are sending copies of this report to appropriate congressional committees and to the SEC, State, and USAID. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. In this report, we provide information about (1) the supply chain for gold produced through artisanal and small-scale mining in the Democratic Republic of the Congo (DRC); (2) efforts by the DRC and the U.S. government and others that may encourage the sourcing of conflict-free artisanal and small-scale mined (ASM) gold; and (3) information on sexual violence in the eastern DRC and neighboring countries published since August 2016, when we last reported on this topic. To address our first two objectives, we reviewed key reports and documents from the U.S. Agency for International Development (USAID), international organizations, and nongovernmental organizations (NGO). We reviewed these reports for methodological rigor, relevance, and timeliness to ensure that they were sufficiently reliable to support their own conclusions or conclusions we made based on their work.
We also reviewed these reports’ methodologies related to site selection, sources and quality of evidence, and the nature and timing of fieldwork in the DRC. We reviewed U.S. agency documents, such as a 2015 USAID-funded report related to conflict minerals in the DRC, as well as USAID internal documents that included a program implementation plan and annual and quarterly internal progress reports on responsible sourcing of ASM gold in the DRC. We also reviewed annual reports by the United Nations Group of Experts (UNGoE) from 2011 through 2016, annual baseline reports on the DRC by the Organisation for Economic Co-operation and Development (OECD) from 2014 and 2015, and reports by NGOs such as the Enough Project and Global Witness. (For a complete listing of the documents we reviewed, see the bibliography at the end of this report.) In reviewing these reports, we focused on discussion of the ASM gold supply chain; associated barriers and incentives, if any; and efforts to encourage responsible sourcing. We also interviewed Department of State (State), USAID, and United Nations (UN) officials and OECD and NGO representatives. We traveled to Dubai, United Arab Emirates, where we interviewed officials from the Ministry of Economy and Dubai Multi Commodities Centre, representatives of gold refineries, accounting firms, local traders, and jewelers. We interviewed DRC government officials in Santa Clara, California, and Washington, D.C., regarding the local supply chain for gold and efforts to ensure responsible sourcing. To address our third objective, we identified and assessed any information on sexual violence in eastern DRC and the three adjoining countries—Rwanda, Uganda, and Burundi—that had been published or become otherwise available since we issued our August 2016 report on sexual violence in these areas.
We discussed the collection of sexual violence-related data in the DRC and adjoining countries, including population-based survey data and case-file data, during interviews with State and USAID officials and with NGO representatives and researchers whom we interviewed for our prior review of sexual violence rates in eastern DRC and adjoining countries. We also interviewed an official from the UN Special Representative of the Secretary-General on Sexual Violence in Conflict. We conducted this performance audit from August 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Democratic Republic of the Congo (DRC) is a vast, mineral-rich nation with an estimated population of about 75 million people and an area that is roughly one-quarter the size of the United States, according to the United Nations. Nine countries adjoin the DRC. Figure 4 shows the DRC provinces and adjoining countries. In addition to the individual named above, Godwin Agbara (Assistant Director), Farahnaaz Khakoo-Mausel (Analyst-in-Charge), Andrew Kurtzman, Elisa Yoshiara, Reid Lowe, Justin Fisher, Michael Hoffman, Grace Lui, and Neil Doherty made key contributions to this report. Bafilemba, Fidel, and Sasha Lezhnev. Congo’s Conflict Gold Rush: Bringing Gold into the Legal Trade in the Democratic Republic of the Congo. Washington, DC: Enough Project, April 2015. Blore, Shawn. Capacity Building for a Responsible Minerals Trade (CBMRT): Working with Producers to Responsibly Source Artisanal Gold from the Democratic Republic of the Congo. Washington, D.C.: U. S. Agency for International Development, May 2015. Dranginis, Holly. 
Going for Gold: Engaging the Jewelry Industry in Responsible Gold Sourcing in Africa’s Great Lakes Region. Washington, D.C.: Enough Project, November 2014. Global Witness. River of Gold: How the State Lost Out in an Eastern Congo Gold Boom, while Armed Groups, a Foreign Mining Company, and Provincial Authorities Pocketed Millions. London, United Kingdom: July 2016. Global Witness. City of Gold: Why Dubai’s First Conflict Gold Audit Never Saw the Light of Day. London, United Kingdom: February 2014. Kelly, Jocelyn T.D. “‘This Mine Has Become Our Farmland’: Critical Perspectives on the Coevolution of Artisanal Mining and Conflict in the Democratic Republic of the Congo.” Resources Policy, vol. 40 (January 2014): 100-108. Mthembu-Salter, Gregory. Baseline Study One: Musebe Artisanal Mine, Katanga, Democratic Republic of Congo. Paris, France: Organisation for Economic Co-operation and Development, May 2014. Mthembu-Salter, Gregory. Baseline Study Two: Mukungwe Artisanal Mine, South Kivu, Democratic Republic of Congo. Paris, France: Organisation for Economic Co-operation and Development, November 2014. Mthembu-Salter, Gregory. Baseline Study Three: Production, Trade and Export of Gold in Orientale Province, Democratic Republic of Congo. Paris, France: Organisation for Economic Co-operation and Development, May 2015. Mthembu-Salter, Gregory. Baseline Study Four: Gold Trading and Export in Kampala, Uganda. Paris, France: Organisation for Economic Co-operation and Development, May 2015. Organisation for Economic Co-operation and Development. Report on the Implementation of the Recommendation on Due Diligence Guidance for Responsible Supply Chains of Minerals from Conflict-Affected and High-Risk Areas. Paris, France: April 28, 2016. Organisation for Economic Co-operation and Development. Mineral Supply Chains and Conflict Links in Eastern Democratic Republic of Congo: Five Years of Implementing Supply Chain Due Diligence. Paris, France: November 19, 2015. 
Partnership Africa Canada. All That Glitters Is Not Gold: Dubai, Congo and the Illicit Trade of Conflict Minerals. Ottawa, Canada: May 2014. Southern Africa Resource Watch. Congo’s Golden Web: The People, Companies and Countries That Profit from the Illegal Trade in Congolese Gold. Johannesburg, South Africa: May 2014. Stearns, Jason, and Christoph Vogel. The Landscape of Armed Groups in the Eastern Congo. New York, N.Y.: Congo Research Group, December 2015. United Nations. Letter Dated 28 December 2016 from the Group of Experts on the Democratic Republic of the Congo Addressed to the President of the Security Council. New York, N.Y.: December 28, 2016. United Nations. Letter Dated 23 May 2016 from the Group of Experts on the Democratic Republic of the Congo Addressed to the President of the Security Council. New York, N.Y.: May 23, 2016. United Nations. Letter Dated 12 January 2015 from the Chair of the Security Council Committee Established Pursuant to Resolution 1533 (2004) Concerning the Democratic Republic of the Congo Addressed to the President of the Security Council. New York, N.Y.: January 12, 2015. United Nations. Letter Dated 22 January 2014 from the Coordinator of the Group of Experts on the Democratic Republic of the Congo Addressed to the President of the Security Council. New York, N.Y.: January 23, 2014. U.S. Department of the Interior. Conflict Minerals from the Democratic Republic of the Congo—Gold Supply Chain. Washington, D.C.: U.S. Geological Survey, October 2015. Weyns, Yannick, Lotte Hoex, and Ken Matthysen. Analysis of the Interactive Map of Artisanal Mining Areas in Eastern DR Congo. Antwerp, Belgium: International Peace Information Service, October 2016. SEC Conflict Minerals Rule: 2017 Review of Company Disclosures in Response to the U.S. Securities and Exchange Commission Rule. GAO-17-517R. Washington, D.C.: April 26, 2017. Conflict Minerals: Insights from Company Disclosures and Agency Actions. GAO-17-544T. Washington, D.C.: April 5, 2017. 
SEC Conflict Minerals Rule: Companies Face Continuing Challenges in Determining Whether Their Conflict Minerals Benefit Armed Groups. GAO-16-805. Washington, D.C.: August 25, 2016. SEC Conflict Minerals Rule: Insights from Companies’ Initial Disclosures and State and USAID Actions in the Democratic Republic of the Congo Region. GAO-16-200T. Washington, D.C.: November 17, 2015. SEC Conflict Minerals Rule: Initial Disclosures Indicate Most Companies Were Unable to Determine the Source of Their Conflict Minerals. GAO-15-561. Washington, D.C.: August 18, 2015. Conflict Minerals: Stakeholder Options for Responsible Sourcing Are Expanding, but More Information on Smelters Is Needed. GAO-14-575. Washington, D.C.: June 26, 2014. SEC Conflict Minerals Rule: Information on Responsible Sourcing and Companies Affected. GAO-13-689. Washington, D.C.: July 18, 2013. Conflict Minerals Disclosure Rule: SEC’s Actions and Stakeholder-Developed Initiatives. GAO-12-763. Washington, D.C.: July 16, 2012. The Democratic Republic of Congo: Information on the Rate of Sexual Violence in War-Torn Eastern DRC and Adjoining Countries. GAO-11-702. Washington, D.C.: July 13, 2011. The Democratic Republic of the Congo: U.S. Agencies Should Take Further Actions to Contribute to the Effective Regulation and Control of the Minerals Trade in Eastern Democratic Republic of the Congo. GAO-10-1030. Washington, D.C.: September 30, 2010.
Over the past decade, the United States and the international community have sought to improve security in the DRC, the site of one of the world's worst humanitarian crises. In the eastern DRC, armed groups have committed severe human rights abuses, including sexual violence, and reportedly profit from the exploitation of “conflict minerals,” particularly gold. Congress included a provision in the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act that, among other things, required the Securities and Exchange Commission (SEC) to promulgate regulations regarding the use of conflict minerals from the DRC and adjoining countries. The SEC adopted these regulations in 2012. The act also included a provision for GAO to annually assess the SEC regulations' effectiveness in promoting peace and security and report on the rate of sexual violence in the DRC and adjoining countries. In April 2017, GAO reported on companies' disclosures, in response to the SEC regulations, of conflict minerals they used in calendar year 2015 (see GAO-17-517R). In this report, GAO provides information about (1) the supply chain for ASM gold in the DRC; (2) efforts to encourage responsible sourcing of ASM gold; and (3) information on sexual violence in eastern DRC and neighboring countries published since August 2016, when GAO last reported on this topic. GAO reviewed U.S., UN, and nongovernment and international organizations' reports; interviewed U.S., DRC, and United Arab Emirates (UAE) officials and other stakeholders; and conducted fieldwork in Dubai, UAE. GAO is not making any recommendations. The supply chain for artisanal and small-scale mined (ASM) gold—a significant driver of the Democratic Republic of the Congo (DRC) economy—involves multiple actors, according to reports GAO reviewed and stakeholders interviewed (see figure). 
Officially, these actors are required to obtain DRC government authorization and pay provincial or national taxes to mine, trade, or export ASM gold, according to these sources. However, almost all DRC-sourced ASM gold is produced and traded unofficially and smuggled from the country, according to reports and stakeholders. Further, elements of the Congolese army, as well as illegal armed groups, frequently exploit ASM gold, often through illegal taxes on its production and transport, according to reports and stakeholders. The DRC government, the U.S. Agency for International Development (USAID), and international organizations have undertaken several initiatives to encourage the responsible sourcing of ASM gold—that is, the production and traceability of gold that has not financed conflict or human rights abuses such as sexual violence. For example, since 2015, USAID has worked with the DRC government to implement a traceability scheme for ASM gold and has worked with Tetra Tech and Partnership Africa Canada to scale up pilot initiatives for the production and sale of conflict-free ASM gold. However, the limited number of mines validated as conflict free and the relatively high mining-related official provincial taxes in the DRC, compared with taxes in neighboring countries, provide few incentives for responsible sourcing of ASM gold, according to reports GAO reviewed. In 2016, a USAID-funded, population-based study of the rate of sexual violence in parts of the eastern DRC estimated that 32 percent of women and 33 percent of men in these areas had been exposed to some form of sexual and gender-based violence in their lifetime. According to the United Nations, the DRC government has taken some steps to address sexual violence in the eastern region.
The military services have a long history of interoperability problems during joint operations. For example, the 1991 Persian Gulf War—a major joint military operation—was hampered by a lack of basic interoperability. The current certification requirement was established to help address these problems. The Joint Staff’s Director for C4 systems (J-6) is assigned primary responsibility for ensuring compliance with the certification requirement. DISA’s Joint Interoperability Test Command is the sole certifier of C4I systems. According to Joint Staff guidance, commanders in chief, the services, and DOD agencies are required to adequately budget for certification testing. They can either administer their own tests with Test Command oversight or ask the Test Command to administer them. Certification is intended to help provide the warfighter with C4I systems that are interoperable and to enable forces to exchange information effectively during a joint mission. Specifically, certification by the Test Command is confirmation that (1) a C4I system has undergone appropriate testing, (2) the applicable requirements for interoperability have been met, and (3) the system is ready for joint use. However, while a system may pass certification testing, it may not have been tested against all systems with which it may eventually interoperate. This is because some systems with which it must interoperate become available later, and commanders sometimes use systems in new ways that were not envisioned during testing. DOD guidance requires that a system be tested and certified before approval to produce and field it. 
Depending on the acquisition category and dollar threshold of the program, the approval authority may be the Under Secretary of Defense (Acquisition and Technology), with advice from the Defense Acquisition Board; the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence), with advice from the Major Automated Information System Review Council; or the DOD component head (such as the commander in chief of a unified combatant command, the head of a military service, or a DOD agency head). A DOD Directive established the Military Communications Electronics Board to provide guidance on interoperability issues referred to it by the Secretary of Defense and the Chairman of the Joint Chiefs of Staff. The Board addresses interoperability issues through two subpanels: (1) the Interoperability Improvement Panel monitors C4I interoperability issues surfaced by the commanders in chief, military services, and DOD agencies, and (2) the Interoperability Test Panel resolves testing disputes (such as appeals of Test Command certification decisions made by commanders in chief, military services, and DOD agencies). The Test Panel may waive the certification requirement to support developmental efforts, demonstrations, exercises, or normal operations. The waiver is not intended to be permanent and is typically granted for 1 year. Commanders in chief, services, and DOD agencies are generally not complying with the certification requirement. As a result, we found instances in which existing, newly fielded, and modified systems are not certified for interoperability. Test Command analysis showed that a significant number of existing C4I systems had not been submitted for certification as required. According to Test Command officials, as of December 1997, the DOD Defense Integration Support Tool database of C4 systems listed about 1,000 systems that may exchange information with another system. 
In addition, there are about 1,176 unclassified intelligence systems, according to the Office of the Assistant Secretary of Defense, C3I. Test Command officials said they did not know precisely how many of these systems require certification. Nor did the Office of the Assistant Secretary of Defense know which intelligence systems would require certification, because its officials were unable to determine which of these systems were outdated (i.e., legacy systems), stand-alone systems, or one-service-only systems. While the Test Command has certified increasing numbers of systems during the past 4 years, officials acknowledged that “they have not even begun to scratch the surface” of the universe of systems that may require testing and certification. During fiscal years 1994 through 1997, the Test Command certified 149 C4I systems. According to Test Command officials, DOD’s Defense Integration Support Tool database attempts to list all C4 systems and other mission-critical systems, but it does not contain all C4 systems or indicate whether the systems have been certified. According to DISA documentation, the purpose of the Defense Integration Support Tool is to support a DOD-wide information management requirement for data collection, reporting, and decision support in areas such as planning and interoperability. After discussions with DOD officials regarding this issue, DOD has recently included certification status as part of the database and, as of January 1998, 44 systems reflected this information. We recently reported in two separate reports that the Defense Integration Support Tool database is incomplete and inaccurate. In response to our October 1997 report, DOD acknowledged that this database is its official automated repository and backbone management tool for DOD’s inventory of systems. 
Accordingly, DOD said that it had begun to take major actions to enhance the database by instituting a validation and data quality program to ensure that the database contains accurate and complete data. DOD further stated that it would closely monitor this program to ensure that the data quality is at the highest level as required for reports to senior Defense managers and the Congress. Since this database is an important management tool, it is essential that it be complete and accurate. In several instances, new systems have been fielded without consideration of the certification requirement. Two recently fielded Air Force systems—a weather prediction system and a radar system—were not tested for certification by the Test Command, despite June 1996 memorandums from the Joint Staff stating that the service must plan for testing to ensure compliance with interoperability guidelines. Further, since 1994, the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence) has approved three of nine major automated information systems for production and fielding that had not been certified for interoperability. For example, the recently fielded Defense Message System was not certified by the Test Command. Test Command officials stated that the system has undergone some interoperability testing but, because of shortfalls, was not certified. A decision was made to field the system while the shortfalls are resolved. Test Command officials believe the system will eventually be certified. No newly developed systems purchased through the Command and Control Initiatives Program were tested by the Test Command. (This program allows commanders in chief to purchase low-cost improvements to their command and control systems.) According to DISA officials, DISA had assessed these systems’ interoperability requirements and reminded the users to submit the systems for testing. 
In addition, during the last 3 years, no systems purchased through the Advanced Concept Technology Demonstrators program were tested and certified. (This program allows a new capability to be quickly developed, purchased, and exercised in the field before an acquisition commitment is made.) According to Test Command officials, previously certified systems that were later modified are not consistently submitted for recertification as required. Although Test Command officials do not know the exact number of modified systems that require recertification, they are aware of several systems—such as the Navy’s AEGIS shipboard weapon system and the Air Force’s Airborne Warning and Control System. Joint Staff officials believe that, although the certification requirement is outlined in several DOD and Joint Staff guidance documents, some system managers are unaware of it. In a study chartered by J-6 and completed in January 1996, only 12 of 424 (less than 3 percent) surveyed acquisition managers and Defense System Management College students knew about the DOD and Joint Staff interoperability requirements. The study team found that this lack of knowledge prevented users from placing interoperability in the initial requirements documents and acquisition managers from building interoperability into approved programs. As a result, the Joint Staff began an effort in 1996 to better educate system managers about the requirement. However, the study points out that education is not a panacea for all interoperability problems. Our analysis showed that some DOD organizations, although aware of the requirement, did not submit fielded systems for testing. For example, some program managers did not submit their modified systems for certification because they believed their design, although fielded, was not mature enough for testing. The program managers did not seek a waiver for their systems and ignored the certification requirement. 
Test Command officials told us that they lack the authority to compel program managers to bring their systems in for testing and must rely on the managers’ cooperation. In addition, in fiscal year 1995, only three intelligence systems were certified by the Test Command. Because Test Command officials believed that DOD’s intelligence community was ignoring the certification requirement, in 1996 the Command negotiated an agreement with DOD’s Intelligence Information Systems Management Board (which has responsibility for a portion of intelligence systems) to facilitate better participation in the certification process. In fiscal year 1997, the number of intelligence systems tested and certified increased to 14. Test Command officials believe that the increase is a direct result of the agreement. Further, according to Test Command officials, DOD officials do not always budget the resources needed for interoperability testing as required by Joint Staff guidance. In certain cases, the services do not budget sufficient funds to cover secondary C4I systems that are used to test the primary C4I system for interoperability because the services cannot afford to pay for all the testing DOD policy requires. For example, the services are required to provide secondary systems for 10 tactical data link interoperability tests a year. In this case, however, according to a Test Command official, the Army budgets for only seven or eight tests a year. The services are responsible for acquiring systems that satisfy service-unique requirements, and this responsibility sometimes takes precedence over satisfying joint interoperability requirements. In his 1996 report to the Secretary of Defense, the Chairman of the Joint Chiefs of Staff recommended that funding for DOD C4I systems be reviewed, since the services’ funding decisions may not further DOD’s overall goal of promoting C4I joint interoperability. 
Finally, the various approval authorities are allowing some new systems to be fielded without verifying their certification status. According to a Joint Staff J-6 spokesman, the Joint Staff J-6 representative is to ensure that interoperability certification is addressed at the approval authority acquisition meetings. If the Joint Staff J-6 representative is unable to attend these meetings, the issue of certification is not raised. However, J-6 coordination is obtained on all acquisition decision memorandums granting production and fielding approval. Nevertheless, systems receive approval for production and fielding even though they may not have been certified or obtained waivers. In several instances, the Test Command identified interoperability problems in systems that DOD organizations had not submitted for testing. The following are examples: In 1996, the Test Command expressed concerns to the Air Force that its Joint Tactical Information Distribution System, a computer terminal used to provide surveillance data on F-15 aircraft, had not been certified. The system (a proof of concept demonstration) had operated for 3 years. According to a Test Command memorandum, Command representatives witnessed numerous interoperability problems caused by this system during joint exercises. The memorandum indicated that if the exercise had been a real world situation, the system’s interoperability problems could have resulted in numerous deaths of pilots and enemy penetrations of U.S. airspace. In a written response, the Air Force stated that it disagreed with the Test Command’s assessment of the problems. Furthermore, the Air Force said that certification of the system was not the best use of resources because the Air Force planned to eventually replace it. According to Test Command officials, the system is scheduled for testing in 1998. Still not certified, the system has been operational for over 1 year since the Air Force’s response. 
Test Command officials have been unable to persuade the Navy’s AEGIS program office to submit all fielded versions of the ship’s weapon system for interoperability testing. Command representatives have observed the weapon system experiencing significant interoperability problems in several recent joint exercises. The Test Command is aware of five fielded versions of AEGIS software, and the program office states there are many more. However, the Test Command has tested and certified only the oldest version (in May 1995), the most basic of the five versions. The need for interoperability certification testing of the uncertified versions has been discussed at joint interoperability meetings and with DISA. The responsible DISA official requested, under Test Command letterhead, that AEGIS submit uncertified versions for joint testing. However, according to AEGIS program officials, none of these versions has been jointly tested because the newer versions either have not yet been tested with other Navy-only systems or are not yet demonstrating adequate interoperability performance in testing with Navy-only systems. The Test Command has been unable to persuade users to test DOD’s Air Defense System Integrator, which provides tactical data link translation and message-forwarding functions. The system has been acquired outside the normal DOD acquisition process. About 30 versions of this system have been fielded; none has been jointly tested. According to Test Command officials, the system is experiencing significant interoperability problems because it does not conform to required standards. Interoperability problems with this system could result in hostile systems leaking through U.S. defenses or friendly systems being attacked. Without certification of the interfaces that translate and forward messages among systems, for example, the proper tracking and targeting information may not be provided to our theater air missile defense system. 
At several 1997 meetings with representatives from all the services, the Joint Staff, and the Test Command, problems with the system were discussed. Solutions are still being developed and implemented. Noncompliance with interoperability testing and certification stems from weaknesses in the certification process itself. For example, DOD lacks a complete and accurate listing of C4I systems requiring certification and a plan to prioritize systems for testing. As a result, the Test Command may not be focusing its limited resources on certifying the most critical systems first. The process also does not include a mechanism to notify the services about interoperability problems identified in joint exercises, and the Test Command has only recently begun to contact the services regarding the noted problems. Finally, according to a Test Panel official, the Panel does not have a formal process to inform DOD organizations that systems with expired waivers require an extension or certification. Neither the Joint Staff nor DISA has given the Test Command a priority list for testing C4I systems. As a result, the Command tests systems without regard to which systems should receive high priority. Test Command officials believe that such a list would help them better plan their test schedule. Generally, the Command develops a master test schedule based on the notification of systems ready for testing by the commanders in chief, services, and DOD agencies. As these notifications are received, the Command updates its schedule. Furthermore, DOD has not identified the exact number of systems to be certified. A Command official told us that, even if systems are identified, it is difficult to test all C4I systems required to be certified. According to Test Command officials, they are able to test no more than 200 systems per year. 
Our analysis shows that the Command generally reviews about 100 systems per year and in 1997 certified 44 individual systems for interoperability (not including systems receiving multiple certifications due to modifications or testing with additional systems). According to the official, a list prioritizing systems for testing would help the Command use its scarce resources to test the most important systems first. In June 1996, the Military Communications Electronics Board reviewed existing command and control systems submitted by the services and determined that 42 were crucial to the needs of military commanders. Our analysis showed that, as of October 1997, 23 had not been tested or certified. According to Test Command officials, the 23 systems were not certified for various reasons. The officials stated that they did not know about 13 of the systems; 7 are scheduled or are to be scheduled for testing, but the schedules could slip; 2 were not submitted for testing by the commanders in chief, service, or DOD agency because 1 is a low priority for testing and the other needs redesign (although both have been operational for several years); and 1 was considered too immature to test. Without an approved DOD-wide testing strategy, the Test Command’s scarce resources may not be best used to test the right C4I systems at the right time. Joint Staff, Test Command, and commander in chief officials believe that one area that should receive high priority in any plan for interoperability testing is theater air and missile defense systems. This functional area is heavily dependent on systems being interoperable. According to Test Command officials, about 100 major systems are involved in theater air and missile defense, and about 45 percent of these have not been tested or certified for interoperability. DOD officials stated that significant interoperability problems in these defense systems could have dire consequences for joint and coalition forces. 
Some joint exercises conducted during the last 2 years have demonstrated the need for better interoperability in this functional area. Interoperability problems in these exercises resulted in the simulated downing of friendly aircraft in one exercise and in the nonengagement of hostile systems in another. Test Command officials stated that they do not generally advise services’ system program managers on interoperability problems identified in exercises. While not required to do so, the Test Command is in the best position to advise the commanders in chief, services, and DOD agencies because according to Command officials they discover, evaluate, and document these problems. As part of its mission and apart from certification testing, the Command provides operational support and technical assistance to the commanders in chief, the services, and DOD agencies during exercises. In reports summarizing the results of four joint exercises during 1996 and 1997, the Test Command noted that 15 systems experienced 43 “significant interoperability problems”—defects that could result in the loss of life, equipment, or supplies. The vast majority of these problems were caused by system-specific software problems. Specific problems experienced included failure to accept changes in mislabeled data identifying a friendly aircraft as a hostile aircraft, thereby causing the simulated downing of a commercial airliner; excess messages overloading systems, causing system crashes and the loss of command and control resources during critical periods; improper track identification, creating the potential for either a hostile system to penetrate defenses or a friendly system to be inadvertently destroyed; and duplicate tracks distorting the joint tactical picture, denying vital information to battle managers and shooters. In table 1, we list the 15 systems that experienced significant problems and indicate their certification status. 
When the services’ program managers are not advised, significant interoperability problems may arise in subsequent exercises and operations. According to Test Command officials, after our inquiries the Command began exploring ways to formally track and follow up on these problems. After our visit, Command officials stated they were beginning to identify the problem systems and contact the program managers to request that systems be retested. However, as of December 1997, Command officials had contacted only three system managers, and none of the systems have been tested. According to a Test Panel official, the Panel does not have a formal process to ensure that fielded systems with expired waivers are tested. As a result, most systems with expired waivers were allowed to operate without testing or an extension of the waiver. According to Panel documents, 13 waivers have been granted since May 1994. Of the 13 waivers granted, 3 have not expired and 1 was recently extended after the original waiver had been expired for 4 months (even though the system has caused interoperability problems). The remaining nine waivers have expired. Of these nine, only three are for systems that have had some interoperability testing and certification by the Test Command. Of the remaining six systems with expired waivers, two were expired for less than a year, two were expired for more than a year, and two were expired for more than 2 years. Commanders in chief, the services, and DOD agencies are generally not complying with the C4I certification requirement. Inadequate compliance with this requirement increases the likelihood that C4I systems will not be interoperable, thereby putting lives, expensive equipment, and the success of joint military operations at greater risk. Improvements to the certification process are needed to provide better assurance that C4I systems most critical to joint operations are certified for interoperability. 
Better information is needed to track the status of waivers. Finally, the risks associated with operating uncertified systems in joint operations are heightened when systems are permitted to proceed into production and fielding without full consideration of the certification requirement. To ensure that systems critical to effective joint operations do not proceed to production without due consideration given to the need for interoperability certification, we recommend that the Secretary of Defense require the acquisition authorities to adhere to the requirement that C4I systems be tested and certified for interoperability prior to the production and fielding decision unless an official waiver has been granted. To improve the process for certifying C4I systems for interoperability, we recommend that the Secretary of Defense, in consultation with the Chairman of the Joint Chiefs of Staff, direct (1) the service secretaries, in collaboration with the Director of DISA, to verify and validate all C4 data in the Defense Integration Support Tool and develop a complete and accurate list of C4I systems requiring certification and (2) the Director of DISA to ensure that each system’s certification status is added to the Defense Integration Support Tool and that this database is properly maintained to better monitor C4 systems for interoperability compliance. We also recommend that the Secretary of Defense request that the Chairman of the Joint Chiefs of Staff direct the Joint Staff (in collaboration with the commanders in chief, the services, and the Director of DISA) to (1) develop a process for prioritizing C4I systems for testing and certification and (2) develop a formal process to follow up on interoperability problems observed during exercises, report the problems to the relevant DOD organization, and inform organizations that the systems are required to be tested for interoperability. 
We recommend that, to improve DOD’s information on the status of waivers from interoperability certification, the Chairman of the Joint Chiefs of Staff establish a system to monitor waivers. The system should inform DOD organizations when waivers expire and request that they either seek an extension of the waivers or test their systems for interoperability. In written comments on a draft of this report, DOD generally concurred with all of our recommendations, noting that a number of efforts are underway to improve the interoperability certification process. To improve the process, DOD is revising relevant policy and procedures to enhance their adequacy (in terms of clarity, enforcement, and integration of effort) and is improving the accuracy and utility of its Defense Integration Support Tool database. Agreeing with the need to prioritize systems for testing, DOD stated it will develop a process to set priorities for testing and certification. To follow up on interoperability issues identified during exercises, DOD intends to use several sources of information to develop a formal process to ensure identified problems are adequately addressed by the appropriate organizations. DOD also intends to revise the charter of the Test Panel to require quarterly review of waivers from certification testing. DOD’s comments are reprinted in appendix II. DOD also provided technical comments, which we have incorporated where appropriate. To determine whether DOD organizations were complying with the certification requirement, we analyzed DOD data on C4I systems to identify systems’ certification status. Specifically, we obtained a listing of all C4 systems in the Defense Integration Support Tool from DISA Headquarters in Arlington, Virginia, and the number of unclassified intelligence systems from the Office of the Assistant Secretary of Defense, C3I in Arlington, Virginia.
We compared the systems on these lists with a list of all systems certified from October 1993 through September 1997 obtained from the Joint Interoperability Test Command in Fort Huachuca, Arizona. We also obtained a list of C4I systems included in Command and Control Initiatives Program budget proposals from October 1994 through September 1997 and a listing of C4I systems included in DOD’s Advanced Concept Technology Demonstrators program. We compared these lists with the Test Command’s list of certified systems. We did not verify the accuracy or validity of any DOD list. We also obtained, reviewed, and analyzed DOD policy, Joint Staff instructions, and other documents regarding compatibility, interoperability, and integration of C4I systems. We obtained these documents and discussed interoperability issues in the Washington, D.C., area in interviews with cognizant officials from the Office of the Deputy Under Secretary of Defense (Advanced Technology); the Office of the Assistant Secretary of Defense, C3I; the Office of the Director, Operational Test and Evaluation; the Joint Chiefs of Staff Directorate for C4 (J-6); the Directorate for Force Structure, Resources and Assessment (J-8); and DISA. In addition, we reviewed documents and interviewed cognizant officials regarding interoperability issues, including certification of C4I systems, from the U.S. Atlantic Command, Norfolk, Virginia; U.S. Central Command, MacDill Air Force Base, Florida; U.S. Pacific Command, Camp Smith, Hawaii; U.S. European Command, Germany; the Naval Center for Tactical Systems Interoperability, San Diego, California; U.S.
Army Communications and Electronics Command, Fort Monmouth, New Jersey; and individual system program offices or support activities in each of the military services, including the Navy AEGIS program office, Dahlgren, Virginia; the Air Force Air Combat Command Directorate of Operations for Command and Control and Intelligence, Surveillance, and Reconnaissance, Langley Air Force Base, Virginia; the Army Communications and Electronics Command Software Engineering Center, Fort Monmouth, New Jersey; and the Naval Air Warfare Center, Weapons Division, Point Mugu, California. To determine whether improvements were needed in the certification process, we interviewed Test Command officials on interoperability and certification issues, including testing priorities and exercise problem follow-up, and compared the Command’s list of certified systems from October 1993 through September 1997 with a June 14, 1996, list of DOD’s crucial C2 systems. We also reviewed reports on lessons learned and demonstrations and exercises obtained from the Joint Staff J-8 and the Test Command, respectively, to identify C4I systems with interoperability problems. We then compared the problem C4I systems with the Test Command’s certification list to analyze whether the systems were certified, uncertified, or modified and not recertified. We also interviewed officials and obtained and analyzed waiver documents from the Military Communications Electronics Board’s Interoperability Test Panel. We reviewed the waivers to determine the reasons for them and the time period involved. 
Finally, to determine initiatives that affect interoperability, we reviewed DOD’s C4I for the Warrior concept; the Defense Information Infrastructure Master Plan; the 1996 assessment of combat support agencies report by the Chairman of the Joint Chiefs of Staff; the 1996 Command, Control, Communications, Computer, Intelligence, Surveillance, and Reconnaissance Task Force reports; and the Levels of Information System Interoperability reports by the Task Force. We conducted our review from January 1997 to January 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force and other appropriate congressional committees. Copies will also be made available to others on request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. Improving ways of complying with the certification process alone will not solve all of the issues related to interoperability. The Department of Defense (DOD) has a number of initiatives underway that address various aspects of interoperability: the C4I for the Warrior concept; the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance Architecture Framework; the Defense Information Infrastructure strategy; and the Levels of Information Systems Interoperability initiative. Initiated in 1992, the C4I for the Warrior concept is to provide a global command, control, communications, computer, and intelligence system that directly links and supports the combat troops of all services who engage in military operations. The system is to display, anywhere around the world, a real-time, true picture of the battlespace, detailed mission objectives, and a clear view of enemy targets. This advanced technology concept is to support DOD’s vision for the evolution of the U.S.
armed forces’ capabilities to the year 2010. The Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance Architecture Framework, published in June 1996 by the DOD Integration Task Force, is to address a DOD-wide lack of a shared understanding of the architecture process and insufficiently precise terminology. According to the Task Force, architectures can be a key factor in guiding and controlling the acquisition and evolution of interoperable and efficient C4I systems. If adopted, the framework will provide a common approach for the commanders in chief, the services, and DOD agencies to follow in developing their C4I architectures. The Task Force report stated that the framework has, in part, the ultimate potential of “facilitating, improving, and ensuring compatibility, interoperability, and integration among command, control, communications, computers, intelligence, surveillance, and reconnaissance capabilities.” While a final report was issued in June 1996, the framework has not been implemented as DOD policy. Currently, adoption of the framework in DOD policy is not planned, according to a Joint Staff official. A current version of the framework itself was issued in July 1998; however, a J-6 official expects full implementation to take 1 to 2 years after its publication. DOD issued a Defense Information Infrastructure master plan in November 1994 to integrate its communications networks, computers, software, databases, applications, weapon system interfaces, data, security services, and other services that meet DOD’s information processing and transport needs. The plan is updated periodically and provides a description of the Defense Information Infrastructure’s major components. The infrastructure is largely an unintegrated collection of systems with unique characteristics. These systems support a hierarchical, vertical military chain of command structure.
They were not designed to support joint operations and are therefore limited when information requirements are based on horizontal or functional sources. The current infrastructure inhibits the interoperability necessary to give commanders a unified picture of the battlespace, reduces the ability to provide links between the battlefield and the support base, and limits connection to the U.S. industrial base. One part of the Defense Information Infrastructure plan is to establish a common operating environment that provides integrated support services and corresponding software for standard functional applications. The idea for the common operating environment originated with an observation about command and control systems. Certain functions (mapping, track management, and communication interfaces, for example) are so fundamental that they are required for virtually every command and control system. Yet, in stand-alone systems across DOD, these functions are built over and over again in incompatible ways, even when the requirements are the same or vary only slightly between systems. The common operating environment is intended to standardize the underlying computing infrastructure used to process information. It is to improve interoperability by creating architecture principles that, if adhered to, will allow for the sharing of software products, services, and information across the Defense Information Infrastructure. Both the Defense Information Infrastructure plan and the common operating environment are long-term strategies that extend through the year 2010. Finally, DOD’s 1993 Levels of Information Systems Interoperability initiative is to improve C4 and intelligence systems’ interoperability. System developers are to use this tool to assess interoperability, determine capabilities needed to support system development, and determine the degree of interoperability needed between C4I and other systems. The tool has not yet been fully tested or implemented.
Major testing is planned for July 1998. Concerns regarding the success of some of these initiatives have been expressed by various DOD organizations. Specifically, in its June 1996 report, the DOD Integration Task Force stated that compliance with the common operating environment standards will not ensure that systems will be interoperable because, in part, it does not eliminate the problems of data translation, remapping, and duplication. Further, Test Command officials and others believe the DOD Information Infrastructure and common operating environment requirements need refinement before they can ensure interoperability. For example, these officials believe that the level of compliance with the infrastructure and the common operating environment must be higher than currently required to ensure interoperability. In addition, in a December 1996 report, the Chairman of the Joint Chiefs of Staff listed several challenges to achieving interoperability through DOD’s initiatives, including security of the infrastructure, overall integration of the DOD organizations into a common operating environment, and the lack of a formal enforcement mechanism to ensure the services conform to the standards. George Vindigni Yelena K. Thompson David G. Hubbell
Pursuant to a congressional request, GAO reviewed: (1) whether Department of Defense (DOD) organizations are complying with interoperability testing and certification requirements for command, control, communications, computers, and intelligence (C4I) systems; and (2) what actions, if any, are needed to improve the current certification process. GAO noted that: (1) DOD does not have an effective process for certifying existing, newly developed, and modified C4I systems for interoperability; (2) many C4I systems have not been certified for interoperability and, in fact, DOD does not know how many require certification; (3) improvements to the certification process are needed to provide DOD better assurance that C4I systems critical to effective joint operations are tested and certified for interoperability; (4) DOD organizations are not complying with the current interoperability testing and certification process for existing, newly developed, and modified C4I systems; (5) according to Test Command officials, many C4I systems that require interoperability testing have not been certified or have not received a waiver from the requirement; (6) the extent of this noncompliance could have far-reaching effects on the use of such systems in joint operations; (7) noncompliance with interoperability testing and certification stems from weaknesses in the certification process itself; (8) while DOD guidance requires that all new systems be certified or obtain a waiver from certification testing before they enter production and fielding, systems proceed to these latter acquisition stages without being certified; (9) this occurs, in part, because Defense Information Systems Agency (DISA) Joint Interoperability Test Command officials lack the authority to compel DOD organizations to submit their C4I systems for testing; (10) although DOD guidance spells out a specific interoperability certification requirement, many DOD organizations are unaware of it; (11) others simply ignore 
the requirement because it is not strictly enforced or because they do not adequately budget for such testing; (12) another fundamental weakness in the process is the lack of a complete and accurate listing of C4I systems requiring certification and a plan to prioritize systems for testing; (13) as a result, the Test Command may not be focusing its limited resources on certifying the most critical systems first; (14) prioritization is important since the Command has reviewed only about 100 systems per year, and a requirement for recertification of modified systems continually adds to the number of systems requiring certification; and (15) the process does not include notifying the services about interoperability problems, and the Test Command has only recently begun to contact the services regarding the noted problems.
Various DOJ and federal judiciary stakeholders play key roles in the federal criminal justice process, and as such, they can also have key roles in considering whether to use incarceration alternatives for a given offender or inmate. For example, in the course of the federal criminal justice process, a U.S. attorney is involved in the process of investigating, charging, and prosecuting an offender, among other responsibilities. Federal defenders are called upon to represent defendants who are unable to financially retain counsel in federal criminal proceedings. The Probation and Pretrial Services Office (PPSO), an office within the judiciary, also has responsibilities including supervising an offender pretrial or after conviction. Likewise, federal judges are responsible for determining an offender’s sentence, and, in the case of incarceration, BOP is responsible for caring for the inmate while in custody. Table 1 describes these roles in more detail. Federal laws and guidelines determine what, if any, incarceration is appropriate for offenders. Prior to passage of the Sentencing Reform Act of 1984, federal judges generally had broad discretion in sentencing. Most criminal statutes provided only broad maximum terms of imprisonment. Federal law outlined the maximum sentence, federal judges imposed a sentence within a statutory range, and federal parole officials eventually determined the actual duration of incarceration. The Sentencing Reform Act of 1984 changed the federal sentencing structure by abolishing parole for federal offenders sentenced after its effective date, and subsequent legislation established mandatory minimum sentences for many federal offenses. The Sentencing Reform Act of 1984 also established the independent USSC within the judicial branch and charged it with, among other things, developing federal sentencing guidelines.
The guidelines specify sentencing guideline ranges—a range of time (in months) that offenders should serve given the nature of their offense and other factors—but also permit sentences to depart upward or downward from guideline ranges because of aggravating or mitigating circumstances. In 2005, the Supreme Court found the Sentencing Guidelines, which had previously been binding for federal judges to follow in sentencing criminal defendants, to be advisory in nature. Regardless of the guidelines’ advisory nature, judges are still required to calculate them properly and to consider the guideline ranges as well as the nature and circumstances of the offense, the defendant’s history, and the need for deterrence, among other sentencing goals. However, sentencing and, if appropriate, incarceration are two of multiple potential steps in the federal criminal justice process. There are also opportunities to use alternatives to incarceration for certain offenders throughout the process, as illustrated in figure 1. As figure 1 shows, alternatives to incarceration are available at various steps in the federal criminal justice process from charging and prosecution through incarceration—the steps in the process included in the scope of our review. Multiple DOJ components, as well as the federal judiciary, have specific roles and responsibilities in providing these alternatives. Of the various incarceration alternatives available at the charging and prosecution or the sentencing and incarceration stages, court-involved pretrial diversion practices, specifically, can be used only in those districts that have decided to adopt them. Tables 2 and 3 provide details on the pretrial alternatives to incarceration and those available at sentencing and after incarceration, respectively, as well as the federal stakeholders or entities involved and their roles. BOP is responsible for the custody and care of federal inmates.
According to BOP data, 81 percent of these inmates are confined in BOP-operated correctional institutions or detention centers. The remainder are confined in secure privately managed or community-based facilities, local jails, or in home confinement. BOP itself houses inmates in its 122 federal institutions and about 180 residential reentry centers (RRC). The institutions operate at different security-level designations—minimum, low, medium, and high for institutions housing male inmates, and minimum, low, and high for institutions housing female inmates. Of BOP’s 122 facilities, 39 are minimum and low-security institutions. The security-level designation of a facility depends on the level of security and staff supervision that the institution is able to provide, such as the presence of security towers; perimeter barriers; the type of inmate housing, including dormitory, cubicle, or cell-type housing; and the inmate-to-staff ratio. Additionally, BOP designates some of its institutions as administrative facilities, which house male and female inmates and specifically serve inmates awaiting trial, those with intensive medical or mental health conditions, or those who are deemed extremely dangerous, violent, or escape-prone, regardless of the level of supervision these inmates require. Table 4 depicts the number and percentage of inmates in the custody of BOP, by security level of the institution, as of February 27, 2016. As table 4 shows, more than half of BOP’s inmates are incarcerated in low and medium security institutions. DOJ and court officials we interviewed told us they consider various factors when deciding whether to use an alternative to incarceration for certain federal offenders in the early stages of the federal criminal justice process.
Across all the alternatives available at or before sentencing, the 63 federal stakeholders in the 11 selected districts with whom we spoke (11 federal prosecutors, 25 judges, 12 defense counsel, and officials in 15 PPSOs) most commonly reported that they considered whether the crime involved any acts of violence and the offender’s role in the crime. These stakeholders reported that such alternatives are generally targeted to non-violent, low-level offenders. These stakeholders also reported that other factors, such as the nature of the crime, the offender’s criminal history, and mental health or drug abuse issues, influenced their decisions, but the extent to which these specific factors were considered varied by the type of alternative under consideration. Table 5 below and the discussion that follows identify and describe the most commonly considered factors among the federal stakeholders we interviewed, by type of alternative. Case referral to state and local prosecutors: Eleven federal prosecutors in 11 districts with whom we spoke reported that they consider the seriousness of the offense, as federal prosecution is typically reserved for cases that are considered higher level or more serious, such as those involving drug cartels, racketeering, and conspiracy. Prosecutorial guidelines establish the thresholds for prosecution, which are set at the district level; therefore, thresholds may vary from district to district. Some prosecutors also reported considering the amount of time and resources that would have been required to prosecute these low-level, nonviolent cases. For example, four federal prosecutors noted that preparing for trial can be time-consuming, so referrals to state prosecutors can help reserve resources for the higher level or more serious cases.
Pretrial Release: The 12 magistrate judges with whom we spoke most frequently reported considering the nature of the crime when considering whether to release an offender before trial. Magistrate judges also frequently reported factors such as the offender’s criminal history (11 of 12), supporting ties of family and community (11 of 12), past conduct while on supervised release, such as probation (11 of 12), the offender’s employment status (10 of 12), and the offender’s drug addiction or abuse and mental health issues (9 of 12). For example, 2 of these 9 judges indicated that if the offender has a drug or mental health problem, they consider alternatives such as a drug or mental health treatment program instead of incarceration, or they establish conditions such that the offender is regularly tested for drugs or receives counseling while on pretrial release. Some magistrate judges (6 of 12) also stated they rely on recommendations from the PPSO officer in making their decision related to pretrial release. For example, 4 magistrate judges with whom we spoke stated that they rely heavily on these recommendations when deciding to release or detain an offender because the PPSO officer generally conducts a thorough pretrial investigation of the offender. Three magistrate judges also reported using information provided by others, such as the USAO or federal defenders, on the nature and severity of the crime or any extenuating circumstances, such as mental illness or drug addiction, in their decisions. Title 9 Pretrial Diversion Program: Ten of the 11 federal prosecutors with whom we spoke—who have discretion over whether to use Title 9 pretrial diversion for offenders—noted that they most frequently consider the offender’s criminal history and the nature or seriousness of the offense.
In particular, they reported that, generally, the program is used for first-time offenders and offenders who have committed low-level, nonviolent offenses or white collar crimes such as Social Security or mail fraud. In districts that have other alternatives, such as court-involved pretrial diversion practices, 2 of the 6 prosecutors we interviewed stated that they prefer to use these other alternatives because they provide more intensive services and supervision compared to Title 9 Pretrial Diversion. For example, a prosecutor with whom we spoke indicated that Title 9 Pretrial Diversion is not widely used because the court-involved practice provides more rigorous supervision such as weekly contacts with offenders. What is a court-involved pretrial diversion practice? In addition to Title 9 Pretrial Diversion, federal criminal justice stakeholders within some judicial districts have voluntarily established court-involved pretrial diversion practices or specialty courts that handle specific offender populations such as veterans, or those with specific problems such as substance abuse or mental health issues that appear to be the root cause of their criminal activity. Unlike traditional diversion, court-involved pretrial diversion practices vary in structure and do not uniformly result in the avoidance of a federal conviction upon successful completion. While some provide for a full dismissal of charges, others may provide for a sentence of probation or little to no incarceration. Also, unlike Title 9 Pretrial Diversion, courts are primary actors in these practices and must participate in their creation. Court-Involved Pretrial Diversion Practices: As described earlier, to obtain perspectives on court-involved pretrial diversion practices, we spoke with stakeholders in 6 districts that use such practices, and 5 districts that do not. 
Within the 6 selected districts that use court-involved pretrial diversion practices, the 13 judges, 6 prosecutors, 6 defense counsel, and officials in 9 PPSOs with whom we spoke identified a number of factors that led their districts to adopt such practices. Most frequently, they reported that three particular factors influenced their decision to adopt such alternatives. First, they reported that an awareness of effective state-level pretrial alternative programs influenced their decision. For example, 5 judicial branch officials (3 judges, 1 federal defender, and 1 PPSO officer) with whom we spoke in 3 of the 6 districts explained that their awareness of state-level pretrial diversion programs helped them understand how to replicate a similar program at the federal level. Further, 4 of the stakeholders with whom we spoke in 2 districts indicated that some federal judges who were former state judges involved in state pretrial diversion programs brought their past experience to the federal judicial system. Second, 11 stakeholders representing a mix of judges, federal defenders, PPSO, and USAO staff with whom we spoke in 5 of the 6 districts indicated there is a perception that offenders may commit crimes as a result of addiction to drugs, and that if the addiction were addressed, they would be unlikely to continue to commit crimes. For instance, among the judges with whom we spoke in the 6 districts, 3 indicated that many of the offenders they see in court have a substance abuse problem, which is generally linked to the crimes they commit. Given this, these judges explained that they believe that incarcerating these offenders would probably not resolve that problem. Third, 3 defenders and 3 prosecutors with whom we spoke identified a perception that continuing to prosecute and incarcerate low-level, nonviolent repeat offenders drains limited federal resources as a factor influencing their decision to establish a pretrial diversion program. 
These stakeholders explained that trial preparation for such prosecutions can be time-consuming and costly. Five of these 6 stakeholders noted that court-involved pretrial diversion practices can be mutually beneficial to the offender and the district by providing an opportunity for the offender to get help to change their lives for the better while helping the district to focus resources on the most serious crimes. While stakeholders in our 6 selected districts that use court-involved pretrial diversion practices identified common reasons for adopting such practices, we found that the factors stakeholders in these districts consider when determining whether to use this alternative for a given offender may vary depending on the specific criteria and design of the respective practices. For example, in the Western District of Washington, stakeholders reported that they consider factors including whether the offender’s criminal behavior is motivated by substance abuse issues, whether the offender is a resident of the district, and the number of prior felony convictions they have had, but admission to the program is not limited to a specific type of crime. In contrast, stakeholders in the Southern District of California reported that they consider similar factors but their program is specifically targeted to young offenders charged with alien smuggling and drug trafficking offenses. Within the 5 districts that have not adopted court-involved pretrial diversion programs, 3 judges, 2 prosecutors, 3 defense counsel, and officials in 2 PPSOs with whom we spoke most frequently identified a lack of interest or need for such programs as reasons why their districts have not adopted them. Some stakeholders also reported not having eligible or qualified offenders (5 prosecutors and 2 judges), a lack of resources to operate such programs (2 judges, 1 PPSO, and 2 prosecutors), or having other alternative programs available (2 prosecutors and 2 defenders).
For example, stakeholders in all 5 districts explained that they do not have enough low-level, nonviolent offenders who would qualify for a court-involved pretrial diversion program to make operating a program worthwhile. Furthermore, according to 5 prosecutors and 2 judges we met with in these districts, their districts’ prosecutorial priorities focus on higher level offenders who would not qualify for this type of program. Additionally, 2 judges, 2 prosecutors, and 1 PPSO officer in 4 districts cited a lack of resources to operate a court-involved pretrial diversion program, as current caseloads are already extensive. Sentencing Alternatives: For those offenders who do not go through a pretrial diversion program and are instead convicted through the normal criminal justice process, district judges may hand down sentences that involve incarceration or alternatives to incarceration, such as probation. When asked about what factors they consider when determining sentencing for an offender, 8 of 13 district judges we spoke with in our selected districts stated that they consider the federal sentencing guidelines in their decisions. The sentencing guidelines generally take into account the seriousness of the offense and the offender’s criminal history; however, because the guidelines are advisory, 6 of 13 judges noted they may choose to deviate from the guidelines and instead consider other options for sentencing, such as probation. Other common factors the judges reported considering included the offender’s personal situation, such as family and community ties (8 of 13 judges); whether the offender had a drug addiction problem (7 of 13 judges); education level and employment status (6 of 13 judges); and the recommendation from PPSO officers (6 of 13 judges). The judges also reported that the manner in which they consider these factors is highly case specific and individualized.
For example, of the 8 judges who consider family support as a factor in deciding whether to use a sentencing alternative, 3 explained that if the offender has strong family ties, probation would probably be a better sentence than incarceration so that the offender could get the needed support from family. Additionally, when asked a general question about what factors they consider when deciding on a sentence for an offender, 7 district judges explained that they base decisions about whether to sentence offenders to incarceration alternatives on their professional judgment regarding whether the offenders seem receptive to changing their criminal ways and working toward a better life without crime. For example, one district judge explained that he considers whether imposing a minimum of 12 months of incarceration will help to rehabilitate or deter an offender from committing future crimes as compared to offering them greater leniency through an alternative, such as probation. As figure 2 shows, based on data from AOUSC, DOJ, and the USSC on the use of alternatives to incarceration at or before sentencing, the overall use of these alternatives nationally and across the subset of districts that have adopted court-involved pretrial diversion practices has been largely consistent during the respective time periods for which data are available—fiscal years 2009 through 2015 for alternatives at sentencing; fiscal years 2012 through 2015 for pretrial release; and fiscal years 2014 through 2015 for referrals to another jurisdiction. However, in performing our analysis of the data on the use of alternatives over time, we found that DOJ’s data on pretrial diversions were unreliable for two reasons. First, DOJ’s pretrial diversion data do not distinguish between Title 9 pretrial diversions and diversions that were the result of a court-involved pretrial diversion practice. As previously described, Title 9 pretrial diversions are at the discretion of the U.S.
Attorney, divert offenders from prosecution into a program of supervision by the PPSO, and, if successfully completed, can result in the offender not being prosecuted or a dismissal of charges. Court-involved diversion practices involve additional stakeholders—including federal judges and defense counsel—with participation generally determined by all stakeholders. Unlike Title 9 pretrial diversions, participants in court-involved diversions generally meet regularly with court officials to discuss progress. Moreover, if successful, participants in court-involved diversions may avoid prosecution or have charges dismissed, like those in Title 9 pretrial diversion, but may also receive a reduced sentence. Therefore, given the differences between these types of diversion in terms of the stakeholders involved, the level of supervision provided to offenders, and the outcomes successful completion can lead to, they are each unique types of diversion. Under DOJ’s current data entry process, however, while DOJ has counts of cases that were diverted pretrial overall, it cannot determine whether the diversions were through Title 9 pretrial diversion or a court-involved pretrial diversion practice. According to EOUSA officials, DOJ lacks detailed data on the type of pretrial diversion used because DOJ’s data entry processes do not allow USAO staff to make entries according to the type of pretrial diversion used. According to EOUSA officials, the Legal Information Office Network System (LIONS)—EOUSA’s case management system—is set up so that only a single disposition code can be used by USAO staff when entering a diversion case into the system. Consequently, both Title 9 pretrial diversion cases and cases that have been diverted through court-involved pretrial diversion programs are recorded simply as pretrial diversion.
EOUSA officials stated that given the volume of complex data that are already required to be entered into the system for any given case, it can be difficult to add new codes into the data entry process and ensure they are being entered correctly and consistently across all districts where the data are being entered. However, while the officials recognized the need to revise the system to improve the data and make them more specific and useful, they did not identify any specific actions or plans to do so. Second, DOJ’s pretrial diversion data have limited reliability due to potential variability as to when and whether the pretrial diversion code is entered into LIONS by a USAO. According to EOUSA officials, while DOJ has established some coding policies for pretrial diversion in LIONS, it has not provided specific guidance as to when in the process USAOs are to enter cases under the pretrial diversion disposition code. This could result in inconsistent and unreliable data on the use of pretrial diversion. For example, according to officials, some USAOs may enter the pretrial diversion code into the system for a case when the offender enters into a diversion program, while other USAOs may wait until the offender has completed the program. The officials noted that there may not be a record of all instances in which an offender enters a pretrial diversion program but does not successfully complete it. For instance, if an offender does not successfully complete the pretrial diversion program and the USAO subsequently files charges against the offender, the USAO may solely enter the charges filed against the offender in LIONS, but never indicate that the offender first entered a pretrial diversion program and then did not successfully complete it. As a result, EOUSA’s data may not consistently capture the total number of instances in which such diversion is offered.
EOUSA officials stated that they have not provided specific guidance on when to enter pretrial diversion codes in LIONS because of the relatively small number of diversion cases relative to the total cases handled by USAOs that would require such coding and to mitigate the likelihood of further complicating the data entry process for USAO staff. However, EOUSA officials recognized the potential value in being able to comprehensively track the data to help it determine what types of pretrial diversion are being used and in what districts. One of the key principles of the Smart on Crime Initiative is for DOJ to pursue alternatives to incarceration for low-level, nonviolent offenders, and DOJ has specifically recommended the use of court-involved pretrial diversion practices as a means of putting this principle into action. According to Standards for Internal Control in the Federal Government, management should ensure that events are being recorded in an accurate and timely manner. Further, the standards also state that information should be recorded and communicated in a form and within a time frame that enables management to carry out its responsibilities. In addition, the recently updated standards that went into effect at the start of this fiscal year further clarify that agency management should use quality information to achieve the entity’s objectives, which can include obtaining relevant data from reliable sources that are reasonable, free from error and bias, and faithfully represent what they purport to represent. The updated standards also state that management should process the data into quality information, and use the information to make informed decisions and evaluate the entity’s performance in achieving key objectives.
By taking steps to revise its case management system to separately track the use of Title 9 diversion and court-involved pretrial diversion programs, and issuing guidance to USAOs as to how and when to use the codes—for instance, when the offender enters the program, completes the program, or both—DOJ would have more reliable and complete data to determine what types of pretrial diversion are being used, in what districts, how frequently, and how successfully. In turn, DOJ would also be better positioned to revise its guidance and direction, as necessary, to USAOs on how they might use pretrial diversion alternatives to more effectively support the Smart on Crime Initiative. According to BOP officials, when placing inmates into incarceration alternatives they consider factors that are in accordance with BOP policy and guidance, which also provides for the overall process for identifying and placing eligible and appropriate inmates into the incarceration alternatives of RRCs and home confinement. In particular, according to this policy and guidance, the eligibility requirements for an inmate’s placement into the alternatives have been set by the Second Chance Act of 2007. Moreover, according to BOP guidance, in addition to considering the basic eligibility requirements, BOP staff must consider the appropriateness of placing inmates into RRCs and home confinement as well as evaluate each inmate for their individual reentry needs, risk for recidivism, and risks posed to the community by placing them in RRCs or home confinement. For example, BOP guidance states that research has shown inmates with low reentry needs and a low risk of recidivating do not benefit from placement in an RRC and could become more likely to recidivate than if they were not placed. Therefore, according to BOP guidance, home confinement is BOP’s preferred option for inmates with low needs and low risk.
BOP’s policy and guidance lay out a multistep process for placing inmates into the alternatives once eligible inmates are identified. A variety of BOP and other officials are involved in the process, such as BOP officials at the institution, Residential Reentry Managers (RRMs), contract staff at RRCs, and PPSO officials, depending on the type of alternative being considered. Figure 3 summarizes BOP’s process for placing inmates into RRCs and home confinement. BOP officials and RRC contractors with whom we spoke reported that they consider factors identified in BOP policy and guidance when attempting to place inmates into incarceration alternatives. In addition to the eligibility and appropriateness of an inmate for placement in an alternative, staff at 3 of the 4 BOP institutions and 3 of the 4 RRM offices we spoke with stated that they take into account factors such as whether the inmate has committed a sexual offense, because they must consider whether the locality of the RRC in which the inmate may be placed has any zoning restrictions that prohibit sex offenders from locating there. Officials at 3 BOP institutions and 2 RRM offices we spoke with also mentioned that they consider whether an inmate has any medical issues that may be difficult to manage in an RRC environment. Staff at all 4 RRM offices we spoke with indicated that another key factor they consider when placing an inmate into an RRC is the availability of bedspace within their desired placement area. Officials at 3 of the 4 contracted RRCs we met with indicated that when reviewing referrals for possible placement they also pay attention to public safety factors such as whether the inmate is a sexual offender, a member of a gang, or might otherwise pose a threat to RRC staff in general.
In addition to placing home confinement eligible and appropriate inmates with contracted RRCs for monitoring, BOP’s process also allows RRMs the option to refer inmates into home confinement through a joint BOP-PPSO program known as the Federal Location Monitoring (FLM) Program. If accepted into the program by PPSO, the inmate is supervised by a PPSO officer instead of RRC staff while on home confinement. PPSO officials in 6 of the 9 districts among our selected districts that were participating in the FLM program stated that they considered factors such as the inmate’s potential risks to public safety, such as whether the inmate is a sex offender, as well as whether the inmate’s proposed living arrangement met program requirements when determining whether to accept the inmates into the program. From fiscal years 2009 through 2015, BOP increasingly placed inmates into RRCs and home confinement, with inmates designated as minimum and low security making up the two largest groups of inmates in RRCs and home confinement. According to BOP data, the total number of inmates placed into RRCs or home confinement during this period increased by about 16 percent, from about 28,400 in fiscal year 2009 to almost 33,000 in fiscal year 2015. As figure 4 illustrates, relative to inmates of other security levels, minimum security inmates represented the largest numbers of inmates being placed in RRCs and home confinement overall, with low security inmates representing the second largest inmate group. During the seven-year period of our analysis, BOP significantly increased its use of home confinement among low and, especially, minimum security inmates. For instance, the placement of inmates into home confinement overall, either directly or subsequent to being in an RRC, increased by 67 percent—from 4,594 to 7,675—for minimum security inmates and 58 percent—from 2,060 to 3,247—for low security inmates from fiscal years 2009 through 2015.
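The percentage changes reported above follow directly from the placement counts in the text. As an illustrative check (all counts are taken from this report; the fiscal year 2015 total is the approximate figure of 33,000), the arithmetic can be sketched as:

```python
def pct_change(start, end):
    # Percentage change from start to end, rounded to the nearest whole percent
    return round((end - start) / start * 100)

# Home confinement placements, fiscal year 2009 vs. fiscal year 2015
print(pct_change(4594, 7675))    # minimum security inmates -> 67
print(pct_change(2060, 3247))    # low security inmates -> 58

# All RRC/home confinement placements (approximate counts from the report)
print(pct_change(28400, 33000))  # -> 16
```

Each computed value matches the percentage stated in the report.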
Relative to the increased use of home confinement, placement of minimum and low security inmates into RRCs grew more slowly or declined slightly. For example, in fiscal year 2009, 98 percent of minimum security inmates were placed in RRCs at some point, whereas by fiscal year 2015, the percentage had declined to 87 percent. Although the total number of low security inmates placed in an RRC at some point increased from about 8,000 in fiscal year 2009 to almost 9,100 in fiscal year 2015, the overall percentage of low security inmates placed into RRCs declined from 99 percent in fiscal year 2009 to 97 percent in fiscal year 2015. Figure 5 illustrates the relative changes in the use of RRCs and home confinement among minimum and low security inmates from fiscal years 2009 through 2015. According to BOP officials, the increased use of RRCs and home confinement is consistent with the Second Chance Act, corresponding BOP implementing guidance, and BOP goals. For example, one of the objectives of the Second Chance Act was to expand the use of alternatives as a means to assist offenders overall in reentering society and establishing a self-sustaining and law-abiding life. Similarly, BOP officials noted that BOP issued guidance in 2010 and 2013 specifically encouraging the use of direct home confinement for lower risk inmates, in order to provide bed space at RRCs for higher risk inmates. Consequently, the officials stated that while more inmates were placed in RRCs and home confinement overall, minimum and low security inmates were specifically targeted for placement in home confinement whenever possible. Within its strategic plan, BOP has specified two measures to track placement of inmates into RRCs and home confinement. For its RRC measure, BOP aims for its institutions to place at least a certain percentage of their inmates into RRCs, with the specified target percentages varying according to their security level.
For the first and second quarters of fiscal year 2015, both minimum and low security institutions exceeded the target set for them for this measure. For the second measure, related to the use of home confinement, BOP aims for its Residential Reentry Management Branch to maintain 40 percent or more of home confinement eligible inmates in home confinement. BOP has come close to—but not met—this goal. From April 2015 to September 2015, the most recent period for which data were available, the monthly percentage of home confinement eligible RRC inmates in home confinement fluctuated between 36.4 percent and 38.4 percent. According to BOP officials, BOP has not met its stated goal largely because of factors outside of its control, such as inmates lacking the resources and ability to locate and prepare an acceptable home location to be placed in home confinement in a timely manner. As with the increased use of home confinement in general, BOP has also increased utilization of the FLM program as a means to provide home confinement to inmates, especially for minimum security inmates, as shown in figure 6. For example, the number of minimum security inmates going into the FLM program increased from 281 in fiscal year 2009 to 592 in fiscal year 2015, a 111 percent increase. During this same time period, the total number of low security inmates going into the FLM program (both directly and subsequent to placement in an RRC) increased from 97 in fiscal year 2009 to 157 in fiscal year 2015, an increase of 62 percent. The FLM program is currently available in over half of the federal judicial districts, and BOP officials have encouraged the expansion of the program into additional districts, as they noted that the program can provide cost advantages relative to home confinement through an RRC.
According to headquarters PPSO officials, approximately 51 of the 94 federal judicial districts nationwide were participating in the FLM program in fiscal year 2015, nearly double the number of districts participating in 2010. To foster further expansion of the program, BOP headquarters officials stated that they continue to discuss and encourage the expansion of the program into additional districts where possible with probation officials at both the headquarters and district levels. To encourage its use, in 2013 BOP issued an internal memo for BOP staff regarding RRC and home confinement placements stating that RRMs should consider using the FLM program for home confinement to the maximum extent possible where it is available. In terms of cost, according to BOP headquarters officials, the average cost to BOP of PPSO supervising an inmate in home confinement under the FLM program is $15 per day, whereas the average cost for an RRC to supervise an inmate on home confinement is $40 per day. Consequently, because the daily cost of home confinement through the FLM program is less than half that of home confinement through an RRC, effective utilization of the FLM program can potentially yield cost savings, according to the officials. Despite BOP’s increased use of the FLM program in recent years, our interviews with BOP and PPSO officials at headquarters and within our selected districts suggest that usage may vary across districts. For example, the program may be less utilized in some areas depending on the terms of the contracts BOP has with RRC operators. Of the PPSOs in our 11 selected districts, 9 reported participating in the FLM program. Of those 9 districts, 1 reported moderate use of the program, while 8 reported that the program was either underutilized relative to available capacity or that BOP had not made any referrals to the program, or if it had, BOP did not ultimately place the inmates in the FLM program.
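The per-day figures cited by BOP headquarters officials ($15 for PPSO supervision under the FLM program versus $40 for RRC-supervised home confinement) imply a per-inmate savings that can be sketched as follows; the 90-day placement length is a hypothetical example for illustration, not a figure from this report:

```python
FLM_DAILY_COST = 15  # average daily cost of PPSO supervision under FLM (per report)
RRC_DAILY_COST = 40  # average daily cost of RRC-supervised home confinement (per report)

def flm_savings(days):
    # Estimated per-inmate savings from using FLM instead of an RRC for home confinement
    return (RRC_DAILY_COST - FLM_DAILY_COST) * days

# Hypothetical 90-day home confinement placement
print(flm_savings(90))  # -> 2250
```

At these rates, each day an inmate spends under FLM rather than RRC supervision saves BOP $25, which is why wider use of the program could potentially yield savings.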
Officials in 2 of the 4 BOP RRM offices with whom we spoke noted that the program is generally used as more of a backup option to place inmates into home confinement whose desired home sites are more remote and not within the service area of an RRC. According to BOP headquarters officials, RRC contract terms may either require BOP to use the RRCs for home confinement within the RRCs’ service areas or guarantee RRC operators a minimum quota of use of their home confinement services. The officials stated that, consequently, depending on the terms of RRC contracts in place in specific areas, RRMs may generally prefer to use RRCs for home confinement in order to first satisfy RRC contract requirements, which may result in less utilization of the FLM program. BOP has also faced some instances where inmates referred to the FLM program have been rejected by a PPSO. For instance, PPSO officers in 5 of the 9 districts with whom we spoke noted that they had rejected FLM referrals from BOP at some point. The reasons for the rejections varied—2 of the districts noted they rejected referrals because the inmates were unable to secure acceptable living arrangements; 2 of the districts stated that BOP’s referrals were deemed to be too high risk to accept (e.g., sex offenders); and 1 district rejected BOP’s referrals because they were not made using the appropriate referral process—that is, BOP did not submit the referral through BOP’s RRM. Further, an official in 1 of the 4 RRM offices with whom we spoke stated that he referred inmates to the FLM program, but most were subsequently rejected by the local PPSO, which told him the inmates were not appropriate for the program—for instance, because the inmates referred were higher risk than acceptable. BOP headquarters officials stated that the rejection of referrals for risk reasons is likely due to the fact that BOP and PPSO use different risk assessment tools, which may result in different risk scores for inmates.
The officials also noted that regardless of risk scores, the Chief Probation Officer in each district has the final discretion to accept or reject inmates as he or she deems appropriate for the district. The fiscal year 2015 interagency agreement between BOP and PPSO for the FLM program calls for BOP and PPSO to jointly develop additional plans for identifying and selecting inmates, which could help reduce rejections. According to BOP and PPSO officials, the interagency agreement itself identifies the basic criteria for identifying and selecting inmates into the FLM program. Officials with BOP and PPSO at headquarters stated that they maintain an ongoing dialogue with each other about the FLM program and regularly discuss the referral process, including any unique cases as well as any other related process issues or concerns. According to BOP headquarters officials, they have not issued additional formal guidance on the FLM program, beyond the interagency agreement, because the ability to participate in the FLM program differs across districts depending on the workload and capacity of the PPSO. However, both BOP and PPSO officials at headquarters stated that they regularly communicate to promote use and understanding of the program across districts, as well as to help minimize and address any rejections from the program, as envisioned by the agreement. DOJ has not measured the outcomes or identified the cost implications of the Title 9 and court-involved pretrial diversion programs, even though DOJ has decision-making power over, and expends resources on, these incarceration alternatives, which are carried out at or before sentencing. While the department has conducted a survey to identify which USAOs use court-involved pretrial diversion practices and to obtain any evaluations that USAOs have conducted, the survey did not yield meaningful information on program outcomes.
According to an EOUSA official, in late 2014 EOUSA surveyed the USAOs regarding the implementation of the department’s Smart on Crime Initiative. The survey asked about their use of court-involved pretrial diversion practices, such as a presentence diversion court, and, if they used such a court, whether the court was evaluated or assessed. According to the survey results, 16 of 93 USAOs responded that their districts were using a court-involved pretrial diversion practice, and some respondents provided descriptive information on the number of participants and program operations. According to the EOUSA official, when responding to the survey question as to whether the court or practice was assessed or evaluated, only one office provided documentation—a 2013 summary that a PPSO officer compiled of the accomplishments of the district’s pretrial diversion court and the potential cost savings realized through the use of the court. EOUSA officials stated they conducted another survey of the USAOs in late 2015 that asked similar questions about the use of court-involved pretrial diversion practices, but did not include the question about whether the practice was evaluated. The officials stated that they expect to have results from the survey in the spring of 2016. According to DOJ officials, the information from these surveys is to inform a key indicator DOJ created for the Smart on Crime Initiative that tracks the number of diversion courts. However, while the data from survey responses may provide information on how many districts are using the practices, the data will not provide systematic information on the costs or outcomes associated with the use of those practices. Beyond the descriptive information gathered from the survey, DOJ has not obtained data that would help it to measure the outcomes or cost implications of the use of Title 9 and court-involved pretrial diversion programs.
According to an EOUSA official, DOJ has not yet measured the outcomes and cost implications of pretrial diversion programs because it lacks the resources that would be required to conduct a comprehensive evaluation. Specifically, the EOUSA official suggested that a third party, such as a research institute, would be best suited to conduct an in-depth evaluation and that hiring such a third party would require resources that are not presently available from DOJ. Further, of our 6 selected districts that were using court-involved pretrial diversion practices, officials at the USAOs and PPSOs in 2 districts stated that they are in the process of attempting to use outside entities, such as graduate-level students or faculty at local universities, to conduct evaluations of those practices. For example, officials with the USAO in the Western District of Washington stated that they have requested grant funding to have the academic community work with them to evaluate their court-involved pretrial diversion program to determine how the program can expand. However, the funding had not yet been awarded. In the Central District of California, PPSO officials stated that they were in the process of selecting researchers from a local university to conduct a multiyear evaluation of their district’s practice, but they had not yet made the selection. EOUSA and USAO officials with whom we spoke also reported that DOJ has not yet measured the outcomes and cost implications of pretrial diversion programs because of the lack of sufficient long-term data. According to an EOUSA official, most court-involved pretrial diversion practices are relatively new; consequently, most participants in practices across the districts have not completed the programs and any subsequent supervision period, making it difficult to accurately measure long-term outcomes.
For example, of the 17 districts using court-involved pretrial diversion practices, 5 districts reported using such practices for 5 years or more. According to an EOUSA official, considering the relatively short time most of these practices have been in operation, the length of time required for participants to complete pretrial diversion programs (usually one to two years), and any subsequent period of post-conviction supervision that may be required afterwards, the number of participants available to evaluate who have fully satisfied all of their obligations is relatively limited. Further, according to the USAO staff we met with in the 6 districts using court-involved pretrial diversion practices, they have a general awareness of how many offenders had been placed in the alternatives and how many have successfully completed them, but they do not track these data systematically because such data are not required by DOJ to maintain caseload counts and dispositions. We recognize that tracking the necessary data and measuring the outcomes and cost implications of pretrial diversion programs would require resources and time. However, measuring and evaluating costs and outcomes would not necessarily require hiring a third party to conduct an assessment of diversion programs across all federal districts. For example, according to a PPSO official in the Eastern District of New York, judiciary officials in a number of districts that had implemented court-involved pretrial diversion programs have developed mechanisms to obtain data and measure some of the cost implications and outcomes of these programs, and were doing so without the use of a third party. For instance, judiciary officials in some districts have developed estimates of cost savings realized from the use of court-involved pretrial diversion programs, and PPSO officials in the Eastern District of New York compiled and publicly reported on these estimates in August 2015.
See table 6 for the cost estimates reported by the Eastern District of New York. Additionally, judiciary officials have also tracked data related to the outputs and outcomes of court-involved pretrial diversion practices. For instance, officials in 7 districts have tracked data on the number of offenders successfully completing the programs. This information was collected and compiled by the Eastern District of New York and reported in August 2015, as shown in table 7. We have previously reported that tracking successful completion can be a proxy measure for the effectiveness of deferred prosecution and non-prosecution agreements DOJ has used in lieu of prosecuting corporations for corporate crime. Such agreements are similar in function to the type of agreements used in diverting individual offenders through pretrial diversion. In addition to these estimates and data, as another means of measuring outcomes, judiciary officials in 3 of our 6 selected districts that use court-involved pretrial diversion practices reported informally tracking recidivism rates of participants who have successfully completed the practices. For instance, officials from the PPSO in the Southern District of California estimated a recidivism rate of 2.8 percent for individuals who successfully completed the program in their district, while PPSO officials in the Central District of California and the Central District of Illinois reported that individuals who completed the respective practices in their districts had not committed any new crimes to their knowledge. According to DOJ’s current strategic plan, one of its objectives is to reform and strengthen the country’s criminal justice system by targeting the most serious offenses for federal prosecution and expanding the use of diversion programs, among other things.
Consistent with that objective, the Smart on Crime Initiative includes, as one of its key principles, the pursuit of alternatives to incarceration for low-level, nonviolent crimes. As part of the Initiative, DOJ has encouraged its prosecutors to consider the use of alternatives to incarceration and specifically encouraged more widespread adoption of diversion programs and practices such as drug courts and other specialty courts across the districts. For example, DOJ issued a memorandum in August 2013 to its USAOs that cited as examples several existing court-involved pretrial diversion practices, stated that the use of such programs or practices can be part of an effective prosecution program, and identified the potential for cost savings from the use of these programs based on experiences at various districts. Given that pretrial diversion programs can help DOJ achieve its strategic objectives and the Smart on Crime Initiative, it is important for DOJ to be able to track data related to their use and cost, and to measure their impacts. According to Standards for Internal Control in the Federal Government, among other things, agency management should establish and operate monitoring activities to evaluate the results of its efforts and programs. We understand that some judicial districts may not have had pretrial diversion programs in place long enough to fully track or assess the outcomes of a large number of offenders who have completed the programs. However, according to a resource cited by the Office of Management and Budget on program evaluation, while activities such as performance measurement are useful at all stages of a program’s maturity, they can be particularly useful for providing evidence about how programs are working in the early years of a program’s history, when impacts on program outcomes may not be detectable and rigorous, high-quality impact evaluations are not yet possible.
By obtaining data on the costs and outcomes of pretrial diversion programs and establishing performance measures, DOJ would gain multiple advantages in its ability to manage these programs and optimize their outcomes and cost implications. First, having such data and measures available would better position DOJ to determine if pretrial diversion programs are effectively contributing to the achievement of department goals and initiatives. Second, such data and measures would better position DOJ to manage and provide additional guidance to the districts using the programs and practices, as necessary, to make their use more effective. Third, with information on the outcomes and cost implications of the existing programs, DOJ would be better positioned to determine whether and how it should encourage the use of such programs. Finally, should DOJ decide to pursue a more in-depth evaluation by an outside entity of the long-term impacts and outcomes of the programs and practices, having such data and measures in place would better position DOJ to inform and facilitate that evaluation. BOP collects data on the costs of RRCs and can measure those costs, but it does not collect data that would help it measure the outcomes of RRCs, nor does it measure those outcomes. In particular, through the contracts it has with RRC operators, BOP has data on and has the ability to track the cost of placing inmates in RRCs. According to BOP, the total cost of RRCs in fiscal year 2015 was almost $360 million. Further, BOP can calculate the average daily cost for placing an inmate into an RRC, and can compare the cost with the daily cost of housing inmates in minimum, low, medium, and high security institutions. Specifically, according to our review of BOP data, the daily cost for placing an inmate into an RRC is greater than the cost to incarcerate the inmate in a minimum security institution, but less than incarcerating an inmate in low, medium, or high security institutions. 
For example, in 2015, the daily per capita cost for placing an inmate in an RRC was about $71. In comparison, as shown in figure 7, the daily cost in 2015 to house an inmate in minimum security was about $66, while the cost for low, medium, and high security institutions was about $80, $81, and $101, respectively. According to BOP officials, RRCs are more costly than some BOP-operated institutions because RRCs tend to be located in more urbanized areas in which it is usually more expensive to operate. Locations for RRCs are selected after BOP RRM field offices identify a need for RRC services in a specific area. Factors BOP identified as considerations when locating an RRC include the number of beds needed as determined by the number of inmates projected to release to the area, prosecution trends, new initiatives, and contacts with other federal law enforcement agencies. Based on our comparison of the locations of RRCs BOP used in 2015 with Census Bureau data on urbanized areas, we found that 173 of the 175 RRCs serving adult inmates were located in urbanized areas. Further, BOP can track and report the daily costs of individual RRCs, which can vary widely due to additional factors or features specific to the RRC. For example, according to BOP data in 2015, the daily costs to BOP for placing an inmate in an RRC averaged $89 but ranged from about $45 in Oklahoma City, Oklahoma, to about $164 in Brawley, California. According to BOP officials, the variation in daily costs between RRCs is due to a variety of factors, such as facility sizes/inmate bed counts, variances in programming requirements, geographic location, and services offered for special populations such as mothers with infants. 
BOP officials also noted that while the RRCs may generally be more expensive than incarceration in minimum security facilities, the primary reason for using alternatives such as RRCs is not to reduce immediate operational costs, but to provide inmates with an opportunity to adjust to life outside of an institution and ease their transition back into society from incarceration. Similarly, BOP can measure the costs to place inmates into home confinement. For inmates in home confinement under the FLM program, BOP officials stated that the average cost to BOP is about $15 per inmate per day. According to BOP, although the cost of home confinement varies depending on the contract terms and location, the daily cost to BOP of an inmate in home confinement is no more than 50 percent of the daily cost for an inmate placement into the supervising RRC. However, BOP officials stated they are in the process of updating their contracts to more precisely track home confinement costs through RRCs. As we reported in February 2012, BOP at the time did not require contractors who provide both RRC and home confinement services to separate out the price of home confinement services, and thus did not know the actual costs of home confinement. Consequently, we recommended that BOP establish a plan for requiring contractors to submit separate prices for RRC beds and home detention services. BOP concurred and determined that all new solicitations as of February 1, 2013, would have separate line items for RRC and home confinement services. BOP officials stated that at this time, current home confinement contracts are a mix of the two types, but that as the older contracts expire, new ones with separate line items for home confinement services will be implemented. Once all contracts have a separate line item, BOP officials stated BOP would be better able to identify the precise costs of home confinement going forward. 
While BOP can measure the overall costs of RRCs and home confinement, it does not track the information needed to help measure their outcomes and does not have such measures in place. For example, one of the goals in BOP’s strategic plan calls for BOP to, among other things, provide services and programs to address inmate needs and facilitate the successful reentry of inmates into society. As mentioned previously, as part of its strategic plan, BOP has specified two measures to track placement of inmates into RRCs and home confinement—one measuring institutions’ placement of inmates into RRCs by security level, and the other measuring the Residential Reentry Division’s placement of home confinement-eligible inmates in home confinement. However, neither of these measures assesses the outcomes of RRCs and home confinement, such as how they relate to the recidivism rates of inmates. The GPRAMA of 2010 requires agencies to have outcome-oriented goals for major functions and operations and an annual performance plan consistent with that strategic plan with measurable, quantifiable performance goals. Although GPRAMA requirements only apply at the DOJ level, we have previously reported that they can serve as leading practices for performance planning and measurement at lower organizational levels, such as bureaus, offices, and individual programs. Specifically, GPRAMA requires agencies to set performance goals and measures each year and measure progress against those goals. According to GPRAMA, performance measurement allows agencies to track progress in achieving their goals and provides information to identify gaps in program performance and plan any needed improvements. According to BOP, RRCs provide programs that are intended to help inmates rebuild their ties to the community and to thereby reduce the likelihood that they will recidivate. 
The current measures BOP tracks are useful for monitoring the near-term use of RRC bedspace and home confinement relative to targets and in planning for future RRC bedspace and home confinement capacity. However, these measures do not yield information or insight into the potential benefits the alternatives provide after inmates use them, or into potential areas for program improvement. While BOP headquarters officials stated that they were aware of an effort by the Office of the Deputy Attorney General to solicit an outside contractor to evaluate and measure the outcomes provided by BOP’s use of RRCs and home confinement, DOJ was unable to provide any additional information or documentation on the details of this intended evaluation. Without data or measures to assess the outcomes of RRCs and home confinement, BOP does not know whether RRCs and home confinement—programs intended, in part, to help facilitate the successful reentry of inmates into society—are contributing to its strategic goal in this area. Given the limitations of BOP’s current measures, taking additional steps to develop more outcome-oriented measures could enable BOP to better track the outcomes of the alternatives in achieving BOP goals. BOP officials stated that measuring the outcomes of alternatives such as RRCs and home confinement is difficult due to methodological challenges, such as the need to designate a control group of inmates for comparison that fully accounts for the diverse characteristics and reentry needs of the inmates. We recognize the challenge in conducting such a rigorous study; however, other options are available to assess the outcomes of RRCs and home confinement that may pose fewer challenges, such as measuring how frequently offenders who have gone through RRCs or home confinement reoffend or find jobs. 
For example, in an August 2015 testimony, the former BOP Director cited statistics on the percentage of inmates released from federal prison who were rearrested, had their supervision revoked, or returned to federal prison within 3 years. Given that BOP has recidivism data available on former inmates, BOP may be able to develop similar statistics for inmates who had served time in an RRC or home confinement. As another approach to obtain data and develop performance measures, BOP could conduct surveys of inmates who have completed time in RRCs or home confinement to get their perspectives and feedback on the outcomes of RRCs and home confinement in helping them to transition back into the community. Further, during the course of our review, BOP headquarters officials stated that under the direction and guidance of the Office of the Deputy Attorney General, a project was initiated to contract for an analysis of BOP's current RRC model and identify specific recommendations for improvement. Among other things, this analysis is to assess the degree to which current RRC programming addresses criminogenic needs, reduces recidivism, and meets the programmatic needs of the reentering population. In addition, the analysis is to provide recommendations for monitoring performance including identifying benchmarks, goals, and performance targets to measure and monitor outcomes. According to BOP officials, the contract was signed in April 2016 and the report is expected to be released during the summer of 2016. Given the scope and intent of this analysis, its results may provide BOP insights into its use of RRCs and home confinement. However, because this analysis is still in process, it is too early to determine the extent to which the results of this analysis will be helpful to BOP in identifying potential data and measures to monitor the outcomes of RRCs and home confinement. 
Regardless of the measure or method BOP determines to be most appropriate, by tracking data and developing performance measures to monitor the outcomes of RRCs and home confinement, BOP would be better positioned to determine how those alternatives are contributing to its goal of helping inmates successfully reenter society, and how to adjust its policies and procedures for the use of these alternatives, as necessary and within statutory requirements, to optimize the net benefits they can provide. To help reduce the overall size and costs of the federal prison population, DOJ components such as USAOs, in coordination with judicial branch stakeholders such as PPSO and federal judges, have utilized alternatives to incarceration for low-level offenders and minimum and low security inmates at various stages of the criminal justice process. DOJ has taken some initial steps to collect data and measure its efforts for several of these alternatives. However, DOJ’s data on the use of pretrial diversions are of limited usefulness and reliability because EOUSA’s case management system does not distinguish between the different types of diversion and DOJ has not provided guidance to USAOs as to when and how pretrial cases are to be entered into the system. Additionally, because DOJ does not track data on the outcomes and costs of its pretrial diversion programs, it does not know the extent to which the programs are achieving the department’s goals. Tracking the use of Title 9 diversion and court-involved pretrial diversion programs using separate codes, and issuing guidance to USAOs as to what codes to use and when to use them, would provide DOJ more reliable and complete data on the overall use of pretrial diversion across districts. 
Further, by taking steps to obtain and track data on the outcomes of the programs and developing performance measures for its use of pretrial diversion, DOJ would be better able to determine the extent to which the alternatives are contributing to the achievement of DOJ goals and objectives and what adjustments to policies and procedures, if necessary, may make them more effective. Moreover, for the alternatives used at the end of inmates’ sentences, because BOP has not assessed the outcomes of RRCs and home confinement, it does not know whether RRCs and home confinement, which are intended, in part, to help facilitate the successful reentry of inmates into society, are in fact doing so. By tracking data and developing performance measures for RRCs and home confinement, BOP would be better positioned to determine how these alternatives are contributing to its reentry goals, adjust policies and procedures, as needed, and optimize their benefits. To help ensure that USAOs consistently track the extent of use of all pretrial diversion alternatives, the Attorney General should direct the EOUSA to take the following two actions: revise its data system to allow it to separately identify and track Title 9 and court-involved pretrial diversion alternatives; and develop guidance on the appropriate way to enter data on the use of Title 9 and court-involved pretrial diversion alternatives, including the timing of entry and use of revised codes. To help determine if pretrial diversion programs and practices are effectively contributing to the achievement of department goals and enhance DOJ’s ability to better manage and encourage the use of such programs and practices, the Attorney General should take the following two actions: identify, obtain, and track data on the outcomes and costs of pretrial diversion programs; and develop performance measures by which to help assess program outcomes. 
To determine how the use of RRCs and home confinement contributes to its goal of helping inmates successfully reenter society, and to better enable BOP to adjust its policies and procedures for the optimal use of these alternatives, as necessary and within statutory requirements, the Director of BOP should take the following two actions: identify, obtain, and track data on the outcomes of the programs; and develop performance measures by which to help assess program outcomes. We provided a draft of this report to DOJ, the AOUSC, and the USSC for review and comment. The AOUSC provided written comments, which are reproduced in appendix I. The USSC did not provide comments. In an e-mail we received May 27, 2016, DOJ’s audit liaison stated that DOJ concurred with all of our recommendations and provided comments, which we incorporated as appropriate and have further addressed below. In particular, in our draft report, we recommended that EOUSA identify, obtain, and track data on the outcomes and costs of pretrial diversion programs, and develop performance measures to help assess program outcomes. The DOJ liaison stated that implementing these two recommendations would be the responsibility of the department, not EOUSA exclusively. As a result, we directed these two recommendations to the Attorney General. In addition, the liaison provided information about efforts taken in April 2016, during the course of our review, by the Office of the Deputy Attorney General and BOP to solicit an outside contractor to evaluate and measure the outcomes provided by BOP’s use of RRCs and home confinement contracts. We reviewed and incorporated this information in this report, and will continue to monitor the implementation of this contract to identify whether it meets the spirit of our recommendation. 
Moreover, the DOJ liaison stated that BOP does not view inmates’ placement in RRCs and home confinement as incarceration alternatives when it is done pursuant to BOP’s statutory authority. As noted earlier in the report, we acknowledge BOP’s position, but, for the purposes of this report, we consider RRCs and home confinement to be alternatives to incarceration because they allow inmates to serve a portion of their sentences outside of a prison environment. Additionally, the liaison stated that BOP had concerns about our comparison of the daily cost to house an inmate in an RRC with the daily cost to house an inmate in a minimum, low, medium, and high security institution. Specifically, BOP believed our comparison was misleading because the costs shown for BOP institutions include the additional support costs (e.g., staffing, food, medical services) that BOP incurs when housing an inmate at one of its facilities and that such costs are not incurred by BOP when an inmate is at an RRC. The cost information we presented was taken directly from a table prepared by BOP that presents the same information for public disclosure on BOP’s website. We believe our comparison accurately reflects the total out-of-pocket costs to BOP for placing inmates in its institutions and RRCs because for the RRCs, those additional support costs are either the RRCs’ responsibilities under their contracts with BOP or, in the case of medical services, are the inmates’ responsibility while at an RRC. However, to help provide context, we revised our discussion to include additional information on the support costs BOP incurs at its institutions. Further, we state in the report that one option available to BOP to assess the outcomes of RRCs and home confinement could be measuring how frequently offenders who have gone through RRCs or home confinement reoffend. 
In the emailed comments, the DOJ audit liaison stated that BOP does not believe recidivism data should be used as a performance measure for RRCs due to external and unique factors that may impact the likelihood an individual will recidivate, such as economic conditions. In addition, the DOJ liaison stated that recidivism indicators are a negative measurement of criminal actions that do not consider positive behavior or successful adjustment of the offender, while the re-integrative model and definition of RRC programs mandate a measure of positive behavior or adjustment which is very difficult to quantify or measure. We cited the potential use of recidivism or re-offense indicators as one example of using currently available data to attempt to assess outcomes of the use of RRCs and home confinement. In our report, we also offered other examples of potential positive outcomes or adjustments BOP could track and measure, such as tracking measures related to inmates’ ability to find jobs or the value of RRCs and home confinement to inmates in helping them to transition back into the community as shown through results of surveys of inmates who have completed time in RRCs or home confinement. We defer to BOP to determine which measures are most appropriate. While we acknowledge the challenge in establishing such measures, we continue to believe it is important for BOP to identify, obtain, and track data on the outcomes of the programs and develop appropriate performance measures in order to be better able to monitor its use of RRCs and home confinement as a means to achieve its goal of helping inmates successfully reenter society. AOUSC and DOJ also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Department of Justice; the Administrative Office of the U.S. Courts; the U.S. Sentencing Commission; appropriate congressional committees and members; and other interested parties. 
In addition, this report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions, please contact Diana Maurer at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix II. In addition to the contact named above, Jill Verret (Assistant Director), Pedro Almoguera, Erin Butkowski, Willie Commons, William Egar, Sally Gilley, Christopher Hatscher, Susan Hsu, Amanda Miller, and Jeff Tessin made key contributions to this report.
Since 1980, the federal prison population has increased from about 25,000 to almost 200,000, as of March 2016. In part to help reduce the size and related costs of the federal prison population, DOJ has taken steps to slow its growth by pursuing alternatives to incarceration at various stages of the criminal justice process for nonviolent, low-level offenders. Senate Report 113-78 included a provision for GAO to review DOJ's management of the federal prison population. This report (1) describes factors criminal justice stakeholders consider when using incarceration alternatives at or before sentencing and identifies the extent to which those alternatives are used, (2) describes factors BOP considers when using incarceration alternatives for inmates and the extent of their use, and (3) assesses the extent to which DOJ has measured the cost implications and outcomes of using the alternatives. GAO analyzed DOJ and federal judiciary branch data and documents from fiscal years 2009 through 2015, and interviewed DOJ and judiciary officials at headquarters and in 11 selected nongeneralizable judicial districts about the use of alternatives. GAO selected districts to provide geographic diversity and a mix of districts using and not using the alternatives. Department of Justice (DOJ) and federal judiciary officials reported considering numerous factors when using alternatives to incarceration at or before an offender's sentencing, but DOJ does not reliably track the use of some alternatives. A variety of alternatives can be used for offenders at or before sentencing, such as referral to state and local prosecutors, pretrial release, and probation. 
Other such alternatives include pretrial diversion programs, which divert certain offenders from the traditional criminal justice process into a program of supervision and services or into court-involved pretrial diversion practices, such as drug courts, that provide offenders an opportunity to avoid incarceration if they satisfy program requirements. DOJ and judiciary officials most commonly reported considering the presence of violence and the offender's role in the crime when determining use of an alternative at or before sentencing. Based on DOJ and judiciary data on referrals to other jurisdictions, pretrial release, and alternatives at sentencing, the overall use of such alternatives across districts was largely consistent during the periods for which data were available from fiscal years 2009 to 2015. However, DOJ data on the use of pretrial diversion are unreliable because DOJ's database does not distinguish between the types of pretrial diversions. Further, when and whether the use of pretrial diversion is recorded in the database varies across DOJ staff responsible for entering the data. By revising its system to track the different types of pretrial diversion programs, and issuing guidance as to when staff are to enter their use into its database, DOJ would have more reliable and complete data. DOJ's Bureau of Prisons (BOP) considers statutory requirements and risk levels when placing inmates into incarceration alternatives such as residential reentry centers (RRCs, also known as halfway houses) and home confinement, and has increased its use of alternatives, particularly home confinement, in the past seven years. In addition to the basic eligibility requirements, BOP evaluates inmates' needs for reentering society, risk for recidivism, and risks to the community if placed in RRCs or home confinement. 
For low-risk and low-need inmates, home confinement is the preferred alternative, according to BOP, and BOP increased its use by 67 percent for minimum security inmates and 58 percent for low security inmates from fiscal years 2009 through 2015. Relative to home confinement, use of RRCs grew at a slower pace for low security inmates and declined for minimum security inmates. DOJ has tracked some data on the cost implications of using incarceration alternatives, but could better measure their outcomes. For example, DOJ conducted a survey in 2014 and 2015 of U.S. Attorneys to obtain district-level information about the use of court-involved pretrial diversion practices. However, the data collected do not measure the outcomes or cost implications of the alternatives. For alternatives used at the end of inmates' sentences, BOP maintains data on the costs, such as average daily costs, of placing inmates in RRCs and home confinement. While BOP has measures in its strategic plan to monitor the use of RRCs and home confinement and has contracted for an analysis of its use of RRCs and home confinement that is expected to be completed during the summer of 2016, BOP does not currently track the information needed to help measure the outcomes of these alternatives. By taking steps to obtain outcome data and developing performance measures for the alternatives used, DOJ and BOP would be better able to determine the extent to which the alternatives are achieving their goals and objectives and what adjustments may be necessary to make them more effective. GAO recommends that DOJ enhance its tracking of data on use of pretrial diversions and that DOJ and BOP obtain outcome data and develop measures for the alternatives used. DOJ concurred.
DOD’s medical mission is twofold: a readiness mission and a benefits mission. The readiness mission requires DOD to maintain the needed availability of its uniformed medical personnel in order to support the armed forces during military operations. The benefits mission provides servicemembers, retirees, and their dependents with access to health care at its military hospitals and clinics throughout the United States and overseas. Military medical personnel are essential to maintaining DOD’s large and complex health system and are in great demand because of the need to treat injured or ill servicemembers and because advances in medical technologies require specialized personnel. They simultaneously support contingency operations, military operations that are more routine in nature, medical research efforts, and the delivery of beneficiary health care to patients across the globe. The management organization of DOD’s Military Health System comprises many levels. The Assistant Secretary of Defense for Health Affairs is the principal advisor for all DOD health policies, programs, and force health protection activities, and this official reports to the Under Secretary of Defense for Personnel and Readiness, who in turn reports to the Secretary of Defense. Health Affairs issues policies, procedures, and standards that govern DOD medical programs and has the authority to issue DOD instructions, publications, and directive-type memoranda that implement policy approved by the Secretary of Defense. It integrates the services’ submissions and prepares, presents, and justifies a unified medical budget that provides resources for the Military Health System. Health Affairs is also authorized to communicate directly with the heads of DOD components regarding these issues. 
Additionally, Health Affairs develops policies and standards to ensure effective and efficient results through the approved joint process for joint medical capabilities integration, clinical standardization, and operational validation of all medical material. The Secretaries of the Departments of the Army, Navy, and Air Force are responsible (subject to the authority, direction, and control of the Secretary of Defense) for the operation and efficiency of their departments. In addition, the service secretaries issue implementation instructions to their departments based on policies that Health Affairs develops. By law, the service secretaries are also responsible (again, subject to the authority, direction, and control of the Secretary of Defense) for promoting cooperation and coordination among the military departments and defense agencies to provide effective, efficient, and economical administration, and to eliminate duplication. The Army, Navy, and Air Force have their own Surgeons General who have overall responsibility for medical operations within their respective departments. Within the Army, the Army Surgeon General simultaneously heads the Army Medical Department and the Army Medical Command. In leading the Army Medical Department, the Surgeon General serves as the primary advisor to the Secretary of the Army on all health and medical issues. In addition, the Army Surgeon General has overall responsibility for the Armywide health services system to include development, policy direction, organization, and management of the system through such activities as recruiting, organizing, equipping, supplying, and training, as assigned by the Secretary of the Army. As the Commanding General of the Army Medical Command, the Surgeon General leads five regional medical commands and their fixed military treatment facilities, and other Army Medical Department agencies. 
The Navy Surgeon General serves as the Director of Naval Medicine and is the Chief of the Navy Bureau of Medicine and Surgery. As the director of Naval Medicine, the Surgeon General is the principal advisor to the Chief of Naval Operations on health care service programs for the Department of the Navy, and develops and issues health care policies and directions. As the chief of the Navy Bureau of Medicine and Surgery, the Surgeon General oversees the delivery of health care in the Navy and Marine Corps and commands the Navy shore medical facilities. The Air Force Surgeon General is that service’s most senior medical officer and head of the Air Force Medical Service. The Air Force Surgeon General is responsible for guidance, direction, and oversight for all matters pertaining to the formulation, review, and execution of plans, policies, programs, and budgets related to carrying out the mission of the Air Force Medical Service to provide for the health care of Air Force personnel and their families. The service medical components contribute to the Military Health System missions by operating military treatment facilities throughout the United States and the world. These facilities consist of 59 hospitals capable of providing diagnostic, therapeutic, and inpatient care, as well as hundreds of clinics that primarily handle health screenings and ambulatory care. The Army, Navy, and Air Force staff their military treatment facilities with active duty, reserve, and civilian personnel. Contractors also play a role in the execution of the Military Health System mission by providing medical, clinical, and administrative staff and support services within both the military treatment facilities and the network of private hospitals and providers in the community. Reliance on contractors in the medical community varies by location and need. 
DOD is not required by law to include the number of medical contractors it employs in its annual Defense Manpower Requirements Report; therefore, the number of medical contractors on board at any point in time is not readily available. DOD’s medical force comprises approximately 228,000 personnel, including about 116,000 active duty personnel, 67,000 reserve component personnel, and 45,000 civilians. As seen in figure 1, the distribution of the medical workforce is fairly proportional to the distribution of the total workforce for each of the three services. Although the personnel distribution varies by service, collectively the active duty and reserve workforces make up approximately 80 percent of the medical force, with the active duty comprising about 51 percent and the reserves 29 percent. Civilians comprise 20 percent of the medical workforce. In providing technical comments to a draft of this report, DOD noted that among the military services, the Army has the highest percentage of civilians. For example, within the Army Medical Command, 58 percent of its fiscal year 2011 medical workforce is projected to consist of Army civilians. According to the 2007 Military Health System Human Capital Strategic Plan, the medical workforce comprises several specialty medical corps, including Medical, Dental, Nurse, Medical Service, Medical Specialist, Biomedical Sciences, Veterinary, Warrant Officers, Medical Enlisted, and Dental Enlisted. This plan also states that the largest corps is the active duty Medical Enlisted Corps, which consists of about 75,000 individuals and makes up about 65 percent of DOD’s active duty medical force. Figure 2 represents the distribution of active duty medical personnel by specialty. A more detailed breakout of each of the services’ medical specialty personnel levels is presented in appendix II. 
That appendix shows, for fiscal year 2009, how each of the services allocated its positions within each of its medical specialties based on identified needs, financial resources, and personnel availability. While DOD has emphasized jointness and undertaken joint initiatives across the department, the extent to which the services have incorporated cross-service collaboration in their planning efforts for determining their medical personnel requirements has been limited. The 2007 Military Health System Human Capital Strategic Plan 2008-2013 emphasizes the importance of planning, coordinating, collaborating, and developing human capital solutions across the services to enable departmentwide decision making. Additionally, a DOD directive requires developing plans and procedures and pursuing common and cross-cutting modeling tools and data. Furthermore, DOD is moving toward having joint medical regions in which DOD-operated medical treatment facilities are staffed using personnel from across the services, such as the consolidation of the military treatment facilities in the Washington, D.C., area. Also, DOD established a cross-service, baseline medical manpower standard for mental health providers, which was released in January 2010. While these efforts represent progress by the services in working collaboratively, the services have encountered challenges in their implementation. Issued in November 2007, DOD’s medical personnel strategic plan—the Military Health System Human Capital Strategic Plan 2008-2013—emphasizes coordination and collaboration across the services. This plan sets forth a vision, guiding principles, goals, and objectives for the management of the Military Health System’s medical personnel. The strategic plan articulates a vision of an interoperable and agile total medical force that meets the missions defined by National Security Strategy requirements. 
Emphasized throughout this strategic plan is the premise that the mission of the Military Health System can be better met by increasing emphasis on planning, coordinating, collaborating, and developing human capital solutions across the services. More specifically, this strategic plan states that the Military Health System cannot continue to recruit, develop, train, reward, and retain its workforce solely through each service independently, as mission requirements demand that they work together to achieve interoperability and agility. The 2007 Military Health System Human Capital Strategic Plan also aligns with critical areas on medical transformation initially presented in the April 2006 Quadrennial Defense Review Roadmap for Medical Transformation, which encouraged the Military Health System to create standardized processes, tools, and resources to improve efficiency and eliminate redundancies across the services. This goal is reiterated by a specific DOD directive requiring the services to maximize commonality, reuse, interoperability, efficiencies, and effectiveness of component-specific modeling data and tools. The Military Health System Strategic Plan is also cited in the 2010 Quadrennial Defense Review, which generally observes that DOD needs to reform the way in which it does business to address challenges—such as parochial interests and sometimes adversarial relationships within the Pentagon and with other parts of government—that are hindering its success. To eliminate redundancies in medical operations, integrate services, and achieve better economies of scale, DOD is implementing a joint medical effort in the National Capital Region of Washington, D.C., known as Joint Task Force National Capital Region Medical. 
This effort stems from a 2005 Base Realignment and Closure (BRAC) Commission recommendation to relocate patient care activities from the Walter Reed Army Medical Center in Washington, D.C., to the National Naval Medical Center, Bethesda, Maryland, and to a new community hospital at Fort Belvoir, Virginia. The BRAC Commission presented its list of final recommendations to the President of the United States, which included a cost/savings estimate for this joint medical effort. The President approved the recommendations in their entirety and subsequently forwarded them to Congress, and they became effective in November 2005. Our analysis of DOD’s fiscal year 2010 BRAC budget showed that the cost to implement this realignment is estimated to be $2.4 billion, consisting primarily of $1.7 billion in construction costs. That analysis also showed that DOD projects the net annual recurring savings of this effort to be $172 million. In September 2007, the Deputy Secretary of Defense issued a memorandum that formally established Joint Task Force National Capital Region Medical. One of its two facilities, the new Walter Reed National Military Medical Center, will be located on the Bethesda campus, and according to the Deputy Secretary of Defense, is expected to deliver effective and efficient, world-class military health care, as well as consolidate and realign military health care in the region. Its medical services will include primary care, secondary care (that is, care provided by a consulting physician at the request of a primary physician), and tertiary care (that is, very specialized care performed by physicians with facilities and skills for special investigation and medical treatment). DOD plans to close the current Walter Reed Army Medical Center facility by September 2011. The second facility, at Fort Belvoir, Virginia, is being expanded to provide comprehensive primary and secondary patient care services. 
Joint Task Force National Capital Region Medical’s vision, mission, and principles include as a key priority the establishment of common standards and processes, and call for interoperability. According to a statement in the 2010 Comprehensive Master Plan for the National Capital Region Medical, this medical realignment represents a merger of nearly 10,000 healthcare and support staff. The document also states that the department has determined an active duty personnel distribution between the new Walter Reed National Military Medical Center in Bethesda and the Fort Belvoir Community Hospital, and that the services have identified the resources to meet the manning requirements. Joint Task Force National Capital Region Medical, which reached fully operational capability status on September 30, 2008, represents an important initiative within the Military Health System because, if successful, Joint Task Force officials believe it will be a model for the future of military medicine. Officials also noted to us that this joint medical effort in Washington, D.C., is a new process and that Joint Task Force officials are working with the services through the details of achieving joint medical commands in the National Capital Region. Officials, however, have faced challenges in consolidating and realigning the medical manpower portion of this newly formed joint medical effort within the National Capital Region. Additionally, according to officials we spoke with, several assumptions used throughout the development of the joint manning document—that (1) the population served would remain static from 2004, (2) the clinical workload to be met would be based on that of 2004, and (3) the 2004 medical missions would remain constant—have become outdated. 
According to officials, the military treatment facilities in the National Capital Region have seen a significant increase in their clinical workload over 2004 levels as a result of injuries sustained by servicemembers following the acceleration in overseas operations in Iraq that was announced in 2007. Further, they said these injuries entail additional medical missions that the Joint Task Force officials have not been able to fully incorporate into the clinical workload or the personnel requirements determination. Such additional missions include an increased need for advanced limb and wound care, and traumatic brain injury care. Also, in order to develop the joint manning document for the newly formed and jointly staffed facilities, officials had to fuse the results of the services’ dissimilar medical personnel requirements determination processes. In doing so, they found that the services’ official manning documents contained inaccuracies. Several civilian and military Joint Task Force officials, who analyzed manpower documents to determine the levels of medical personnel currently on board for each service, told us that the services had employed civilian and contract personnel at their facilities but not recorded them on the manpower documents upon which these officials based the development of the joint manning document. For these various reasons, Joint Task Force officials have encountered significant challenges in developing an accurate, complete, and realistic joint manning document that lays out the medical requirements by specialty for the newly formed joint facilities. DOD officials attribute the problems to formative, early-stage development issues, and acknowledged that, if service manpower determination processes had used similar language, nomenclatures, and approaches, the creation of the joint manning document would have been a more straightforward process. 
Officials also told us, however, that while the collaboration encountered to date has been challenging, it has been beneficial in building the relationship among the medical components and operational components of the services. These officials stated that with continued collaboration among the services and future operational experience, the Joint Task Force’s leadership intends to identify data-driven refinements to projected manpower requirements that would better capture efficiencies, enhance service quality, and build on selected strategic interests. A second joint medical personnel effort, quite different from that of the realignment previously described, is DOD and the services’ ongoing development and implementation of a cross-service medical manpower standard known as the Psychological Health Risk-Adjusted Model for Staffing (PHRAMS). PHRAMS represents the culmination of a collaborative manpower requirements effort to develop a standardized, more consistent approach across the services for determining mental health personnel requirements. Health Affairs sponsored the development of the cross-service PHRAMS manpower standard to address the growth in demand for mental health services, as well as to give the services a standard by which to develop mental health requirements needed to meet the common, day-to-day psychological health needs of eligible beneficiaries across the services. The model projects mental health medical requirements over a 5-year planning horizon and provides a gap analysis for the first year, in order to assist the services in addressing near-term personnel shortages. It also provides a consistent staffing standard containing several fixed parameters, such as the size of the beneficiary population and utilization rates, which Health Affairs will re-evaluate annually when the model is updated. 
Finally, the model contains variables that can change at the services’ discretion, such as the number of patients seen annually by a provider and an adjustment rate to reflect increased deployments for servicemembers in the hospital’s area of responsibility. Health Affairs released the final model to the services in January 2010. Currently, the Army, Navy, and Air Force are using PHRAMS to generate mental health staffing requirements at their military treatment facilities that are to be incorporated into the fiscal year 2012 budget submission later this year. Because the model was only recently released to the services, the effect of its implementation on cost savings or requirement numbers is still unknown. Additionally, Health Affairs officials said that the services will continue to assess potential applications of PHRAMS. While the services are not specifically required to use PHRAMS or to develop more models, Health Affairs officials told us that the publishing of the Military Health System Human Capital Strategic Plan has encouraged dialogue among the services on collaboration, and such dialogue may facilitate the identification of further opportunities for development of manpower requirements models. To the extent that PHRAMS represents a positive collaborative initiative, to date it is the only model of its kind. The services are responsible for organizing, equipping, and training their respective forces, and service officials assert that their respective needs are sufficiently different to warrant maintaining service-unique processes for requirement determination. While each of the services has unique operational medical capabilities, such as Army veterinary medicine, Navy undersea medicine, and Air Force aerospace medicine, the day-to-day operations at military medical treatment facilities are very similar across the services, and they could advantageously be more collaboratively managed. 
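The basic staffing arithmetic that a population- and utilization-based standard of this kind embodies can be sketched simply: expected annual patient demand (population times utilization rate) divided by per-provider capacity, with an adjustment for high-deployment areas. The function name, parameter names, and figures below are illustrative assumptions for exposition only, not PHRAMS’s actual specification or values.

```python
def required_providers(beneficiary_population,
                       annual_utilization_rate,
                       patients_per_provider_year,
                       deployment_adjustment=1.0):
    """Hypothetical staffing calculation: expected annual patient
    demand divided by per-provider capacity, with an optional
    multiplier for areas experiencing increased deployments.
    All names and values here are illustrative, not DOD's."""
    expected_patients = beneficiary_population * annual_utilization_rate
    providers = expected_patients / patients_per_provider_year
    return providers * deployment_adjustment

# Illustrative example: 40,000 eligible beneficiaries, 15 percent
# seeking mental health care annually, 600 patients per provider
# per year, and a 10 percent deployment-related uplift.
need = required_providers(40_000, 0.15, 600, deployment_adjustment=1.1)
print(round(need))  # 11
```

In this sketch, the population and utilization rate play the role of the fixed parameters the report describes, while the per-provider patient load and deployment adjustment correspond to the variables left to the services’ discretion.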
A DOD directive requires the respective heads of the services to maximize the commonality, reuse, interoperability, efficiencies, and effectiveness of component-specific modeling data and tools, but Health Affairs officials said that no collaboration efforts for determining medical personnel requirements or developing medical manpower standards, other than PHRAMS, are currently under way. Committed and effective leadership is a critical aspect of enhancing collaboration. Committed leadership by those involved in collaborative efforts from all levels of the organization is needed to overcome the many barriers to working across boundaries. Key organizational issues, like strategic workforce planning, are most likely to succeed if, at their outset, top program and human capital leaders set the direction, pace, and tone and provide a clear, consistent rationale for the transformation. With leadership emphasis and expectations that the services will continue to explore opportunities to develop cross-service medical manpower standards, such as PHRAMS, and consistent management focus on collaboration within DOD’s Military Health System, the services will have more opportunities to develop collaborative workforce planning efforts for common medical capabilities that they share throughout their military treatment facilities—an approach that is consistent with the Military Health System Human Capital Strategic Plan’s vision of a more integrated approach across service lines. While a need exists for the services to work more collaboratively to determine their medical personnel requirements, the services also maintain processes to address service-specific needs. In accordance with a DOD directive, personnel requirements are to be established according to workload at the minimum levels necessary to accomplish mission and performance objectives. 
Additionally, a DOD instruction calls for the models and associated data used to support DOD processes and decisions to be validated and verified throughout their life cycles, and accredited for the model’s intended purpose. While all of the services are currently taking steps to update and refine their medical personnel requirement processes, these processes are not yet fully validated or verifiable. Further, the services do not centrally manage their civilian medical personnel requirements. The Army uses its Automated Staffing Assessment Model to determine manpower requirements for Army fixed military treatment facilities and other Army Medical Command organizations. This model is based primarily on approved population and workload data, but it also incorporates industry performance data to determine manpower requirements for the various medical specialties. The Automated Staffing Assessment Model consists of over 240 modules for determining essential medical requirements for many medical specialties, such as physicians, nurses, dentists, medical service corps, and veterinarians, at the work center level across Army fixed military treatment facilities. The model uses the current population of the various military treatment facilities as the major determinant of the number of medical personnel needed at each facility. In addition, a number of key, workload-based assumptions inform the model, including patient care hours, population projections, provider-to-patient ratio, and provider-to-support technician ratio. However, in certain cases, our analyses of selected modules revealed areas that need improvement. For example, our analyses of the inpatient nursing and dental modules revealed the use of some obsolete assumptions. Specifically, we found that the Army’s nursing requirements module had not been updated or used since 2005 to determine nursing requirements. 
Further, according to dental command officials, the dental module in use is an Army legacy model that is over 40 years old and does not reflect the more advanced level of dental care currently being provided, such as the increased need for complex dental repair work rather than simple extractions. DOD noted in technical comments on a draft of this report that the nursing and dental modules were recently updated and submitted for validation. According to Army officials, updates to Army medical manpower models are subject to a review process by the U.S. Army Manpower Analysis Agency, and to final approval by the Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs. A module can be approved for 3 years if it is determined to be logical, analytical, verifiable, and based on accurate data sources. However, if a module is based solely on data provided by subject matter experts and functional estimates of the primary tasks associated with the specialty, the module will be approved for 1 year—as is the case for the recently validated veterinary specialty module. According to Army officials, prior to 2008, the Army required a random sample of 2 percent of the requirements modules to be validated for reasonableness; however, it currently uses a more stringent approach that requires all modules to be validated. Army documents show that the Army’s manpower analysis agency completed validation of 4 of the 240 modules in 2009 and 2 more so far in 2010. In addition, 12 more modules have either been submitted for review and approval or are nearing submission. In technical comments to a draft of this report, DOD noted that the Army believes the number of requirements covered by its staffing assessment model is more important than the number of modules that we have discussed. 
As such, the Army noted that nearly 20 percent of its medical personnel requirements have been updated and that about another 20 percent of its requirements have been submitted for validation but are pending approval. Moreover, Army Medical Command officials have been working with representatives from the Army Manpower Analysis Agency to develop a specific time line and priorities for validation of the remaining modules, but no definitive schedule has yet been set for completing the validation. Army officials recognized that the approach to model validation that they had been using, including its previous reliance on sampling methods, was not providing the Army with complete and sufficient information. With committed and sustained leadership emphasis to complete and maintain the validation of all the modules, the Army will be in a better position to be certain it is determining its medical personnel requirements in an effective and efficient manner. The Navy has not used a model to determine the medical personnel requirements for its fixed military treatment facilities. Instead, Navy officials explained that the Navy’s process is to use current manning as a baseline and adjust the figure based on emerging needs or major changes in its medical mission. Additionally, Navy officials explained that local military treatment facility commanders prepare annual business plans for their medical facilities and include proposed changes to the facilities’ personnel requirements based on such information as enrolled population, utilization rates, and expert functional knowledge at the military treatment facility. These business case analyses are then submitted and reviewed through the chain of command and approved by the Navy Surgeon General as medical resources allow. While the Navy routinely employs this approach to determine its medical personnel requirements, it is not a validated or verified methodology as required by DOD guidance. 
To better assess its medical personnel requirement needs at the medical specialty level, the Navy is beginning to develop medical manpower standards, which officials indicate will be used as the basis for future requirements determination. According to Navy officials, they plan to use the Navy Medicine Benchmark Model for its 93 medical functional areas. Because this model will determine the benchmark for the number of personnel needed in a medical specialty at a military treatment facility, it will be used to identify surpluses or shortages in personnel at each facility and to identify the optimal military, civilian, and contractor mix. DOD noted in technical comments responding to a draft of this report that the Navy Bureau of Medicine and Surgery Headquarters is the approval authority for determining whether a medical personnel requirements model or process is valid and verifiable. Navy medical officials explained that they are still in the process of determining the model’s validity for each of its medical specialty areas, and they did not provide a time schedule as to when this would be completed. Although the Navy is implementing this model to help determine its medical personnel requirements, Navy officials asserted to us that the Navy does not have any unmet requirements, as it uses private-sector medical care when military treatment facilities are unable to provide the care. Navy officials recognized that the business case analysis process did not provide the validated and verifiable approach needed to determine their medical manpower requirements. With committed and sustained leadership emphasis to implement and maintain a fully validated benchmark model, the Navy similarly will be in a better position to be certain it is determining its medical personnel requirements in an effective and efficient manner. 
In 2002, the Air Force Surgeon General collaborated with the private sector to design the Product Line Analysis and Transformation Tool, which produced medical manpower staffing models utilizing industry standards and research and the experiences of Air Force medical personnel. While the models were presented in 2003 for validation and approval, Air Force leadership did not approve these models for determining manpower standards for its medical specialties because the models were not based on objectively quantifiable data sources. Although the Air Force considered any medical requirements developed using the models to be unverifiable, it allowed Air Force medical officials to continue to use the models as part of the requirements determination process. Currently, Air Force medical officials use, in addition to the models, historical workload, historical and like-size facility manning, industry models, functional models, and statistical analysis of variance by facility to generate their medical personnel requirements. The current requirements development process can be performed using either a top-down or a bottom-up approach. The top-down approach begins with Air Force leadership, usually at the rank of general, determining that a military treatment facility has a need for new requirements. The bottom-up approach occurs when officials at a military treatment facility identify a need for a new requirement and then work through the major commands to change or alter the facility’s current requirements. The major commands then work with the Air Force Medical Operations Agency to bring a request for new or changed requirements to the Air Force Surgeon General. The new or changed requirements undergo a vetting process that ranges from the military treatment facility to the Chief of Staff before they are approved. Any changes to requirements are based on identified need as experts in functional areas obtain new data or refined standards. 
To establish the feasibility of providing a verifiable means of medical manpower standards development support to the Air Force medical community, the Air Force Medical Service and the Air Force Manpower Agency signed a Memorandum of Agreement whereby the Air Force Manpower Agency will develop new manpower standards for all Air Force medical specialties, based on data that have been collected for each. According to officials, this effort began in January 2010, and they hope to complete development of all of the manpower standards by 2015. In order to do so, the Air Force Manpower Agency is planning to hire 15 officials—10 civilians and 5 military—to research, develop, and validate the new manpower standards. This effort will include such tasks as developing the data collection approach, performing the analysis on all of the data, developing the manpower models, and identifying process improvement opportunities. Air Force officials recognized that their recent efforts to develop medical manpower standards stem from the Air Force’s need for a validated and verifiable manpower requirements determination process. With committed and sustained leadership emphasis on maintaining validated medical manpower models, the Air Force, like the other services, would be better positioned to know its true medical needs by medical specialty and to be certain it is determining its medical personnel requirements in a more effective and efficient manner. DOD’s efforts to determine its medical personnel requirements at military treatment facilities are further limited by the fact that the services have not fully incorporated into their requirements processes the use of civilians who deliver health care at the same stage in the process where they determine their military medical personnel requirements. 
A DOD directive requires that, for areas employing both military and civilian personnel, manpower requirements shall be determined in total and designated as either military or civilian, but not both, as an active, reserve, or civilian determination must be made for each requirement. The Military Health System Human Capital Strategic Plan also asserts that more efforts should be made to have the optimal mix of medical personnel. However, while civilian personnel constitute about 20 percent of the services’ medical workforce, the services’ current requirements processes are generic in nature and do not differentiate positions as military or civilian. We found that all three services first determine their collective requirements. Then, at the local level, after all of the positions at a military treatment facility are staffed with the available military personnel, the commander of the local military treatment facility determines whether a position will be designated as civilian or contractor. In making determinations to use civilian personnel, local commanders use several factors, such as whether the position is military essential—to support readiness or operational missions—or inherently governmental—which would require that the position be filled with a government employee. Additionally, commanders consider financial resources and the availability of civilian or contractor personnel in the local area. In technical comments provided in response to a draft of this report, DOD officials disagreed with our statement that the services do not centrally account for civilian personnel requirements. DOD noted that workload generated by civilians is captured and depicted in a centralized information management system. However, based on the explanation of this system given by DOD, we note that this system captures the number of civilian personnel already on board and the areas in which they are employed. 
It does not identify the number of civilian personnel required by each service to meet the missions of fixed military treatment facilities, nor does it centrally account for civilian personnel requirements. In addition, several military treatment facility personnel told us that more direction or centralized guidance would aid them, in many cases, in their management of their civilian personnel. DOD’s 2009 update to its Civilian Human Capital Strategic Plan lists global civilian end strength numbers for five mission critical medical occupational series—medical officers, nurses, pharmacists, clinical psychologists, and licensed clinical social workers. This update also gives projected accession and recruiting goals needed to reach those global end strength numbers. However, the update does not project any civilian end strength numbers at the medical specialty levels within these occupational series, nor does it indicate the military treatment facilities at which these civilians are needed. If the services do not identify civilian personnel requirements for military treatment facilities in the overall requirements planning process, the services may be missing the opportunity to make a strategic determination of how many medical professionals—military or civilian—are needed in total to carry out their expected missions and workloads. The services assume added risk if their medical requirements are not completely met, and if the requirements are unknown, the extent of that risk cannot be estimated. If risk is unknown, the services cannot develop appropriate risk-mitigation strategies for their unmet medical personnel requirements. To achieve a military health system that can respond to our country’s changing national security needs by using both the right numbers and the right mix of forces, DOD has emphasized the need for collaboration of efforts in the medical arena, and committed and sustained leadership emphasis is key to successful collaboration. 
The efforts taken to date by OSD and the services to develop and implement specific cross-service manpower-related programs have been a step in the right direction for building a collaborative approach to determining military medical personnel requirements. As such, it is important that the services continue to focus on developing programs, solutions, and measures for managing medical personnel requirements across the services and focus on the long-term, broader picture. By doing so, OSD and the services will have more opportunities to create departmentwide benefits and will more fully support the Military Health System’s strategic planning goal of collaboration. Also, as the services work toward a joint approach, it is important for them to have sound medical personnel requirement determination processes in place, to enable them to identify the personnel numbers and mix they need to fully perform their medical missions. If the services are to effectively and efficiently provide daily care to active duty and retired servicemembers and their dependents in their fixed medical facilities, it is important that each of their medical personnel requirement processes is current, validated, and verifiable. Areas for improvement exist within the services’ medical requirements processes, and until these processes are up-to-date, fully validated, and verifiable, the services cannot be certain they are determining their medical personnel requirements in an effective and efficient manner. Consistent with DOD emphasis on developing human capital solutions across the services to enable departmentwide decision making and analyses within its Military Health System, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs and the Service Secretaries to take the following two actions. 
- Identify the common medical capabilities that are shared across the services in their military treatment facilities that would benefit from the development of cross-service medical manpower standards; and
- Where applicable, develop and implement cross-service medical manpower standards for those common medical capabilities.

To improve the Army’s current medical personnel requirements determination process, we recommend that the Secretary of the Army direct the Army Surgeon General to take the following three actions.

- Update assumptions and other key data elements contained within specialty modules of the Automated Staffing Assessment Model;
- Develop and implement a definitive revalidation schedule for the specialty modules of the Automated Staffing Assessment Model; and
- Include its reliance on civilian medical personnel in its assumptions as it updates and validates its medical personnel requirements determination modules.

To improve the Navy’s current medical personnel requirements determination process, we recommend that the Secretary of the Navy direct the Navy Surgeon General to take the following two actions.

- Develop a validated and verifiable process to determine its medical manpower requirements; and
- Include its reliance on civilian medical personnel in its assumptions as it develops, and then validates, its medical personnel requirements determination model.

To improve the Air Force’s current medical personnel requirements determination process, we recommend that the Secretary of the Air Force direct the Air Force Surgeon General to take the following two actions.

- Develop a validated and verifiable process to determine its medical manpower requirements; and
- Include its reliance on civilian medical personnel in its assumptions as it develops, and then validates, its medical personnel requirements determination model.

In written comments provided in response to a draft of this report, DOD concurred or partially concurred with all of our recommendations. 
DOD’s written comments are reprinted in appendix III of this report. Additionally, DOD provided technical comments that we have incorporated where appropriate. In concurring with our recommendations regarding identifying, developing, and implementing cross-service medical manpower standards for medical capabilities that are shared across the services, DOD noted that a cost-benefit analysis must precede a review of shared capabilities to ensure that there is a significant, measurable benefit in cost, quality, or access to medical care before department medical funds are expended. We agree that this course of action would constitute a reasonable part of a process to identify which specialties would benefit from such efforts. In concurring with our recommendations to improve the Army’s current medical personnel requirements determination process by updating assumptions, developing and implementing a revalidation schedule, and including its reliance on civilian medical personnel in its assumptions, DOD stated that the Army will continue to update assumptions and other key data elements within the Army Automated Staffing Assessment Model as our recommendation suggested and will closely coordinate efforts between Army Medical Command and the U.S. Army Manpower Analysis Agency to implement a revalidation schedule for the medical personnel requirements determination models. DOD further noted in its response to a draft of this report that the Army will continue to capture the civilian contribution to the generation of medical workload in its Automated Staffing Assessment Model, and that 58 percent of Army Medical Command’s workforce is civilian. Although we believe the Army’s efforts to capture the civilian contribution are important to understanding its workforce, the intent of our recommendation is for the Army to better delineate military versus civilian personnel requirements during the requirements determination process as called for in DOD Directive 1100.4. 
In its partial concurrence with our recommendations for the Navy to develop a validated and verifiable process to determine its medical manpower requirements and to include its reliance on civilian medical personnel in its assumptions, DOD noted that the Navy initiated a comprehensive effort to redefine how medical manpower requirements are determined, the results of which are expected by fall 2010. We note this effort in our report, and it is in line with the intent of our recommendation, but we continue to assert the need for this effort to be completed. Further, DOD noted that the Navy Surgeon General has always taken, and will continue to take, a total force approach in planning and programming for medical personnel. We note, however, that while we recognize the value of such an approach, our recommendation concerns, as with the Army, the need for the Navy to delineate military versus civilian personnel requirements during the requirements determination process as called for in DOD Directive 1100.4. In concurring with our recommendations that the Air Force Surgeon General develop a validated and verifiable process to determine medical manpower requirements and include its reliance on civilian medical personnel in its assumptions, DOD noted that the Air Force is in the process of developing new manpower standards for its medical specialties, having finalized a Memorandum of Agreement between the Air Force Medical Service and the Air Force Manpower Agency in May 2010. We note the potential of this effort as a strong step toward fulfilling this recommendation. Further, DOD noted that the new Air Force manpower standards will include the identification of civilian equivalents for those positions not deemed military essential, and that civilian requirements are also reviewed and determined through the Inherently Governmental / Commercial Activity process. 
We agree that the Air Force’s new medical requirements determination standards, which include civilians, have the potential to address the intent of our recommendation. The Inherently Governmental / Commercial Activity process, however, does not completely address the need to delineate military versus civilian personnel requirements during the requirements determination process as our recommendation suggests and as called for in DOD Directive 1100.4. Additionally, one of DOD’s technical comments concerns our recommendations regarding the services’ need to include their reliance on civilian medical personnel in their assumptions when developing and validating their medical personnel requirements determination models. In this technical comment, DOD suggested that we delete the section of our report headed by the statement “The Services Do Not Centrally Account for Civilian Personnel Requirements.” DOD noted that all three services use a reporting system that captures and depicts workload generated by civilians in a centralized information management system. However, we note that the workload generated by civilians constitutes an after-the-fact status of assignments rather than a consideration in generating the requirements before these civilians are assigned to fill a requirement. Thus, we continue to believe that this heading accurately reflects our findings in this area. Finally, DOD provided in its technical comments to a draft of this report a table that it believes illustrates recent collaborative efforts. Two of the six examples—Psychological Health Risk-Adjusted Model for Staffing and Joint Task Force National Capital Region Medical—are discussed extensively in this report. 
DOD noted four additional examples of recent collaborative efforts, such as proposed legislation for financial assistance to provide scholarships to civilian medical providers. We did not include these examples in our report because we believe they are not directly related to the development of cross-service manpower standards or medical personnel requirements, which is the focus of this report. We have, however, reprinted DOD’s table in appendix III. We are sending copies of this report to the Secretary of Defense and the Secretaries of the Army, Navy, and Air Force. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on the information discussed in this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This engagement examines the processes used by the military services to determine their medical personnel requirements for staffing, to include the number and specialty mix of military and civilian employees, at fixed medical treatment facilities. We interviewed officials and, where appropriate, obtained documentation at the following locations:

- Office of the Assistant Secretary of Defense for Health Affairs;
- Army Medical Command, San Antonio, Texas;
- United States Army Manpower Analysis Agency, Fort Belvoir, Virginia;
- Brooke Army Medical Center, San Antonio, Texas;
- Navy Bureau of Medicine and Surgery, Washington, D.C.;
- Navy Medical Support Group, Jacksonville, Florida;
- Naval Medical Center Portsmouth, Portsmouth, Virginia;
- Air Force Medical Service, Washington, D.C.;
- Air Force Manpower Agency, San Antonio, Texas; and
- 12th Medical Group—Randolph Air Force Base Clinic, San Antonio, Texas. 
To evaluate the extent to which the services have collaborated in their strategic planning efforts for the determination of their medical personnel requirements, we reviewed manpower, personnel, and Military Health System policies and plans for the Department of Defense and the services. Especially pertinent were Department of Defense Directive 5000.59, on Modeling and Simulation Management, and the Military Health System Human Capital Strategic Plan for Fiscal Years 2008-2013. We compared the guidance, goals, and strategies in those documents with the ongoing medical personnel requirements determination processes used by the services, which we determined by analyzing documentation and interviewing officials from each of the locations listed. We also analyzed documentation and interviewed officials from Joint Task Force National Capital Region Medical and the San Antonio Military Medical Center to learn about joint medical operations that are being developed and implemented. Further, we met with officials from the Center for Naval Analyses who are currently working under a contract with the Office of the Assistant Secretary of Defense for Health Affairs to develop a cross-service medical manpower standard for behavioral health specialties known as the Psychological Health Risk-Adjusted Model for Staffing. To evaluate the service-specific processes for determining their requirements for military and civilian medical personnel, we reviewed documentation provided to us by officials, whom we then interviewed, from each of the offices previously cited. We obtained and reviewed the Army’s Automated Staffing Assessment Model for four medical specialties: physicians, dentists, nurses, and mental health care. We interviewed agency officials who operate the models for each of these specialties to understand how these models are used, how accurate the data are, and whether the models had been validated by the Army’s Manpower Analysis Agency. 
We additionally interviewed officials from the Navy Bureau of Medicine and Surgery and the Air Force Medical Service regarding the processes they use to determine their medical manpower requirements. We also collected data on medical personnel requirements, authorized positions, and end strengths for fiscal year 2009 from each of the services’ medical departments and from the Defense Manpower Data Center’s Health Manpower Statistics Report. The Army is the only service that provided service-specific data, while the Air Force and Navy deferred to the Defense Manpower Data Center’s Health Manpower Statistics Report. We coordinated our analysis and our results with a methodologist from GAO’s Applied Research and Methods team. Additionally, with guidance from the methodologist, we evaluated the reliability of the data we obtained and found them sufficiently reliable for the purposes of this audit. We conducted this performance audit from August 2009 through July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following data show the results of service-specific medical personnel requirement processes (where available) in comparison with funded and filled positions. In addition to the individual named above, David Moser (Assistant Director), Rebecca Beale, Chaneé Gaskin, Randy Neice, Cheryl Weissman, Michael Willems, and Elizabeth Wood made key contributions to this report.
Military medical personnel, who are essential to maintaining one of the largest and most complex health systems in the nation, are in great demand due to the need to treat injured or ill servicemembers and to advances in technology that require specialized personnel. To determine how well the Department of Defense (DOD) and the services are developing their medical and dental personnel requirements, GAO evaluated (1) the extent to which the services have incorporated cross-service collaboration in their medical personnel requirement processes, and (2) the service-specific processes for determining their requirements for military and civilian medical personnel. To conduct this review, GAO evaluated manpower policies, analyzed the services' requirements data and determination processes, and interviewed officials from the Office of the Secretary of Defense (OSD) and each of the services. While DOD's 2007 Military Health System Human Capital Strategic Plan emphasizes developing human capital solutions across the services to enable departmentwide decision making and analyses, the services' collaborative planning efforts regarding requirements determination for medical personnel working in fixed military treatment facilities have been limited. In one effort to integrate operations, DOD is consolidating medical facilities in the Washington, D.C., area under a joint task force that calls for joint staffing of the military treatment facilities in the region. However, officials have faced challenges in developing the manpower requirements for the joint facilities due to the use of outdated planning assumptions. Separately, OSD sponsored another joint medical effort to develop a cross-service medical manpower standard for mental health personnel. This standard is being used to determine the number of personnel needed to meet common, day-to-day psychological health needs of eligible beneficiaries across the services. 
However, to date, this standard is the only one of its kind, and OSD officials said that no other similar efforts currently exist. The services' continued focus on separate medical personnel requirements processes may not be consistent with the DOD strategic plan's vision of a more integrated approach, and the services may have missed opportunities to collaborate and develop cross-service manpower standards for common medical capabilities that are shared across military treatment facilities. Sustained and committed leadership emphasis on developing more effective ways of doing business, such as the use of cross-service medical manpower standards, is key to successful, collaborative human capital strategic planning. To the extent that the services need to maintain separate processes, GAO also found that their requirements processes are not, in all cases, validated and verifiable, as DOD policy requires. Selected specialty modules in the Army's model contain some outdated assumptions, such as those concerning the level of care currently being provided, and only a portion of the modules have been completely validated. While the Navy has employed an approach that uses current manning as a baseline and adjusts its requirements based on emerging needs or major changes to missions, the approach is not validated or verified as required by DOD guidance. The Air Force said it may not know its true medical requirements because the model it has relied on is also not currently validated or verified. Each of the services has recognized the need to have processes that can be validated and verified, and has taken steps to address these issues in recent years. However, without processes that are validated and verifiable, the services cannot be certain they are determining their medical personnel requirements in the most effective and efficient manner. Also, the services do not centrally manage their processes for their civilian medical personnel requirements. 
While local commanders determine these requirements, the services may be missing the opportunity to make a strategic determination of how many civilian medical professionals are needed to carry out their expected workloads. GAO recommends that OSD and the services emphasize a long-term joint approach to medical personnel requirements determination by identifying the common medical capabilities shared across the services and developing cross-service medical manpower standards, where applicable; and that the services take actions to improve their respective medical requirements determination processes. In written comments to a draft of this report, DOD generally concurred with these recommendations.
A dominant theme of the commercial airline industry in the United States and the EU in the past 2 decades has been one of decreased government economic regulation. This development began in the United States with passage of the Airline Deregulation Act of 1978, phasing out federal regulation of rates, routes, and services for domestic airlines. EU aviation deregulation began in 1987 and led to the creation of a single European aviation market. In 1993, the EU efforts mirrored U.S. deregulation by removing all government restrictions on routes, fares, and capacity, as well as barriers to cross-border investment of European airlines. By 1997, the EU removed the final operating restriction by allowing cabotage within the EU. Deregulation has allowed substantial growth in both U.S. and EU airline operations and passenger traffic, with consumers on both sides of the Atlantic benefiting from decreased fares and increased service. As airline operations and passenger traffic grew, U.S. and EU aviation industry employment increased as well (see table 1). For many decades, international air service has been governed by aviation agreements that are based on the principle that nations have sovereignty over their airspace. This sovereignty is defined by nine “freedoms of the air” that have developed over time to outline possible aviation rights between countries. During a 1944 international civil aviation convention in Chicago, the participating countries decided that international aviation would be governed by negotiated bilateral aviation agreements that specify “traffic rights,” such as the number of airlines that can operate between markets, the airports from and to which they operate, the number of flights that can be provided, and the fares that airlines could charge. These aviation rights, including the right to prevent foreign airlines from cabotage operations, have been the basis for international aviation. 
Under traditional bilateral agreements, air services can only be offered by airlines that are licensed and designated by the two countries that sign the agreement. To be licensed to provide commercial air services, an airline must meet various legal and regulatory requirements. Among these requirements are citizenship and control tests, which require that an airline be majority-owned and effectively controlled by citizens of the licensing country. In the United States, the airline must also meet economic fitness and safety requirements. EU law establishes a framework for the granting of airline licenses and air operator certificates, but all Community airlines licensed by EU member states in accordance with EU law are permitted to provide transport throughout the EU. The process by which countries indicate which airlines are authorized to provide service under the agreements is called “designation.” Designation has traditionally indicated that the country making the designations will ensure appropriate regulatory oversight. This responsibility extends to ensuring that the airline complies with international civil aviation safety and maintenance standards. Open Skies agreements are a particular kind of bilateral agreement. They remove the vast majority of restrictions on how airlines of the two countries signing the agreement (signatory countries) may operate between their respective territories. For example, they remove prohibitions on the routes that airlines of the signatory countries can fly, or the number of airlines that can fly them. These expanded operational rights represent significant alterations to the traditionally more restrictive bilateral agreements that specified service frequency, capacity, routing, and pricing. While they grant more rights to airlines of the signatory countries, Open Skies agreements, through the nationality clause, allow the U.S. government to block airlines of other countries from these rights. 
For example, while both Germany and France have Open Skies agreements with the United States, the German-based carrier Lufthansa is not permitted by either France or the United States to operate flights between France and the United States, without it being a continuation of a flight that originates in Germany. Yet according to DOT officials, if it is deemed “not inimical” to U.S. interests, DOT can waive the ownership and control requirements. For example, DOT officials stated that, under the multilateral Open Skies agreement signed with Brunei, Chile, New Zealand, and Singapore, it applied a more flexible definition of the nationality clause for nations covered by the agreement and focused on ensuring that the airlines covered by that agreement are “effectively controlled” by nations that signed the agreement. Table 2 summarizes some of the key differences between traditional bilateral agreements and Open Skies agreements. The U.S.–EU market grew from 28 million annual passengers in 1990 to over 51 million passengers by 2000, representing the most important international market for U.S. airlines. British Airways is the largest carrier in the U.S.–EU market, followed by American, Delta, and United Airlines (see fig. 1). Neither U.S. nor EU low-cost carriers currently offer transatlantic services. Consumers and airlines have benefited from Open Skies agreements that the United States has signed with 15 individual EU member nations. The number of such agreements has grown over time, although 10 EU member nations, including the largest U.S. aviation partner, the United Kingdom, still have more restrictive bilateral agreements or no agreement at all. Open Skies facilitated the formation of more integrated international alliances between U.S. and EU airlines, which allowed the airlines to expand their networks and provide competitive service for more passengers to more locations at cheaper fares. As a result, U.S. 
passengers have been able to pay less to reach most EU destinations, significantly increasing passenger traffic. Since signing the first Open Skies agreement with the Netherlands in 1992, the United States has entered into agreements with 15 of the 25 EU nations (see fig. 2). The United States signed nine of these agreements by 1996. Since then, the United States has signed Open Skies agreements with six EU member states: Italy, Malta, Poland, Slovakia, Portugal, and France. While the majority of EU member states have signed Open Skies agreements, 10 EU member states maintain bilateral agreements that are more restrictive than Open Skies agreements or have no aviation agreement with the United States. The United States does not have any aviation agreement with Cyprus, Estonia, Latvia, Lithuania, and Slovenia. EU member states that have traditional bilateral agreements include Greece, Ireland, Spain, Hungary, and the United Kingdom. For the five countries with bilateral agreements but without Open Skies, the types of restrictions vary from agreement to agreement. For example: The U.S.–Spain agreement does not permit U.S. airlines to code-share with any of their EU partners from intermediate points elsewhere in Europe. For example, United Airlines cannot place its code on any Lufthansa flight from Germany to Spain. The resulting “interline” service tends to be both more expensive and more inconvenient than code-shared service, placing U.S. airlines at a competitive disadvantage. The agreement with the United Kingdom, commonly referred to as Bermuda 2, restricts service between the United States and London’s Heathrow airport to two airlines from each country—at present, American and United from the United States, and British Airways and Virgin Atlantic from the United Kingdom. In addition, the agreement limits nonstop service into Heathrow by U.S. airlines to 12 specified U.S. cities. 
UK airlines can operate from Heathrow to 11 specific cities, plus other cities where there is no U.S. airline competitor. Despite these restrictions, London’s Heathrow airport (Heathrow) accounted for the highest percentage (over 20 percent) of U.S.–EU passengers of any European airport between 1990 and 2002. Open Skies agreements greatly changed how U.S. and EU airlines provide international service. The change centers on the alliances that various U.S. and EU airlines have formed with each other. Operating in an alliance allows an airline to greatly expand its service network, without having to increase the number of routes it flies using its own aircraft. In the simplest case, an international code-sharing alliance links the route network of one airline with the route network of another, forming an end-to-end alliance with little overlap (see fig. 3). In this way, alliances have allowed airlines to expand the number of markets that received “on-line” service between the U.S. and EU. Airline passengers prefer this type of “seamless” service, compared to interline service, because it allows the convenience of single ticketing and check-in, among other things. Alliances greatly increase the number of markets that can be served on-line because they connect locations that were otherwise served only by one of the alliance airlines. This concept, illustrated in figure 3, allows networks to serve “behind-and-beyond” markets. Transatlantic flight occurs between what are called “gateway” airports, such as Atlanta and Paris. A “behind” point is a location that feeds passenger traffic into the gateway airport on one side of the Atlantic, while “beyond” points are those destinations that can be reached once a passenger has traveled to the gateway airport on the other side of the Atlantic. For example, Kansas City, Missouri, and Berlin, Germany, constitute a “behind-and-beyond” market. 
Neither city has nonstop transatlantic service, so passengers from either destination must first fly to a gateway airport. A passenger originating a trip in Kansas City would have to take a flight into a gateway airport (such as Atlanta), connect to a transatlantic flight to an EU gateway (such as Paris), and then connect onto a flight to Berlin. Most major U.S. airlines that provide transatlantic service (American, Delta, United, Northwest, US Airways, and Continental) belong to international alliances with other airlines, including many from the EU. To more closely integrate scheduling and pricing, alliance partners may request that they be given immunity from national antitrust laws, which would otherwise prohibit potential competitors (i.e., the alliance partners) from coordinating pricing and services. DOT has granted antitrust immunity to most of the alliances that U.S. airlines have with EU airlines. Beginning with Northwest and KLM Royal Dutch Airlines (KLM) in 1993, DOT approved antitrust immunity for U.S. airlines with 18 international alliance partners. Yet not all alliances have received antitrust immunity. U.S. policy stipulates that only airlines from countries that have signed Open Skies agreements with the United States can receive antitrust immunity. The efforts to obtain antitrust immunity for an alliance between American and British Airways have twice failed, in part because the United States was unable to obtain an Open Skies agreement with the United Kingdom and the airlines were not willing to cede Heathrow slots as required by competition authorities. American and British Airways are limited in the number of markets in which they can code-share, and are not permitted to coordinate market scheduling and pricing in the same way as other airlines that do have antitrust immunity. (See appendix IV for summary information on the major international alliances.) 
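As an illustrative aside, the behind-and-beyond routing described above can be modeled as reachability in an alliance route network: a city pair receives on-line service if the destination can be reached within two connections over the alliance's combined segments. The airports and segments in the sketch below are hypothetical examples chosen to mirror the Kansas City-Berlin illustration, not data from this report.

```python
# Illustrative sketch only: hypothetical alliance segments, not data from
# this report. Models "behind-and-beyond" on-line service as reachability
# within a combined alliance route network.
routes = {
    ("MCI", "ATL"),  # Kansas City feeds the Atlanta gateway ("behind" point)
    ("ATL", "CDG"),  # transatlantic gateway-to-gateway segment
    ("CDG", "TXL"),  # Paris gateway onward to Berlin ("beyond" point)
    ("JFK", "LHR"),  # a separate nonstop transatlantic market
}
# Treat each segment as flyable in both directions within one alliance.
segments = routes | {(b, a) for a, b in routes}

def connection_level(origin, dest, max_connections=2):
    """Return 0 (nonstop), 1, or 2 for the fewest connections needed to
    reach dest on-line, or None if no service within two connections."""
    frontier = {origin}
    for connections in range(max_connections + 1):
        reachable = {b for a, b in segments if a in frontier}
        if dest in reachable:
            return connections
        frontier |= reachable
    return None
```

Under these assumed segments, Kansas City (MCI) to Berlin (TXL) is a double-connection on-line market, matching the two-stop itinerary described in the text, while New York (JFK) to Berlin has no on-line service at all because the two networks are not linked.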
Various studies have found that the alliances and expanded networks created since the first Open Skies agreements have produced significant benefits for consumers. Two studies conducted by DOT found that the development of alliances in transatlantic markets led to consumer benefits in the form of more competitive service and more extensive networks. We found that international network airlines serve the majority of U.S.-EU city-pair markets with no worse than double-connection (i.e., two-stop) on-line service. Based on scheduled flights for May 2004, 83 percent of the possible U.S.–EU markets (5,165 of 6,210) were scheduled to receive on-line service with nonstop, single-connection or double-connection service. More than half of those markets were served by nonstop or single-connection flights. Table 3 summarizes the connectivity of major U.S.-EU markets. (Additional markets may also have received on-line service, but the service would have required more than two connections and would thus be excluded from our analysis.) In addition, consumers in most U.S.-EU markets have a choice of service from more than one competing airline or alliance. Figure 4 illustrates that consumers flying between Kansas City and Berlin have four different competitive alternatives. In the 174 nonstop markets, 71 percent have at least three airlines providing either nonstop service or competitive single-stop service. In markets where the best level of service is one-stop or on-line single connections, over 85 percent have at least three competitors, and in markets where the best level of service involves two connections, 60 percent have three or more competitors (see fig. 5). Between 1996 and 1999, according to DOT, within Open Skies countries, fares dropped an average of 20 percent, compared to a 10 percent fare decrease in non-Open Skies markets (see table 4). 
These differences are consistent across the various categories of markets, such as gateway-to-gateway markets and behind-beyond markets. Much of the decrease has been attributed to the incentive for alliances to offer lower-priced on-line service rather than the higher-priced interline connecting service. DOT officials noted, however, that little if any analysis has been completed of changes in airfares and service since September 2001. Industry and government officials with whom we spoke generally said that the Court of Justice decision, particularly as it relates to likely changes in the existing nationality clause in the Open Skies agreements and the bilateral agreement with the United Kingdom, will affect commercial aviation in at least four key ways, depending on the eventual outcome of negotiations between the United States and the EU. These officials agree that complying with the Court of Justice decision will require that nationality-based restrictions be eliminated. The four key areas raised by potential changes to the nationality clause are closely intertwined and are as follows: A new U.S.-EU agreement that would address the nationality clause issue would likely supersede the five existing restricted bilateral agreements and also would become effective in the five EU nations where no agreement currently exists. It would thus provide U.S. airlines with expanded legal traffic rights (i.e., rights to operate between two destinations) into what are now restricted markets. However, capacity limitations at certain key airports might restrict U.S. airlines' ability to exercise this new right. Eliminating the nationality clause restrictions means that the United States would recognize all EU airlines as "European Community" airlines.
These airlines, which currently can provide transatlantic passenger service only between the United States and airports in their own country, would gain the ability to provide service between the United States and any EU country. Because nationality-based restrictions would no longer apply, one major barrier to European transnational mergers would no longer exist. EU-based airlines could more freely consolidate, create subsidiary operations, or relocate their businesses to any location within the EU without jeopardizing their rights to fly to the United States. The increased operating flexibility that EU airlines would receive raises the question of which EU member state's regulatory oversight and labor laws should apply in particular situations, such as when an EU airline moves its operations to (or establishes a subsidiary carrier in) another EU country, perhaps to take advantage of lower wages or other cost savings. U.S. and EU officials agree that both sides must eventually reach some agreement on resolving the nationality clause issue in order to comply with EU law. However, there is no set time frame for when the matter must be settled. In November 2002, the EU called for member nations to renounce their Open Skies agreements with the United States, but did not pursue the request after receiving a negotiating mandate. It is uncertain how a prolonged inability to remedy the nationality clause issue might affect U.S. and EU commercial aviation, in part because such issues have never arisen before. U.S. airlines would gain expanded legal traffic rights under a new U.S.-EU agreement. According to U.S. and EU industry and government officials, a new agreement would supersede and be binding on all EU member states, thereby removing most remaining traffic right restrictions.
Because the Open Skies agreements between the United States and 15 member states have already effectively eliminated traffic restrictions in those markets, there would be no significant gains in traffic rights for U.S. airlines, although EU airlines would gain expanded traffic rights for operations to and from those countries. However, U.S. airlines would gain rights to serve previously restricted markets in the other 10 EU countries. Extending the Open Skies framework and rights to all EU member states would be necessary to prevent a critical imbalance of economic rights from developing that would place U.S. airlines at a potential competitive disadvantage. Unless Open Skies rights were extended to all EU nations, any country with which the United States now has a more restrictive bilateral agreement would be able to benefit from rights negotiated by other countries without itself having to negotiate for those rights. In other words, those countries (and their passengers and airlines) would benefit from the actions of others without "paying" for them, an outcome known as "free riding." A free-rider scenario would occur between the United States and the EU if an airline from a non-Open Skies country were able to operate from an Open Skies country to circumvent restrictions in the home country's bilateral agreement. For example, a UK airline could originate a flight in France with a commuter aircraft, change to a wide-body aircraft at London's Heathrow airport, and then continue on to the United States to a point not designated under Bermuda 2 with hundreds of additional passengers. This would put U.S. airlines at a competitive disadvantage because the current bilateral agreement with the United Kingdom prevents any U.S. airline from flying similar routes. One key operational right that U.S.
airlines would gain is full legal nonstop access to all markets in the 10 countries with which the United States still has more restrictive bilateral agreements or no agreements. The most noteworthy of these 10 is the United Kingdom, because of the amount and value of passenger traffic that moves between the two countries. Bermuda 2's limits on competition disproportionately affect U.S. airlines because the United Kingdom successfully negotiated for additional traffic rights in the early 1990s. Partly as a result, between 1992 and 1996, the UK airlines' share of the U.S.-UK market rose from 49 percent to 59 percent. Today, UK airlines still provide more service in the U.S.-UK market, especially into Heathrow. As of May 2004, British Airways and Virgin Atlantic scheduled a total of 43 daily nonstops from Heathrow to the United States, compared to 28 daily nonstops offered by American and United. While a new U.S.-EU agreement could eliminate the legal restrictions on the number of U.S. airlines permitted to operate into Heathrow, capacity limitations would affect the extent to which U.S. airlines would be able to operate there. Heathrow is essentially operating at full capacity, especially at times that are commercially viable for transatlantic operations. According to airline and industry officials, the commercially preferred times for transatlantic arrivals into Heathrow are between 6 a.m. and 10 a.m., and the commercially preferred times for transatlantic departures from Heathrow are between 10 a.m. and 2 p.m. These times reflect airlines' need to coordinate their transatlantic flights with feeder flights from their spoke airports. However, as figure 6 shows, the demand for arrival (and departure) slots during these times generally exceeds the available supply. If the U.S. Open Skies framework were extended to all EU countries following the removal of the nationality clause restrictions, U.S.
airlines would gain full "fifth freedom rights," including rights to and from EU member states with restrictive bilateral agreements. These fifth freedom rights would allow U.S. airlines to operate flights from the United States to any EU country and then beyond to another EU country. However, traffic rights to countries beyond the EU would be limited to those the United States already has under its Open Skies agreements; the EU has no current mandate to grant new "beyond rights." Open Skies agreements, by definition, grant airlines the unrestricted right to operate fifth freedom flights, which are otherwise limited under the more restrictive bilateral agreements, such as the agreement with the United Kingdom. Airline officials and industry experts maintain, however, that over time fifth freedom rights available from countries that have Open Skies agreements have proven to be of limited commercial value to passenger airlines. (Cargo airlines, on the other hand, greatly value fifth freedom rights. See appendix III for additional information on cargo carriers.) United, for example, attempted to exercise fifth freedom rights for operations with Open Skies countries in Europe, but abandoned those operations after determining that they were not profitable. United officials explained that, with the development of alliances, it is more cost efficient to use alliance partners to provide connecting service into "beyond" markets. Of the U.S. passenger airlines that have fifth freedom rights with EU countries, only two exercise them. Northwest operates fifth freedom flights from Minneapolis that stop in Amsterdam and continue to Bombay, India. Delta flies from Atlanta to Bombay using fifth freedom rights over Paris. Under the terms of the more restrictive bilateral agreements with Spain and Greece, U.S. airlines are prohibited from serving those markets by way of code-share flights.
This effectively prohibits, for example, passengers using a United ticket from traveling from Albuquerque to Madrid by connecting at Frankfurt, Germany, to a United code-share flight operated by its Star alliance partner Lufthansa. If the Open Skies framework were extended throughout the EU, such prohibitions would be eliminated. Airlines would be able to offer new routings to passengers, and passengers would be free to choose among new options for travel into those countries. Like U.S. airlines, EU airlines would have greater access to international markets. Eliminating the nationality clause restrictions included in existing agreements effectively means that, in any new agreement, the United States could recognize the concept of a "European Community" airline. This could mean, for example, that rights originally restricted to designated airlines of the signatory countries would be available to all European Community airlines. In other words, Lufthansa, British Airways, and LOT Polish Airlines would be European Community airlines in addition to being German, British, and Polish airlines. If the United States recognized an airline as a European Community airline, that airline would have the right to operate transatlantic flights directly to and from more EU destinations. Under current Open Skies agreements, the right to establish transatlantic routes between destinations in the signing countries is limited to airlines that are licensed in and designated by those two countries and are under the ownership and control of those countries' citizens. For example, Air France—an airline licensed and designated by France—is not allowed under existing Open Skies agreements to provide nonstop transatlantic service between cities in the United States and Italy; it can fly only between U.S. and French cities. Under an Open Skies agreement that included an EU nationality clause, Air France would have the right to fly between any EU city and any city in the United States.
In theory, Air France could also decide to establish a mini-hub in a city outside of France, where it could potentially begin providing nonstop service into additional U.S. cities. This same flexibility would extend to all EU airlines and to all U.S.-EU markets. In this way, EU airlines, regardless of the EU country in which they were licensed, would have the ability to provide flights into the United States from throughout the EU (see fig. 7). Eliminating nationality-based restrictions would remove a major barrier that has prevented EU airlines from restructuring their operations, whether by merging with another airline or by creating significant commercial operations outside their home countries, without sacrificing traffic rights across the North Atlantic. Because international traffic rights are granted by two signatory nations and are tied to national ownership and control, an airline operating an international service cannot merge with a carrier from another EU member state without risking the loss of its U.S. traffic rights. Similarly, because the traffic rights are tied by the designation and nationality clauses to airlines from particular countries, airlines also cannot move operations into another country and exercise those rights. Eliminating nationality-based restrictions would allow citizens of any EU nation to exercise what is called the "right of establishment." Under the Treaty of Rome, any EU citizen has the right to establish a business in another EU state. Removing nationality-based restrictions would allow EU airlines to restructure operations, such as merging with another EU airline or relocating to another EU member state, to gain economic efficiencies without losing traffic rights into the United States. For example, EU airlines could relocate operations to or establish subsidiaries in EU member states that have lower average wages and (from a business perspective) more lenient labor laws.
Controlling labor costs (i.e., "social costs," which include wages, benefits, and pensions and which are shaped by rules such as the length of the work week) is important to an airline's ability to compete with lower-cost or more efficient airlines, because those costs can represent a major portion of an airline's operating costs. Under an agreement in which the United States recognized European Community airlines, EU airlines could take the following actions. Acquisitions or mergers—EU airlines could engage in cross-border airline mergers and acquisitions without jeopardizing traffic rights to the United States. Some observers of EU aviation have long believed that the large number of relatively small state-supported airlines created a fragmented, inefficient system burdened with excess capacity. The suggested remedy was consolidation of the European industry. Moving operations to another country—EU airlines could move some or all of their operations to other EU countries without risking the loss of traffic rights. For example, an existing airline, such as Austrian Airlines, hypothetically would be able to move its operations into and establish itself in Poland and still be able to provide service into the United States from anywhere in the EU. Creating subsidiary operations—EU airlines could set up subsidiary operations outside of their home countries that could provide transatlantic service. Because these subsidiaries could be established anywhere in the EU, they could potentially take advantage of lower costs that might be available in some EU countries. Establishing new entrant airlines—EU citizens in one country could establish an airline in another country and provide service into the United States, provided they met licensing and certification requirements. Citizens from Spain, for example, could establish a new airline in Poland and provide service from anywhere in the EU to the United States.
While eliminating the nationality clause restrictions may mean that traffic rights are no longer limited, the concept of an airline's being licensed by a particular EU country remains important for regulatory oversight. For safety and security oversight, every government has an interest in knowing which other government is responsible for ensuring the safe and secure operation of airlines that may fly to or from any given location. While operating a safe, secure carrier is of course important for maintaining consumer confidence in the carrier, ensuring the safe and secure operation of commercial aviation is a fundamental responsibility that is shared by governments and airlines. Under the existing EU framework, this oversight responsibility resides with each country, subject to a framework of European-level cooperation and legislation. Thus, one issue that will need to be resolved, if airlines are permitted to shift their operations from one EU country to another, is which country exercises the oversight responsibility. A number of criteria have been suggested for determining which country's legal and regulatory system should apply. Traditionally, the country that issued the airline's operating license has been responsible. However, ensuring safety and security would become problematic if an airline relocated its major hub activities to a location possibly hundreds of miles outside the licensing state's borders. Another possible criterion for determining which state's systems apply is the location of the carrier's "principal place of business." But in an industry in which assets and employees are mobile, what constitutes an airline's principal place of business is uncertain.
While not providing a definition per se, the ICAO Air Transport Regulation Panel and the Organization for Economic Co-operation and Development have suggested a set of guidelines that governments could use in determining an airline's "principal place of business." Under these guidelines, an airline's "principal place of business" is the country in which the air carrier maintains its primary corporate headquarters; regularly provides air transportation service; maintains substantial capital investment in physical facilities; pays income tax and registers its aircraft; and employs a significant number of nationals in managerial, technical, and operational positions. However, questions arise regarding how to measure the extent to which an airline meets each of these criteria (e.g., defining and measuring "substantial capital investment"). Officials with major airline unions generally support these criteria. The concern of labor groups is that, unless a relatively stringent standard is applied, airlines will move operations to countries specifically to take advantage of lower costs of doing business (particularly with regard to wage rates and labor laws). Doing so is sometimes referred to as adopting a "flag of convenience," a pejorative term borrowed from the maritime industry. The question of which member state's labor law applies in a given situation is the subject of a current legal challenge brought by employees of the EU low-fare carrier Ryanair at Charleroi Airport in Belgium. Ryanair is headquartered in Ireland and has bases in Stansted, United Kingdom; Frankfurt/Hahn, Germany; Stockholm, Sweden; and Charleroi, Belgium. It employs nonunionized pilots. All of Ryanair's pilots, regardless of where they are based, are employed under Irish labor law and pay Irish taxes. In May 2002, Ryanair declined to retain three employees after they had completed its 1-year probationary period.
The employees at Charleroi charge that they were wrongfully terminated under Belgian law. The question for the EU courts is whether Ireland's or Belgium's labor law applies in this instance. Eliminating the nationality clause restrictions from a new U.S.-EU agreement would likely provide new benefits to consumers, airlines, and labor groups. By eliminating the nationality clause restrictions, a new agreement would in effect extend the Open Skies framework to the 10 EU member countries without Open Skies agreements. This could potentially provide the same benefits that consumers, airlines, and labor groups realized after the signing of the current Open Skies agreements. However, because of practical constraints, these benefits will take some time to develop, and they will be contingent on resolving a number of related issues (e.g., de facto access to restricted airports). Experts and industry officials with whom we spoke generally agreed that eliminating the nationality clause restrictions would mainly increase the potential for the following: More U.S. airlines might attempt to provide nonstop service from their hub airports into London's Heathrow Airport. EU airlines might use their new ability to establish transatlantic routes between U.S. cities and EU destinations outside of their homeland. More transnational mergers might occur between EU airlines. An EU airline might attempt to establish a "flag of convenience" operation—that is, the airline might move some or all of its operations to another EU country with lower wage or other costs. Each of these actions would allow airlines to respond more freely to market forces and consumer demand. As in other instances where government removed restrictions on airlines, such as domestic deregulation and Open Skies agreements, consumers could potentially benefit from increased competition and therefore better service and lower fares. Officials at some U.S.
airlines said a major potential benefit of a new agreement would be the opportunity for access to markets restricted by the existing bilateral agreements. As with the current Open Skies agreements, U.S. consumers and airlines would benefit from gaining access to the 10 restricted markets, such as the United Kingdom, Spain, Ireland, and Greece. The likely source of the greatest benefit would be London's Heathrow Airport, since it is the largest destination for U.S. travelers (see fig. 8). If a new agreement extended the U.S. Open Skies framework to all EU member states, it would remove the restrictions of the Bermuda 2 agreement. U.S. airlines with no current access to Heathrow would gain the right to operate there. For example, Continental Airlines, which currently has no Heathrow access with its own aircraft, would be able to begin service into Heathrow from any U.S. airport, including its hubs at Newark, Houston, and Cleveland. (Continental now operates flights from those hubs into London's Gatwick Airport and code-shares with Virgin Atlantic into Heathrow.) Because Heathrow is the major U.S.-EU gateway, many U.S. airlines view the opportunity to gain access to this market as a significant benefit, and additional U.S. service there would also benefit consumers. Consumers would gain greater choice of airlines, service from more U.S. destinations, and possible competitive pressure on prices. For example, if all U.S. airlines now serving London by flying into Gatwick switched their operations to Heathrow, London-bound consumers would benefit because access to central London is faster and easier from Heathrow. In addition, consumers in Denver and Detroit, who now have flights into Heathrow only on British Airways, would likely benefit from the additional competitive presence on those particular routes.
Airlines that do not now have access to Heathrow would benefit from being able to carry passengers into a valued destination. Even though these airlines operate to London's Gatwick Airport, they have reported losing high-yield business passengers and corporate accounts to competitors because of their inability to provide service to Heathrow. For consumers and airlines to realize such benefits, however, airlines would first need to gain de facto access to airport slots, gates, and terminal space. Because Heathrow is already operating essentially at full capacity, a new entrant airline would have to gain access through the existing slot allocation process, which provides limited opportunities for new entrants, defined as airlines with no more than four slots per day (the equivalent of two daily takeoffs and landings). Each year, the number of slots that become available through the normal slot allocation process is equivalent to about five daily takeoffs and landings. Existing EU slot allocation regulation requires the slot coordinator to set aside 50 percent of any slots that become available for distribution to new entrants. Before those slots are made available, however, incumbent airlines have limited rights to acquire any open slot and substitute another they already hold. Incumbent airlines can use this process to "trade up" slots they hold at less desirable times for newly available slots at more commercially advantageous times. This effectively relegates the slots available to new entrants to commercially less desirable times. However, once a new entrant does obtain slots, it can gain slots at more commercially viable times through a grey market that airlines use to trade slots. These trades are allowed at any point in the year and often involve payment. The EU does not officially condone this grey market, although a 1999 decision by a UK court found the system to be acceptable within European law.
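The new-entrant thresholds described above can be expressed in a few lines. The sketch below is a simplified illustration of the rules cited in the text; the function names and the simple pool split are expository assumptions, not the slot coordinator's actual procedure.

```python
# Simplified illustration of the EU new-entrant slot rule described in
# the text. Function names and the pool split are expository
# assumptions, not the coordinator's actual allocation procedure.

NEW_ENTRANT_MAX_DAILY_SLOTS = 4  # no more than four slots per day
                                 # (two takeoff/landing pairs)

def is_new_entrant(daily_slots_held: int) -> bool:
    """An airline qualifies as a new entrant only while it holds at
    most four daily slots; at five or more it competes as an
    incumbent for any available slot."""
    return daily_slots_held <= NEW_ENTRANT_MAX_DAILY_SLOTS

def reserve_for_new_entrants(available_slots: int) -> int:
    """The regulation sets aside 50 percent of newly available slots
    for new entrants (after incumbents' trade-up rights have been
    exercised)."""
    return available_slots // 2

print(is_new_entrant(4), is_new_entrant(5))  # True False
print(reserve_for_new_entrants(10))          # 5
```

The five-slot cutoff matters in practice: a new incumbent at that size must then compete for slots against carriers holding hundreds.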
The EU has recently initiated proceedings against this grey market. While some officials have pointed out that the slot allocation process does give priority to allocating slots to new entrant airlines, there is disagreement regarding the effectiveness of this process in assisting new entrants. Once a new entrant airline acquires five or more daily slots, it is no longer considered a new entrant and must then compete for any available slot as an incumbent airline. To gain additional slots for transatlantic operations, a new incumbent airline with five slots would have to compete against British Airways, with over 500 daily slots, and Virgin Atlantic, with just over 30 daily slots (in the summer 2004 scheduling season). Another option for U.S. airlines to gain slots is based on the reallocation of slot resources between alliance partner airlines. Some airline officials said that, once the legal restrictions are removed, international alliances with substantial numbers of Heathrow slots could reallocate these slots among alliance members, thereby providing U.S. airlines with some access to Heathrow slots. Some European airlines have stated that this option would be discussed within the alliance; others maintained that alliance partners would be hesitant to trade or sell slots to other alliance members, because the Heathrow operations likely add considerable revenue to their own networks. Alliance partners may also be hesitant to trade or sell slots because, although the alliances are established through legal contracts, past experience has shown that airlines can and do move out of alliances. Thus, a slot sold or traded might be permanently lost or used against the airline in the future. Finally, even if an alliance partner has a slot that theoretically could be put to more productive commercial use by trading or selling it to another alliance partner, other capacity constraints at Heathrow could prevent its use for transatlantic operations.
For example, a slot (with its associated Jetway and terminal facilities) used for 40-passenger turboprop operations could not readily be transformed for use by a 400-passenger Boeing 747. Given the limits imposed by these slot allocation options, it may be some time before the potential benefits for U.S. airlines and passengers emerge. Some European airline officials have pointed out that gaining access to slots, gates, and terminal space at Heathrow can be done over time. They cite Virgin Atlantic as an example of an airline that originally obtained slots at less preferable times and, over time, acquired additional slots and traded them with other airlines. Through trading, Virgin Atlantic gained a number of slots at prime times. In Virgin Atlantic's case, it applied for and obtained six daily slots once the UK government designated it as one of the two British airlines allowed to provide transatlantic service from Heathrow in 1991. By 1996, it had obtained approximately 15 daily slots, a number that rose to 28 daily slots by 2001. Airline officials reported that between 1996 and 2001, the airline gained an additional five pairs of daily slots to the United States. As of the 2004 summer schedule, Virgin Atlantic will have just over 30 daily slots. That total includes four pairs of slots that Virgin Atlantic was able to acquire this year for about 20 million British pounds (approximately $36 million). Airlines may also gain access if capacity at Heathrow expands. Heathrow's capacity may increase over time through capital improvements and changes in operations. The first phase of a new terminal at Heathrow, Terminal 5, is expected to be operational in 2008 and could ease terminal and aircraft parking capacity constraints related to passenger holding areas and aircraft gates. With the completion of the second phase in 2011/2012, the new terminal is expected to handle 30 million annual passengers and will include 45 aircraft parking stands.
British Airways is expected to be the principal tenant at the new terminal, and its relocation there will allow other airlines (possibly new entrants) to gain access to facilities in other terminals. However, the new terminal will have no direct impact on the number of available runway slots. BAA, plc, is examining the potential to implement a "mixed mode" runway operation, which would allow both runways to be used for landings and takeoffs rather than assigning one runway for landings only and one for takeoffs only. The examination will need to take account of potential noise and air quality implications. There are no official estimates of the impact on runway capacity of a change to "mixed mode," but BAA, plc, has suggested that this change could increase slot capacity by 10 slots per hour. However, it is unclear whether the UK government will approve this change, given local communities' opposition to the expected increase in noise. The UK government's 2003 report on the future of air transport supported further development of Heathrow, including a new runway and additional terminal capacity, but only after a new runway at London's Stansted Airport was finished and only if stringent environmental limits could be met. The report indicated that any additional future enhancements to Heathrow's capacity would be completed within the 2015-2020 period. A report commissioned by the EU on the effects of different slot allocation approaches concluded that current EU slot regulation gives incumbent airlines an advantage and makes it difficult for new airlines to obtain slots to introduce new or more frequent service. It also concluded that if the EU adopted various market mechanisms (such as secondary trading, increased slot prices, or slot auctions), higher passenger volumes would result. The report did not recommend that the EU commission require divestiture of slots as a condition of airline mergers.
However, in its approval of the Air France–KLM merger, the EU commission sought the surrender of 94 daily slots from Air France and KLM to ensure that new entrant airlines could provide new competitive service in certain markets. Generally, incumbent airlines object to such requirements, arguing that they own the slots and would have to be compensated for them. BAA, plc, officials also voiced the concern that forcing incumbents to surrender slots each time a new entrant wanted to gain access to the airport is not permitted under the EU slot regulation and would create an undesirable precedent, in part because of the instability it would create for incumbent airlines' operations. While a new U.S.-EU agreement would provide U.S. airlines with legal access to markets that are now restricted, airline officials stated that, without actual physical access, these new legal rights would be meaningless. This is especially true for access to Heathrow. The current slot allocation process at Heathrow gives incumbent airlines an advantage in maintaining and improving their position, making it difficult for new entrants to gain effective commercial access. Therefore, it may take an indeterminate amount of time before consumers and airlines derive significant benefits from a more open Heathrow. Absent the nationality clause restrictions, EU and U.S. airlines could begin offering new transatlantic service between more cities. New competition has the potential to generate various benefits for consumers: transatlantic service to and from more cities, increased choices—and possibly pressure on prices—on existing routes, and pressure on airlines to provide higher quality service. Some officials said that increased consumer demand for nonstop point-to-point service could spur airlines to develop new city-pair markets. For example, a carrier could start nonstop service from Berlin to Kansas City, neither of which, as of May 2004, had direct transatlantic service.
The Boeing Company expects that consumer preference for nonstop flights, congestion at major airports, and new technology will push airlines to develop new nonstop city pairs. According to Boeing officials, between 1980 and August 2001, as the transatlantic market developed, the total number of city pairs served with nonstop flights more than doubled, in part because airlines were able to connect those markets using aircraft with smaller capacities (see fig. 9). Boeing projects that, between the United States and the EU, an additional 114 city pairs could support nonstop service with 250-seat aircraft. It cites San Francisco–Milan, Houston–Madrid, and Seattle–Frankfurt as possible examples. The development of new nonstop service or new competition in existing markets would offer consumer benefits. Clearly, consumers would benefit from having nonstop service in new city-pair markets rather than connecting service. If airlines chose to compete on other airlines’ existing routes, the presence of additional airlines on a route could not only provide consumers with more choices of flight times during the day but could also act as a competitive force on service quality and price. In the United States, consumers at dominated airports experience higher average airfares than do those at more competitive airports. Although removal of the nationality clause restrictions would theoretically open the door for new competition in various markets, airlines will likely face significant operating barriers in those markets, particularly at dominated hub airports. In the past, we reported that new competition at key domestic airports was inhibited by a lack of access to slots and airport facilities. In 2004, an EU report noted that competition at certain key airports continued to be inhibited by a lack of available slots at attractive times. 
The report listed Heathrow, Frankfurt, Madrid, and Paris’s Charles de Gaulle airports as having more demand for slots than available capacity, either throughout the day or at peak times of the day. As in the United States, major European gateway airports also tend to be dominated by a single carrier or alliance. As table 6 shows, each of the EU’s major airports has one airline that controls a much larger percentage of scheduled seat capacity than its next largest competitor. Transatlantic flights to and from these airports are generally dominated by one alliance. For example, Delta and Air France, both members of the SkyTeam Alliance, fly 100 percent of the nonstop flights between Atlanta and Paris’s Charles de Gaulle airport. United and Lufthansa, both part of the Star Alliance, operate 100 percent of the nonstop flights between Frankfurt and Washington Dulles and 73 percent of the nonstop flights between Frankfurt and Chicago. Sales and marketing practices—which include frequent flier programs and corporate incentive programs—may also impede competition. They do so by reinforcing market dominance at hubs and impeding successful entry by new and existing carriers into new markets. Practices such as frequent flier programs encourage travelers to choose one airline over another on the basis of factors other than obtaining the best fare. Such factors have a tempering effect on the extent to which EU airlines may seek to launch competitive transatlantic service at these EU gateways. EU airline officials with whom we spoke said they have no plans to establish a significant presence in the hub airports of other airlines. Officials said it would be difficult to successfully compete in a hub against an incumbent airline because of the inherent advantages airlines maintain in their own hub airports. 
More recent experience has shown, however, that it is precisely the existence of high-premium routes that has attracted low-cost carriers to introduce new competition in or near those high-fare markets, although they often use secondary airports. Experience in both the United States and Europe has shown that low-cost carriers have increased their presence in major hub airports or secondary airports in major hub markets (e.g., Southwest and ATA at Chicago’s Midway Airport and easyJet at Gatwick). Consolidation among EU airlines would be more likely if nationality clause restrictions were eliminated and could lead to a more efficient EU industry structure. Generally, removal of regulatory barriers to industry structure, when accompanied by appropriate competition-preserving antitrust policies, is expected to improve operating efficiencies and promote innovation. The U.S. Department of Justice’s Horizontal Merger Guidelines recognize that competition usually encourages firms to become more efficient. Mergers can also generate significant efficiencies by permitting better utilization of existing assets, enabling the combined firm to achieve lower costs than either firm could have achieved without the merger. In turn, that may result in lower prices, improved quality, enhanced service, or new products. At the same time, however, because the motivation behind mergers is the prospect of financial gain, mergers are restricted under both U.S. and EU antitrust laws in their ability to create or enhance market power or to facilitate its exercise. Market power in this instance is the ability of a firm to profitably maintain prices above competitive levels for a significant period of time. Thus, while consolidation in an industry through mergers can produce efficiencies and potential consumer benefits, it remains important for antitrust or competition authorities to guard against market abuses. Analyses we have previously conducted of actual or proposed mergers in the U.S. 
domestic market suggest that mergers often have both positive and negative effects. Mergers have the potential for creating positive benefits to consumers in such ways as the following: In markets where each of the merging airlines had a relatively limited presence, combining their limited shares can create an additional effective competitor. Consumers in some markets would benefit from having access to new “on-line” service. For example, when the European Commission recently approved the merger of Air France and KLM, it reported that KLM consumers would gain access to more than 90 new destinations and Air France customers would be offered 40 new routes. Members of the frequent flier programs of the merged airlines would be able to use their miles to reach an expanded number of destinations. Some industry officials and experts said a U.S.–EU agreement removing nationality restrictions would facilitate the opportunity for more cross-border mergers in the EU aviation industry, because EU airlines would not lose important traffic routes into the United States as a result of a merger. However, other officials said that restrictive bilateral agreements still in force with three other major aviation nations would limit the extent to which airlines would seek to consolidate. The three nations most frequently mentioned were Japan (because of the size and value of the existing market), China (because of the size and value of the potential market), and Russia (because of the implications of overflight rights). For example, Russia and Germany currently have a bilateral agreement that restricts routes and overflight rights to Russian and German airlines. If Lufthansa Airlines were to merge with another EU carrier, it is not clear that Russia would extend these rights to the merged airline. If an EU carrier did not have overflight rights from Russia, its flight times and costs for operations to other Asian countries would increase significantly. 
These officials said airlines that had received traffic rights and other operating considerations through such agreements might be unwilling to risk losing them through a merger. A merger can have negative effects on consumers in those markets where the merger reduces the number of effective competitors. This negative effect is increased if the two airlines that merge have significant overlapping markets or if the merger creates an airline that dominates a particular market. Industry experts generally agree that, with dominance in a market, airlines can wield market power and make entry into those markets by would-be competitors more difficult. Therefore, an airline that can wield this market power has the ability to raise fares when unconstrained by competition. Consolidation also raises the possibility that competition in key markets will be reduced, thereby potentially affecting fares and service. The recent merger between Air France and KLM illustrates this point. The European Commission noted that the merger would eliminate or significantly reduce competition on 14 routes, including 3 transatlantic routes (Amsterdam–New York, Amsterdam–Atlanta, and Paris–Detroit). Air France and KLM have agreed to surrender slots at Amsterdam and Paris, but it is unclear if other airlines will provide effective competition in those markets. In the absence of specific merger proposals, it is not possible to project the extent to which such positive and negative effects would be present. Some aviation experts maintain that the likely outcome of consolidation is the solidification of three “mega” alliances. These “mega” alliances, solidified around the Star, SkyTeam, and oneworld alliances, would provide the vast majority of international aviation service, with the major U.S. and EU airlines providing most transatlantic service. 
Some experts question the long-term viability of the existing structure of smaller EU national airlines, such as Austrian Airlines or TAP Airlines. These experts project that such smaller airlines may become regional or niche airlines serving limited markets. While labor groups and some other stakeholders are concerned that EU airlines may attempt to achieve lower costs by relocating operations or establishing subsidiaries in EU member states that have lower social costs and labor standards, a number of major obstacles could keep airlines from establishing such “flag of convenience” operations. Increased competitive pressure resulting from any new U.S.-EU agreement may lead airlines to seek reductions in operating costs. Because labor represents the single largest portion of these costs, labor groups have expressed concern that EU airlines might consider relocating to EU countries with—from an operating standpoint—more favorable labor laws. Differences in labor costs and labor law among EU member states certainly exist. According to a 2003 EU study, the newest 10 member states had an average gross domestic product that was 40 percent of the average for the other 15 member states. The EU has adopted a common set of rules with regard to certain labor policies, such as gender equality, nondiscrimination, and health and safety. Even so, specific regulations and enforcement authority remain with the member states. Additionally, EU member states still maintain their own national labor laws, some of which may provide airlines with more favorable labor environments. Differences remain, for example, in how EU member states regulate collective bargaining. For instance, an EU report stated that the new EU member nations generally had very weak collective bargaining regulations compared with those of the other EU member nations. 
According to labor representatives, the ability of airlines to relocate to or establish subsidiary operations in other member states would enable airline management to replace the existing workforce with lower-wage workers. Additionally, EU officials stated that there is no EU law regulating representation in collective bargaining by a single employee organization. As a result, workers in the same “craft” (e.g., pilots or flight attendants) employed by a single company but located in different EU member states cannot be represented by a single employee organization. There are, however, common rules concerning the right to information and consultation of employees in Community-scale undertakings and Community-scale groups of undertakings. According to labor representatives, employee representation rights are critical to preventing downward pressure on wages. These rights include single representation for all members of an employee group, including those of subsidiaries and holding companies; the ability to negotiate an agreement; and effective enforcement of a negotiated agreement. Unless labor gains some of those rights, these representatives said, airlines will be able to establish subsidiaries and then substitute lower-wage labor for the existing workforce. A study prepared for the European Cockpit Association, the association of each member state’s pilots’ unions, argues that, with the increase in inter-company and cross-national alliances, trade unions and employee associations based in single countries and without inter-union networks could be left without an effective voice in the future restructuring of the industry, such as determining where work will be located and under what conditions. While many aspects of this issue remain unsettled, we did not find substantial evidence to indicate that airlines would relocate operations or establish “flag of convenience” subsidiaries in lower-wage EU countries, at least in the short term. 
None of the EU airline officials we interviewed indicated a desire to relocate to a low-wage country, citing company branding and markets as being more important in driving business decisions than low-wage labor. Airline officials said that commercial aviation differs from other industries because the product (air travel) must be produced close to the customer base (population and economic centers). Consequently, airlines need to maintain their major operations at key economic centers, none of which are located within the lower-cost countries. Another indication that factors other than low-wage labor drive airlines’ business decisions is that EU regional airlines, not bound by international bilateral agreements, have not pursued movement to countries with lower wages and social costs over the last 11 years, despite the fact that such movements were possible after the creation of a single European aviation market. Despite the fact that pilots and flight attendants are inherently mobile and could theoretically travel from lower-cost areas to do their work, airlines said it was preferable to locate flight personnel close to the base of operations. Therefore, airlines will have to provide a competitive, market-based compensation package to retain qualified employees. For example, an airline that chose to have a major base of operations in Paris would need to hire employees paid at “market” salaries. To run its daily operation from Paris to Hong Kong, Cathay Pacific Airways, for example, employs 70 French citizens at its Paris offices and Charles de Gaulle Airport. To compete in the labor market for qualified employees, Cathay officials said that it must offer a compensation package that is competitive with that offered by other airlines. If an airline decided to move its operations to another EU country or to import low-wage labor from there, it appears unlikely that it would have access to a sufficient supply of appropriately trained personnel. 
For example, according to EU and airline officials, the number of pilots available in these countries who are trained on aircraft used by the airlines is relatively small. Finally, airline and government officials noted that both U.S. and EU pilots have negotiated scope clauses that limit an airline’s ability to substitute workers from lower-wage subsidiaries for its current workforce (i.e., engage in “labor substitution”). Moreover, available evidence suggests that the creation of the EU’s single market has not led to labor substitution. A report by the UK Civil Aviation Authority stated that, since EU aviation deregulation and the creation of a single market, the United Kingdom had not seen any airlines reflag to a more lax regulatory regime or any workers displaced by cheaper workers from other EU countries, nor had any UK airlines lost market share to airlines from lower-wage EU countries, despite the fact that the United Kingdom is one of the higher-wage EU countries. While there currently appears to be little evidence of serious consideration of relocation or establishment of subsidiaries for access to low-wage labor, the removal of the nationality clause restrictions and the accession of 10 lower-wage member states into the EU does change the market dynamic. EU labor groups said that the benefits of relocating business operations among the 15 countries that made up the EU prior to May 2004 were limited, since there was not a large wage and social cost disparity between them. Now, the disparities are greater. In addition, one U.S. labor representative said that, while there initially may be little economic incentive for established transatlantic EU airlines to move their operations to countries with lower costs and labor standards, new entrants could use the change in regulatory structure to gain a competitive advantage. 
Furthermore, over time, there may be economic incentives for established EU airlines to move their principal places of business or to establish subsidiaries in countries with lower labor standards. Some airline officials cite low-cost carrier Ryanair as an example of a carrier that is taking advantage of the fact that the EU’s single market is still governed by individual member states’ labor laws, and that pay and working conditions are not subject to collective labor agreements at the European level. Because of this concern, U.S. labor representatives have proposed that certain protections be included in any draft U.S.-EU agreement that would reduce the incentive for airlines to take advantage of “flags of convenience.” One proposal would be to include a definition of “principal place of business” in the draft to help clarify which set of laws would be applied to a given carrier. Including this definition would make it harder for major EU airlines to establish subsidiaries in countries with lower labor standards and have those laws applied to them. Setting this standard would also clarify which country would be responsible for overseeing and enforcing safety and security requirements for the airlines. The success of low-cost carriers like Ryanair and Southwest raises the question of whether existing low-cost carriers could successfully compete in the transatlantic market. Typically, low-cost carriers have succeeded in the domestic market by providing point-to-point service, often at less congested airports. These airlines achieve their comparative cost advantages through lower operational costs (often gained by using a single aircraft type) and greater productivity. Recently, low-cost carriers have begun to compete with network airlines by offering long-haul (transcontinental) service. Under current Open Skies agreements, both U.S. and EU low-cost carriers can provide nonstop flights between the United States and the home countries of the EU low-cost airlines. 
However, it is unclear how low-cost carriers would compete on transatlantic routes. Key aspects of the low-cost carrier business model, notably the higher relative productivity of labor and aircraft and the use of a single fleet type, are more difficult to achieve on transatlantic routes. A new aviation agreement between the United States and the EU without nationality-based restrictions could create additional benefits for consumers and airlines, but would require oversight from antitrust authorities to ensure that the benefits of more open markets in the EU accrue to the traveling public. The existing bilateral aviation agreements between the United States and individual EU countries would need to be modified to resolve legal concerns within the EU, namely the nationality clause. Depending on the outcome of negotiations between the United States and the EU, the changes could be relatively minor or could result in a comprehensive opening of the U.S. and EU markets. With a new agreement that removed the nationality clause restrictions and expanded the Open Skies framework, U.S. consumers and airlines could benefit from increased access to new destinations within the EU, lower fares from more efficient route networks, and potentially more competitive routes. As discussed in this report, such benefits may be limited because the current alliance structure and bilateral agreements already provide many benefits, and because congestion and limited access to key airports may limit or delay the potential benefits. Resolving the EU legal concern over the nationality clause could lead to continued consolidation among airlines within the EU and potentially stronger ties between U.S. and EU airlines. However, mergers such as that between Air France and KLM raise questions about their impact on U.S. consumers because of their antitrust-immune alliance partnerships with U.S. airlines. 
For example, how might the Air France-KLM merger affect the international operations of their U.S. partners, Delta, Continental, and Northwest? In evaluating the potential effects of such a scenario, how would regulators separate the effects or influences of airlines’ international operations from their domestic operations? Since other major U.S. airlines participate in alliances with EU airlines, further European industry consolidation would continue to raise such questions. In the absence of any significant competitive pressure from low-cost carriers flying between the United States and the EU, there is a risk that beneficial elements of potential restructuring could be offset by a reduction in competition between alliances. Antitrust authorities in the United States and EU will need to be vigilant to safeguard the benefits that could accrue from the changing market structure. The net effect on airlines and consumers will depend on (1) when and to what extent U.S. airlines might gain access to markets that are now restricted, and (2) the business strategies that U.S. and EU airlines adopt. Obviously, the outcomes of any of these developments cannot be predicted. For example, low-cost airlines, which have often been a source of innovation, may find ways to alter their traditional business plans in ways that would make them a competitive alternative to major network carriers in transatlantic service. Past experience has shown that removing government restrictions on aviation (e.g., through domestic deregulation or Open Skies agreements) provided benefits to consumers, airlines, and the industry’s workforce. Because those significant benefits have already been realized, the benefits associated with additional liberalization of the U.S.-EU markets should be similar in nature but incremental in scope. It appears that the greatest source of likely benefits to both U.S. airlines and consumers lies in gaining de facto access to London’s Heathrow Airport. 
How the benefits from access to such restricted markets may compare to those already realized from the other 15 Open Skies agreements ultimately depends on the extent of the increase in competition and changes in airline operations and passenger traffic. We provided a draft of this report to DOT and State for their review and comment. Neither DOT nor State offered written comments, but both provided technical corrections, which we incorporated as appropriate. In oral comments, DOT’s Deputy Assistant Secretary, Office of the Assistant Secretary for Aviation and International Affairs, noted that concluding a new agreement with the EU that would further liberalize transatlantic aviation would provide significant benefits to consumers, airlines, communities, and labor interests on both sides of the Atlantic. DOT believes that establishing a regional air transport agreement between the United States and the 25 members of the EU would establish a template for a more competitive aviation regime on a worldwide basis. Finally, DOT noted that it remains committed to achieving that goal and securing the benefits that it could bring. We also provided selected portions of a draft of this report to the European Commission; airlines; BAA, plc; the Air Line Pilots Association; and other groups cited to verify the presentation of factual material. We incorporated their technical clarifications as appropriate. Unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will provide copies to relevant congressional committees; the Honorable Norman Y. Mineta, Secretary of Transportation; the Honorable Colin L. Powell, Secretary of State; and other interested parties, and will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you have any questions about this report, please contact me or Steve Martin at 202-512-2834. Other major contributors are listed in appendix VI. At the request of the Chairman and Ranking Minority Member of the Senate Committee on Commerce, Science, and Transportation, and the Chairman and Ranking Minority Member of that Committee’s Subcommittee on Aviation, we examined three issues relating to potential changes to existing Open Skies aviation agreements with European Union member states. Specifically, our objectives were to answer the following questions: (1) how prevalent are Open Skies agreements between the United States and EU nations, and what has been their effect on airlines and consumers; (2) what are the key ways that commercial aviation between the United States and the EU could be changed by the Court of Justice decision; and (3) how might the elimination of nationality clause restrictions in any new U.S.-EU agreement affect airlines and consumers? To determine how prevalent Open Skies agreements are and what their effect on airlines and consumers has been, we reviewed prior research from the Department of Transportation (DOT), the UK Civil Aviation Authority, the EU Directorate General Transport and Energy (DG TREN), and other aviation research organizations. We reviewed documents from the Department of State (State) to identify the EU member nations with Open Skies agreements and reviewed the 5 bilateral agreements the United States has with 5 of the 10 non-Open Skies EU member nations. (The United States does not have any relevant aviation agreements with Cyprus, Slovenia, Estonia, Latvia, or Lithuania.) We interviewed officials from these agencies to confirm that the information in these documents and reports was correct. To determine the effect of Open Skies, we looked at the growth of transatlantic passenger and freight traffic, and we analyzed historical data on airline passenger and freight traffic. 
We used DOT’s T-100 on-flight data to determine the total number of passengers and the total weight of freight and mail volumes that flew between the United States and the EU from 1990 to 2002. U.S. and foreign airlines are required to report all nonstop segments in which at least one point is in a U.S. state or territory. To facilitate analysis of the T-100 data, we contracted with BACK Aviation Solutions (BACK), an aviation-consulting firm. BACK obtains the DOT data and makes certain adjustments to these data, such as correcting recognized deficiencies in the airlines’ data submissions, when these submissions have not met DOT’s standard of 95-percent accuracy. To determine the reliability of DOT’s T-100 data and BACK’s product, we (1) reviewed existing documentation from DOT and BACK about the data and the systems that produced them, (2) interviewed knowledgeable agency and company officials, and (3) performed electronic tests of the data. We concluded that the data were sufficiently reliable for the purposes of this report. To determine the amount of nonstop or connecting service available between selected U.S.–EU markets, we analyzed airline flight schedule information submitted to Innovata by U.S. and EU airlines for May 2004. Innovata, whose clients include all major North American airlines, maintains comprehensive airline schedule data files based on information it collects, verifies, and aggregates from the airlines. We purchased and accessed Innovata data through Sabre’s FlightBase airline scheduling software. To determine the reliability of the Innovata data and Sabre’s product, we (1) reviewed existing documentation from Innovata and Sabre about the data and the systems that produced them, (2) interviewed knowledgeable company officials, and (3) performed electronic tests of the data. We concluded that the data were sufficiently reliable for the purposes of this report. When analyzing the scheduled service in markets, we selected the largest U.S. 
and EU airports in terms of passenger traffic, based on airport categorization by the Federal Aviation Administration (FAA) and the Airports Council International. While this selection does not include all airports within the United States or the EU, the U.S. airports selected accounted for 96.6 percent of total U.S. passenger traffic in 2002, and EU airport officials stated that the EU airports selected comprised the major European airports. We also used the May 2004 schedule data to examine the number of competitors within a given airline market. DOT has in the past defined a “competitor” as an airline or alliance that has a market share of at least 10 percent of available flights. As in prior reports on the effects of changes in competition from proposed mergers or alliances, we adopted that 10 percent threshold. To determine the number of competitors within each market, we identified the best level of service provided and the competitive alternatives. For example, if a market has nonstop service, that would be considered the best level of service. However, one-stop/on-line single connecting service may be a valid competitive alternative to nonstop service in some markets (e.g., among the subset of passengers who are not time-sensitive or who may be more sensitive to prices). Therefore, when the best level of service is nonstop, to determine the number of competitors, we counted all airlines that provided either nonstop service or one-stop/on-line single connecting service. For markets where the best level of service is one-stop/on-line single connecting service, we counted airlines that provided two-stop/on-line double connecting service as additional competitors. 
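The competitor-counting rule just described can be sketched in a few lines of code. This is only an illustration of the rule as stated in the text; the carrier names, market shares, and the numeric encoding of service levels are hypothetical assumptions, not data or code from the report’s actual analysis.

```python
# Illustrative sketch of the competitor-counting rule described above.
# Carrier names and shares are hypothetical examples, not report data.

def count_competitors(services, share_threshold=0.10):
    """Count the competitors in a market.

    services: list of (carrier, service_level, share_of_flights) tuples,
    where service_level is 0 for nonstop, 1 for one-stop/on-line single
    connecting, and 2 for two-stop/on-line double connecting service.
    A carrier counts as a competitor if its share of available flights
    is at least the 10 percent threshold and its service is no more than
    one level below the best level of service offered in the market.
    """
    best = min(level for _, level, _ in services)
    return sum(
        1
        for _, level, share in services
        if share >= share_threshold and level <= best + 1
    )

# Example market: nonstop is the best level of service, so nonstop and
# one-stop carriers count; the two-stop carrier does not.
market = [
    ("Carrier A", 0, 0.45),  # nonstop
    ("Carrier B", 0, 0.30),  # nonstop
    ("Carrier C", 1, 0.15),  # one-stop/on-line single connecting
    ("Carrier D", 2, 0.10),  # two-stop; below the competitive level
]
print(count_competitors(market))  # prints 3
```

In a market whose best level of service is one-stop, the same function would also count two-stop carriers, matching the rule in the text.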
To determine the key ways in which commercial aviation between the United States and the EU could be changed by the Court of Justice decision, we interviewed officials from DOT, State, DG TREN, the European Union Directorate General for Competition (DG COMP), France, Germany, the Netherlands, and the United Kingdom, as well as from U.S. and EU airlines, EU airports, and EU aviation trade associations. We discussed the implications for European airlines of changes to the nationality clause in existing Open Skies agreements. We also discussed congested airport facility access and environmental regulations to better understand carrier access to EU airports. We reviewed reports recommended by aviation authorities from the European Commission, Germany, Britain, and the Netherlands. Finally, we discussed with EU officials the EU process for airline certification, establishment, and operations. To analyze the potential effect of removing the nationality clause restrictions on consumers and airlines, we interviewed officials from DOT, State, DG TREN, DG COMP, France, Germany, the Netherlands, and the United Kingdom, as well as from U.S. and EU airlines, U.S. and EU airports, and EU aviation trade associations. We also conducted a review of existing research and analyzed airport capacity and demand data from Airport Coordination Limited. These data contain the number of slots available at London’s Heathrow airport and the demand for these slots by airlines. Based on logical tests for obvious errors of completeness and accuracy, we determined that the data were sufficiently reliable for our purposes. To analyze the potential labor effects, we interviewed officials from major U.S. and EU airlines, U.S. and EU labor unions, the EU Directorate General for Employment, labor research organizations, and U.S. and EU agencies. We also conducted a review of existing research and analyzed data from DOT’s “Form 41” database. 
This database contains financial information that large air carriers are required by regulation to submit to DOT (see 49 C.F.R. Sec. 241). Airlines submit financial and operating statistics to DOT monthly, quarterly, semiannually, and annually. To facilitate analysis of these data, we contracted with BACK, an aviation-consulting firm. BACK obtains the DOT data and makes any necessary adjustments to these data to improve their accuracy. To determine the reliability of DOT’s Form 41 data and BACK’s adjustments, we (1) reviewed existing documentation from DOT and BACK about the data and the systems that produced them, (2) interviewed knowledgeable agency and company officials, and (3) performed electronic tests of the data. We concluded that the data were sufficiently reliable for the purposes of this report. We also reviewed an industry survey of pilot contracts that included wages and benefits for pilots of different seniority levels, by airline. The Association of European Airlines (AEA) provided employment information for 15 member airlines. Based on interviews with knowledgeable AEA officials and logical tests for obvious errors of completeness and accuracy, we determined that the data were sufficiently reliable for our purposes. We conducted our work from October 2003 through July 2004 in accordance with generally accepted government auditing standards. Currently, there are generally considered to be nine freedoms of the air. Although these operations are called "freedoms," they are not necessarily available to an airline. Most nations of the world exchange first and second freedoms through the International Air Services Transit Agreement. The other freedoms, to the extent that they are available, are usually exchanged between countries in bilateral or multilateral air services agreements. The eighth and ninth freedoms (cabotage) have been exchanged only in limited instances. (U.S. law currently prohibits cabotage operations.) 
In addition, airlines are often required to have an operating license to exercise the rights that are available. Fifth Freedom - The right to enplane traffic at one foreign point and deplane it in another foreign point as part of continuous operation also serving the airline's homeland (e.g., Northwest Airlines has "fifth freedom" rights to carry traffic between Tokyo (B) and Hong Kong (C), on services which stop at Tokyo (B) en route between Los Angeles (A) and Hong Kong (C)). Sixth Freedom - This term is applied to fifth freedom traffic carried from a point of origin in one foreign country to a point of destination in another foreign country via the home country of the airline (e.g., KLM carries sixth freedom traffic between New York (A) and Cairo (C), carrying passengers traveling from New York (A) to Amsterdam (B) and on to Cairo (C)). Seventh Freedom - This term is applied to an airline's operating turnaround service and carrying traffic between points in two foreign countries without serving its home country (e.g., Lufthansa operates between New York (A) and Mexico City (C) without serving Germany (B)). Eighth Freedom - This term is used to refer to "consecutive or fill-up" cabotage in which an airline picks up traffic at one point in a foreign country and deplanes it at another point in that same foreign country as part of a service from the home country of the airline (e.g., Singapore Airlines enplanes traffic at Wellington (A) and deplanes it in Auckland (B) as part of its service between New Zealand and Singapore (C)). Ninth Freedom - This term is used to refer to "pure" cabotage in which an airline of one country operates flights and carries traffic solely between two points in a foreign country (e.g., Air France operates flights between Berlin (A) and Frankfurt (B)). While U.S. 
cargo carriers have also benefited from the 15 Open Skies agreements the United States has signed with EU member nations, the European Court of Justice ruling also affects these carriers. Since 1990, U.S. cargo carriers have experienced a significant increase in volume and operations with the development of a large hub-and-spoke network. With a new agreement that extended the Open Skies framework, U.S. cargo carriers would also likely gain additional traffic rights into markets that are currently restricted. However, there are some cargo carrier issues that are not directly linked to the nationality clause, and these issues do present concerns to U.S. cargo carriers. Because U.S. cargo carriers rely heavily on night operations, attempts by local communities in EU member states to impose additional restrictions on night flight operations could have an effect on U.S. cargo carriers. Similar to the increase in passenger service, cargo service also experienced a significant increase in volume and operations since the inception of Open Skies agreements. Freighter operations for all carriers flying between the United States and EU increased more than 75 percent from 1990 levels to over 20,000 flights in 2002 (see fig. 12). Using “fifth freedom” rights provided by Open Skies, FedEx and United Parcel Service (UPS)—the largest U.S. all-cargo carriers—expanded operations through hub-and-spoke networks in Paris and Cologne. FedEx currently operates three daily flights from the United States to Paris, one to Frankfurt, and one to London (Stansted Airport); UPS operates five daily flights from the United States into its European hubs (Cologne, Germany; East Midlands, United Kingdom; and Paris, France). These increased freighter operations carried more than double the 1990 freight and mail volumes, so that by 2002 freighters carried over 2.5 billion pounds between the United States and the EU. With the removal of the nationality clause restrictions, U.S. 
airlines would gain “fifth freedom rights” (i.e., the right to operate flights from the United States to an EU country and then beyond to another country) in EU member states with restrictive bilateral agreements. For example, under “fifth freedom rights,” FedEx is able to transport cargo from Memphis to Paris, deposit some or all of it in Paris, and then pick up new cargo and fly it to Frankfurt. While all U.S. cargo carriers have fifth freedom rights under current Open Skies agreements, under the more restrictive bilateral agreements, such as the agreement with the United Kingdom, these rights are limited. UPS and FedEx make extensive use of fifth freedom rights in many of the EU countries where they have such rights. Under the Bermuda 2 agreement, when U.S. cargo carriers operate a flight from the United States that stops both in the United Kingdom and an airport in continental Europe, that flight is restricted from using some fifth freedom rights to pick up cargo in the United Kingdom and transport it on to the continental destination. For example, FedEx schedules a daily flight from its hub in Newark to its hub in the United Kingdom, and this flight then continues on to Paris. Under the current agreement, FedEx is allowed to drop off cargo in the United Kingdom, but it is not allowed to pick up cargo in the United Kingdom and transport it to Paris. Instead, the plane must travel to France only partly loaded (that is, with the Paris-bound cargo that originated in the United States). Cargo that FedEx receives in the United Kingdom for shipment to Europe must be shipped using a separate charter service. This increases FedEx’s operating costs. A new agreement would remove these restrictions and allow FedEx to utilize its network more efficiently. Increased restrictions on night flight operations could potentially adversely affect the ability of U.S. cargo carriers to operate. 
At a number of EU airports, local communities are seeking, for environmental reasons, to restrict the extent of night operations. At Frankfurt, for example, German officials are attempting to ban all nighttime operations at the Frankfurt airport and have cargo carriers move their operations to the Hahn airport—about 80 miles away. Restrictions on these nighttime operations would compromise U.S. cargo carriers’ operations. If cargo carriers need to limit their nighttime operations or move them to other locations, the impact on U.S. cargo carriers could be significant, since they have invested substantial financial resources to develop their distribution networks and airport facilities. For example, FedEx has invested over $200 million to develop its operations at the Charles de Gaulle, Stansted, and Frankfurt airports. UPS also has invested significant sums in its facilities at Cologne, Germany. Therefore, changes in night flight regulations could effectively devalue these investments by reducing the ability of these companies to fully utilize their networks and facilities. In particular, if FedEx were forced to relocate its Frankfurt operations to Hahn, the value of its Frankfurt facilities would be diminished. FedEx officials indicated that if a night ban were enacted, it would limit their ability to expand future operations. In addition, if an airport restricted aircraft from landing after midnight, it would force U.S. cargo carriers to eliminate either late pickups or early deliveries. Both FedEx and UPS highlighted this as a significant competitive disadvantage. The EU has little direct influence over local attempts to impose such restrictions, as such actions remain a local- and country-specific matter. Although the EU issued a directive in 2002 on the establishment of rules and procedures with regard to noise-related operational restrictions at EU airports, all actual regulations are established and implemented by the individual member states. 
The EU directive supports the ICAO “balanced approach,” which outlines a standard set of procedures for establishing aircraft noise regulations. This approach and the EU directive attempt to harmonize the procedures used by individual member states. However, since the pressure to restrict night operations usually originates with local communities surrounding airports, local governments have enacted night flight restrictions in compliance with local citizen demands or local government regulations. Therefore, noise restrictions can vary greatly between member states. If aviation stakeholders feel that member states have not followed the procedures established under the ICAO “balanced approach,” they can appeal to the EU. If the EU rules in favor of the aviation stakeholder, the member state is required to amend the regulation. However, according to EU officials, under EU law, during the infringement proceedings, there is no requirement for the EU member states to stop or delay these noise regulations. Officials stated that, while the EU does not have any enforcement mechanisms and the infringement procedure is more of a political pressure tool, the EU treaty does provide for accelerated procedures once the procedure is placed on the Court of Justice agenda. A major consolidation event—the recently approved Air France-KLM merger—is already occurring within the framework of existing Open Skies agreements. This merger involves two airlines owned and controlled by national citizens from countries that had both signed Open Skies agreements with the United States. It has been structured in such a way as to protect the traffic rights granted under each airline’s bilateral agreement. This meant that the merger needed to include a series of corporate governance and ownership adjustments not normally found in a traditional merger. 
For example, the merger includes a 3-year transitional shareholding structure that will ensure that majority ownership of Air France remains with French citizens and that majority ownership of KLM remains with Dutch citizens. The basic structure attempts to preserve the brands and identity of each airline by establishing a French holding company, Air France-KLM, which will own 100 percent of the economic rights for both Air France and KLM. To protect KLM’s traffic rights, Air France-KLM will control only 49 percent of the voting rights, with 51 percent being held by Dutch foundations and the Dutch government. After 3 years, the Air France-KLM holding company will own 100 percent of both airlines. European officials believe that the EU’s February 2004 approval of the Air France-KLM merger signals the start of consolidation of the European aviation industry. If a new U.S.-EU agreement eliminates the nationality clause restrictions, the need to structure mergers to protect traffic rights across the north Atlantic will likely be eliminated. Mergers among major European airlines will inevitably raise questions about how existing global alliances will be affected. Because of these airlines’ alliances with U.S. partners, such mergers will also affect U.S. airlines and consumers. Air France and KLM are in separate alliances with different major U.S. airlines (Delta and Northwest, respectively), and DOT has granted antitrust immunity to both of these alliances. In addition, Delta, Northwest, and Continental agreed to a major domestic code-sharing partnership in 2002, which was permitted with certain conditions by DOT. In addition to those named above, Amy Anderson, David Hooper, Jason Kelly, Joseph Kile, Grant Mallie, Sara Ann Moessbauer, Tim Schindler, Stan Stenersen, John Trubey, and Matt Zisman made key contributions to this report. 
The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select “Subscribe to Updates.”
Transatlantic airline operations between the United States and European Union (EU) nations are currently governed by bilateral agreements that are specific to the United States and each EU country. Since 1992, the United States has signed so-called "Open Skies" agreements with 15 of the 25 EU countries. A "nationality clause" in each agreement allows only those airlines designated by the signatory countries to participate in their transatlantic markets. In November 2002, the European Court of Justice ruled that existing Open Skies agreements were illegal under EU law, in part because their nationality clauses discriminated against airlines of other EU nations. The United States and the EU have been negotiating revisions to these agreements. Experts agree that removing the nationality clause is central to any new agreement. GAO was asked to report on (1) the prevalence of Open Skies agreements and their effects on airlines and consumers, (2) the key ways in which commercial aviation between the United States and the EU could be changed by the Court of Justice decision, and (3) how the elimination of nationality clause restrictions might affect airlines and consumers. GAO's work included both analyzing data on transatlantic air service and evaluating information from and positions of industry officials, subject-matter experts, and stakeholder groups. GAO is making no recommendations. Open Skies agreements have benefited airlines and consumers. Airlines benefited by being able to create integrated alliances with foreign airlines. Through such alliances, airlines connected their networks with those of their partners (e.g., through code-sharing agreements), expanded the number of cities they could serve, and increased passenger traffic. Consumers benefited by being able to reach more destinations with this "on-line" service, and from additional competition and lower prices. 
GAO's analysis found that travelers have a choice of competitors in the majority of the combinations of U.S.-EU destinations (such as Kansas City-Berlin). The Court of Justice decision could alter commercial aviation in four key ways. First, it would essentially create one Open Skies agreement for the United States and EU, thereby extending U.S. airline access to markets that are now restricted under traditional bilateral agreements. Notably, more U.S. airlines would gain legal access to London's Heathrow airport, which is restricted by the U.S. agreement with the United Kingdom. Second, it would also allow EU airlines to operate into the United States from airports outside their own countries. Third, for EU airlines, a revised agreement could alleviate some obstacles to merging with other EU carriers or creating subsidiary operations in other countries. Finally, the possibility that EU airlines might move some operations into other EU nations raises concerns about which EU nations' regulatory and legal systems would govern. U.S. airlines and consumers are likely to benefit from the elimination of the nationality clause, but the benefits may not be realized in the near term. Both U.S. consumers and airlines would benefit from gaining access to markets restricted under bilateral agreements, especially London's Heathrow airport, though capacity considerations there are likely to postpone and limit such access. Consolidation within the EU aviation industry could occur, with the effect on U.S. consumers varying, depending on whether consolidation creates additional competition or reduces it in particular markets. EU airlines could begin new transatlantic service in countries other than the airline's own, which would provide consumers with additional competitive choices. However, those airlines would likely face difficulties in competing successfully at another airline's hub.
The Army Guard is the oldest component of any of the uniformed services. It traces its roots to the colonial militia and claims a “birth” of 1636. Today, the Army Guard exists in 54 locations that include all 50 states, the District of Columbia, and three territories: Guam, the Virgin Islands, and Puerto Rico. There are about 2,300 Army Guard units within these locations and over 350,000 Army Guard members. During peacetime, Army Guard units report to the adjutants general of their states or territories, or in the case of the District of Columbia, to the Commanding General. Each adjutant general reports to the governor of the state, or in the case of the District of Columbia, the mayor. At the state level, the governors have the ability, under the Constitution of the United States, to call up members of the Army Guard in times of domestic emergency or need. The Army Guard’s state mission is perhaps the most visible and well known. Army Guard units battle fires or help communities deal with floods, tornadoes, hurricanes, snowstorms, or other emergency situations. In times of civil unrest, the citizens of a state rely on the Army Guard to respond, if needed. During national emergencies, however, the President has the authority to mobilize the Army Guard, putting them in federal duty status. While federalized, the units answer to the Combatant Commander of the theater in which they are operating and, ultimately, to the President. Even when not federalized, the Army Guard has a federal mission to maintain properly trained and equipped units, available for prompt mobilization for war, national emergency, or as otherwise needed. Nonfederalized Army Guard members’ pay and allowances are paid with state funds while federalized Army Guard members’ pay and allowances are paid with federal funds. 
Typically, Army Guard members enlist for 8 years and are entitled to a number of benefits while serving in the Army Guard, including those for health care, life insurance, and other state-specific benefits. After their enlistment periods, former Army Guard members are entitled to veterans’ benefits, such as veterans’ health care and burial benefits. Army Guard members are required to attend one drill weekend each month and one annual training period (usually 2 weeks in the summer) each year. Initially, all nonprior service personnel are required to attend initial entry training, also known as Basic Training. After Basic Training, soldiers go to their Advanced Individual Training, which teaches them the special skills they will need for their jobs in the Army Guard. This training can usually be scheduled to accommodate civilian job or school constraints. The Army Guard has armories and training facilities in more than 2,800 communities. The Army Guard is a partner with the active Army and the Army Reserves in fulfilling the country's military needs. The National Guard Bureau (NGB) assists the Army Guard in this partnership. NGB is a joint bureau of the Departments of the Army and the Air Force and is charged with overseeing the federal functions of the Army Guard and the Air Guard. In this capacity, NGB helps the Army Guard and the Air Guard procure funding and administer policies. NGB also acts as a liaison between the Departments of the Army and Air Force and the states. All Army forces are integrated under DOD’s “total force” concept. DOD’s total force concept is based on the premise that it is not practically feasible to maintain active duty forces sufficient to meet all possible war contingencies. Under this concept, DOD’s active and reserve components are to be blended into a cohesive total force to meet a given mission. 
On September 14, 2001, the President declared a national emergency as a result of the terrorist attacks on the World Trade Center and the Pentagon and the continuing and immediate threat of further attacks on the United States. Concurrent with this declaration, the President authorized the Secretary of Defense to call troops to active duty pursuant to 10 U.S.C. Section 12302. The Secretary of Defense delegated to the Secretary of the Army the authority to order Army Guard soldiers to active duty as part of the overall mobilization effort. Approximately 93,000 Army Guard soldiers were activated as of March 2003. At that time, Army Guard soldiers accounted for 34 percent of the total reserve components mobilized in response to the terrorist attacks on September 11, 2001. The active duty federal missions established in response to the September 2001 national emergency were categorized into two operations: Operation Enduring Freedom and Operation Noble Eagle. In general, missions to fight terrorism outside the United States were categorized under Operation Enduring Freedom, while missions to provide domestic defense were categorized as Operation Noble Eagle. For example, Army Guard soldiers participated in direct combat in Afghanistan under Operation Enduring Freedom. U.S. homeland security missions, such as guarding the Pentagon, airports, nuclear power plants, domestic water supplies, bridges, tunnels, and other military assets were conducted under Operation Noble Eagle. The Army Guard also supported federal peacekeeping operations in Southwest Asia with Operation Desert Spring and in Kosovo with Operation Joint Guardian under various other military operations. While on active duty, all Army Guard soldiers earn various statutorily authorized pays and allowances. 
The types of pay and allowances Army Guard soldiers are eligible to receive vary depending upon rank and length of service, dependency status, skills and certifications acquired, duty location, and the difficulty of the assignment. While Army Guard soldiers mobilized to active duty may be entitled to receive additional pays and allowances, we focused on 14 basic types of pays and allowances applicable to the Army Guard units we selected for case studies. As shown in table 1, we categorized these 14 pay and allowance types into two groups: (1) pays, including basic pay, special duty assignment pay, parachute jumping and foreign language proficiency skill-based pays, and location-based hostile fire and hardship duty pays and (2) allowances, including allowances for housing, subsistence, family separation, and cost of living for the continental United States. In addition, Army Guard soldiers may be eligible for tax advantages associated with their mobilization to active duty. That is, mobilized Army Guard soldiers assigned to or working in a combat zone are entitled to exclude from taxable income certain military pay that would otherwise be taxable. As shown in figure 1, there are three key phases associated with starting and stopping relevant pays and allowances for mobilized Army Guard soldiers: (1) initial mobilization (primarily through the Soldier Readiness Processing), (2) deployment, which includes carrying out assigned mission operations while on active duty, and (3) demobilization. Army Guard units and state-level command support components, as well as active Army finance components and DFAS, have key roles in this process. 
In addition, there are five key computer systems involved in authorizing, entering, and processing active duty pays to mobilized Army Guard soldiers through the three key phases of their mobilization: the Army’s standard order writing system, the Automated Fund Control Order System (AFCOS); the Army Guard’s personnel system, the Standard Installation Division Personnel Reporting System (SIDPERS); the Army Guard’s pay input system, the JUMPS Standard Terminal Input System (JUSTIS); the active Army’s pay input system, the Defense Military Pay Office System (DMO); and DFAS’ Army Guard and Reserve pay system, DJMS-RC. During the initial mobilization, units receive an alert order and begin a mobilization preparation program, Soldier Readiness Processing (SRP). The financial portion of the SRP is conducted by one of the 54 United States Property and Fiscal Offices (USPFO) to verify the accuracy of pay records for each soldier and to make changes to pay records based on appropriate supporting documentation for the pays and allowances that the soldiers will be entitled to receive when initially mobilized. If documentation, such as birth certificates for dependents or parachute jumping certifications, is missing, soldiers have a few days to obtain the necessary documents. The unit commander is responsible for ensuring that all personnel data for each soldier under his or her command are current. When the unit receives a mobilization order, USPFO pay technicians are responsible for initiating basic pay and allowances by manually entering the start and stop dates into DJMS-RC for the active duty tour that appears on each soldier’s mobilization order. Army Guard pay technicians use JUSTIS to access and record data in DJMS-RC. By entering the soldier’s Social Security number and mobilization order number into JUSTIS, the pay technician can view the pay data in DJMS-RC, ensure that they are complete, and enter any missing data supported by documentation provided by the soldier. 
If done correctly, soldiers will start to receive basic pay, basic allowances for housing, basic allowances for subsistence, and jump pay automatically based on the start date entered into DJMS-RC. After soldiers complete their initial SRP and receive individual mobilization orders, they travel as a unit to a mobilization station. At the mobilization station, mobilized Army Guard personnel undergo a second SRP review. In this second SRP, mobilization station personnel are responsible for confirming or correcting the results of the first SRP, including making necessary reviews to ensure that each soldier’s records are current. Mobilization pay technicians are required to promptly initiate pays that were not initiated during the first SRP and enter appropriate pay changes into DJMS-RC. At the end of this process, the mobilization station commander is required to certify that the unit is ready for mobilization, including ensuring that all authorized active duty pays are in place for the soldiers in the unit. DJMS-RC will generate certain pays and allowances automatically for each 2-week pay period until the stop date entered in DJMS-RC. If entered correctly, the stop date in DJMS-RC will be the end of active duty tour date documented on the soldier’s mobilization orders. This automated feature is intended to prevent erroneous payments to soldiers beyond their authorized active duty status. However, human intervention is required when a pay or allowance error is detected or an event occurs that requires a change in the soldier’s pay and personnel file. For example, a change in dependent status, such as marriage or divorce, a promotion, jump pay disqualification, or demobilization before an active duty tour ends would change or eliminate some of the pays and allowances a soldier would be entitled to receive. 
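The automated start/stop feature described above can be illustrated with a short date calculation: pays generate for each 2-week period from the tour start date and cease at the stop date taken from the mobilization order. The function below is a hypothetical sketch of that safeguard, not actual DJMS-RC logic.

```python
# Hypothetical sketch of the stop-date safeguard described above: pays are
# generated automatically for each 2-week period from the tour start date,
# and nothing is generated past the stop date entered from the
# mobilization order.
from datetime import date, timedelta

def pay_periods(start, stop):
    """Yield (period_start, period_end) 2-week pay periods within the tour."""
    period_start = start
    while period_start <= stop:
        period_end = min(period_start + timedelta(days=13), stop)
        yield period_start, period_end
        period_start = period_end + timedelta(days=1)

# A 180-day tour produces 13 pay periods, the last one partial; no period
# extends past the stop date, which is what prevents payments beyond the
# authorized active duty tour.
tour = list(pay_periods(date(2002, 1, 1), date(2002, 6, 29)))
print(len(tour), tour[-1][1])  # 13 2002-06-29
```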
All pays and allowances and subsequent changes are documented in the Master Military Pay Account (MMPA)—the central pay record repository in DJMS-RC for each soldier. While soldiers are deployed on active duty, several Army Guard (USPFO), active Army, and DFAS components are involved in paying them. The active Army servicing finance office, which may be within the United States or in a foreign country, is responsible for initiating pays earned while the soldier is deployed, such as hostile fire pay and hardship duty pay. Pay technicians start hostile fire pay for each soldier listed on a battle roster or flight manifest. Thereafter, hostile fire pay is automatically generated each pay period. Other location-based pays, such as hardship duty, require pay transactions each month. The servicing finance office for the deployed phase is under the jurisdiction of the active Army. Active Army servicing finance offices use DMO to enter pay transactions into DJMS-RC. Under certain conditions, either active Army pay servicing offices or USPFOs can process applicable pay-altering transactions, such as those related to a soldier’s early separation from active duty or a soldier’s death. Upon completion of an active duty tour, soldiers normally return to the same Army locations from which they were mobilized for demobilization out-processing before returning to their home units. Demobilization personnel, employed by the active Army or Army Guard, are required to provide each soldier with a Release from Active Duty (REFRAD) order and a Form DD 214, Certificate of Release or Discharge from Active Duty. The demobilization station pay technicians are to use these documents as a basis for deactivating the soldier’s active duty pay and allowances as of the date of release from active duty. At this time, the supporting USPFO is responsible for discontinuing monthly input of all nonautomated pays and allowances. 
If the demobilization station did not take action to return a soldier to a demobilized status, the state USPFO has this responsibility. In 1995, the Army decided to process pays to mobilized Army Guard soldiers through DJMS-RC rather than the active Army payroll system previously used to pay mobilized Army Guard soldiers. According to the then Deputy Assistant Secretary of the Army (Financial Operations), this decision was made as an interim measure (pending the conversion to a single system to pay both active and reserve component soldiers) based on the belief that DJMS-RC provides the best service to the reserve component soldiers. DJMS-RC is a large, complex, and sensitive payroll computer application used to pay Army and Air National Guard and Army and Air Force Reserve personnel. DFAS has primary responsibility for developing guidance and managing operations of the system. DFAS Indianapolis is the central site for all Army military pay and is responsible for maintaining over 1 million MMPAs for the Army. Each MMPA contains a soldier’s pay-related personnel, entitlement, and performance data. All pay-related transactions that are entered into DJMS-RC, through JUSTIS and DMO, update the MMPA. Personnel data contained in the MMPA are generated from SIDPERS—a personnel database maintained and used by the Army Guard at the 54 state-level personnel offices to capture data on personnel-related actions (e.g., discharge, promotion, and demotion actions that affect soldiers’ pay). DFAS Denver is responsible for designing, developing, and maintaining customer requirements for the Military and Civilian Pay Services business line, and its Technical Support Office designs and maintains the DJMS-RC core pay software. DFAS Indianapolis serves as a “gatekeeper” in that it monitors the daily status of data uploaded to DJMS-RC to ensure that all transactions are received and processed in DJMS-RC. 
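As a rough illustration of the data flow just described, pay transactions entered through the JUSTIS (Army Guard) and DMO (active Army) input subsystems all update the soldier's central MMPA record. The record layout, field names, and function below are hypothetical; only the flow of input subsystem to central pay record reflects the text.

```python
# Hypothetical illustration of the flow described above: pay transactions
# entered through either the JUSTIS or DMO input subsystem update the
# soldier's central MMPA record in DJMS-RC. Field names are invented
# for illustration and do not reflect the actual record layout.

def apply_transaction(mmpa, txn):
    """Apply one pay transaction to a soldier's MMPA record."""
    assert txn["source"] in ("JUSTIS", "DMO"), "unknown input subsystem"
    entry = {"source": txn["source"], "amount": txn["amount"]}
    mmpa.setdefault("entitlements", {})[txn["pay_type"]] = entry
    return mmpa

mmpa = {"soldier_id": "xxx-xx-xxxx", "entitlements": {}}
# Basic pay started by a USPFO pay technician through JUSTIS ...
apply_transaction(mmpa, {"source": "JUSTIS", "pay_type": "basic_pay", "amount": 2000})
# ... and hostile fire pay started in theater through DMO.
apply_transaction(mmpa, {"source": "DMO", "pay_type": "hostile_fire", "amount": 225})
print(sorted(mmpa["entitlements"]))  # ['basic_pay', 'hostile_fire']
```

The point of the sketch is simply that both input paths converge on one record per soldier, which is why DFAS Indianapolis monitors the daily uploads as a "gatekeeper."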
Users can sign on to DJMS-RC directly through online interactive software used for file transfer transactions, online queries of MMPAs, and downloads of data files and various DJMS-RC reports. JUSTIS is the pay input subsystem used by the 54 state-level Army Guard commands, including the USPFOs, to update DJMS-RC. Database management of JUSTIS is decentralized in that each of the 54 sites owns and maintains its own JUSTIS database. This subsystem processes transactions for submission to DJMS-RC to create payments for Army National Guard soldiers. JUSTIS receives certain pay-affecting personnel data from SIDPERS and a limited amount of mobilization order data directly from AFCOS. These systems share the same operating system platform and certain database tables. However, additional data needed to create pay transactions associated with active duty pay and allowances must be entered manually into JUSTIS from hard copies of mobilization orders. DMO is the pay input subsystem used by active Army finance offices and the DOD military pay offices, including those in overseas locations such as Europe, Korea, and Iraq, to update DJMS-RC. Active Army finance offices can use this subsystem to create transactions for military pay and allowances that are not reported at the time of mobilization for upload to DJMS-RC, and to enter location-based pays, such as hostile fire and hardship duty pays, and combat zone tax exclusion transactions. We found significant pay problems at the six Army Guard units we audited. These problems related to processes, human capital, and systems. 
The six units we audited, including three special forces and three military police units, were as follows:

Special forces units
· Colorado: B Company, 5th Battalion, 19th Special Forces
· Virginia: B Company, 3rd Battalion, 20th Special Forces
· West Virginia: C Company, 2nd Battalion, 19th Special Forces

Military police units
· Mississippi: 114th Military Police Company
· California: 49th Military Police Headquarters and Headquarters
· Maryland: 200th Military Police Company

In addition, we conducted a limited review of the pay experiences of a seventh unit mobilized more recently and deployed to Iraq in April 2003— the Colorado Army Guard’s 220th Military Police Company—to determine the extent to which the pay problems we found in our six case study units persisted. As shown in figure 2, these units were deployed to various locations in the United States and overseas in support of Operations Noble Eagle and Enduring Freedom. These units were deployed to help perform a variety of critical mission operations, including search and destroy missions in Afghanistan against Taliban and al Qaeda forces, guard duty for al Qaeda prisoners in Cuba, providing security at the Pentagon shortly after the September 11, 2001, terrorist attacks, and military convoy security and highway patrols in Iraq. For the six units we audited, we found significant pay problems involving over one million dollars in errors. These problems consisted of underpayments, overpayments, and late payments that occurred during all three phases of Army Guard mobilization to active duty. Overall, for the 18-month period from October 1, 2001, through March 31, 2003, we identified overpayments, underpayments, and late payments at the six case study units estimated at $691,000, $67,000, and $245,000, respectively. In addition, for one unit, these pay problems resulted in largely erroneous debts totaling $1.6 million. 
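The dollar figures above can be tallied as a quick consistency check; all amounts below are the report’s own estimates, and nothing here is new data.

```python
# Error estimates for the six case study units, Oct 1, 2001 - Mar 31, 2003
# (amounts as stated in the report).
overpayments = 691_000
underpayments = 67_000
late_payments = 245_000

total_errors = overpayments + underpayments + late_payments
print(total_errors)  # 1003000 -> "over one million dollars in errors"

# The one unit's largely erroneous debts: 34 soldiers billed an average of
# about $48,000 each is consistent with the roughly $1.6 million total cited.
approx_debt = 34 * 48_000  # 1,632,000
```

The $691,000, $67,000, and $245,000 estimates sum to just over $1 million, matching the characterization in the text, and the per-soldier average of about $48,000 across 34 soldiers is consistent with the approximately $1.6 million in erroneous debts at the one unit.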
Overall, we found that 450 of the 481 soldiers from our case study units had at least one pay problem associated with their mobilization to active duty. Table 2 shows the number of soldiers with at least one pay problem during each of the three phases of active duty mobilization. Due to the lack of supporting documents at the state, unit, and battalion levels, we may not have identified all of the pay problems related to the active duty mobilizations of these units. We have provided documentation for the pay problems we identified to appropriate DOD officials for further research to determine whether additional amounts are owed to the government or the soldiers. The payment problems we identified at the six case study units did not include instances of fraudulent payments, which were a major finding of the follow-up investigation of improper payments identified in our 1993 audit of Army military payroll. Nonetheless, we found that the inaccurate, late, and missing pays and associated erroneous debts found during our current audit had a profound financial impact on individual soldiers and their families. Some of the pay problems we identified included the following. DOD erroneously billed 34 soldiers in a Colorado National Guard Special Forces unit an average of $48,000 each. Though we first notified DOD of these issues in April 2003 and sent a follow-up letter in June 2003, the largely erroneous total debt for these soldiers of about $1.6 million remained unresolved at the end of our audit in September 2003. As a result of confusion over responsibility for entering transactions associated with a Colorado soldier’s promotion, the soldier’s spouse had to obtain a grant from the Colorado National Guard to pay bills while her husband was in Afghanistan. Some soldiers did not receive payments for up to 6 months after mobilization and others still had not received certain payments by the conclusion of our audit work. 
Ninety-one of 100 members of a Mississippi National Guard military police unit that was deployed to Guantanamo Bay, Cuba, did not receive the correct amount of Hardship Duty Pay. One soldier from the Mississippi unit was paid $9,400 in active duty pay during the 3 months following an early discharge for drug-related charges. Forty-eight of 51 soldiers in a California National Guard military police unit received late payments because the unit armory did not have a copy machine available to make copies of needed pay-related documents. Four Virginia Special Forces soldiers who were injured in Afghanistan and unable to resume their civilian jobs experienced problems in receiving entitled active duty pays and related health care. In some cases, the problems we identified may have distracted these professional soldiers from mission requirements, as they spent considerable time and effort while deployed attempting to address these issues. Further, these problems may adversely affect the Army’s ability to retain these valuable personnel. Appendixes I–VI provide details of the pay experiences of the soldiers at the case study units we audited. Procedural requirements, particularly in light of the potentially hundreds of organizations and thousands of personnel involved, were not well understood or consistently applied with respect to determining (1) the actions required to make timely, accurate active duty pays to mobilized Army Guard soldiers and (2) the component responsible, among Army Guard, active Army, and DFAS, for taking the required actions. Further, we found instances in which existing guidance was out of date—some of which still reflected practices in place in 1991 during Operation Desert Storm. These complex, cumbersome processes, which were developed in piecemeal fashion over a number of years, provide numerous opportunities for control breakdowns. 
We found that a substantial number of payment errors were caused, at least in part, by unclear procedural requirements for processing active duty pay and allowance entitlements to mobilized Army Guard soldiers. Overall, as shown in figures 3, 4, and 5, we found that an extensive, cumbersome, and labor-intensive process has evolved to pay mobilized Army Guard soldiers for their active duty service. While figures 3, 4, and 5 provide an overview of the process, particularly of the types of DOD organizations involved, they do not fully capture the numbers of different DOD components involved. Specifically, thousands of Army Guard (individual units and state-level organizations), active Army, and DFAS components may be involved in authorizing, processing, and paying mobilized Army Guard soldiers, including
· an estimated 2,300 local Army Guard home units, unit commanders, and unit administrators that are involved in maintaining up-to-date soldier personnel and related pay records;
· 54 state-level Army Guard commands, including both USPFOs and state-level personnel offices involved in authorizing and starting active duty pay transactions;
· active Army finance offices or DOD Military Pay Offices at over 15 mobilization stations across the United States that are involved in processing Army Guard personnel to and from their active duty locations;
· 28 active Army area servicing finance offices at over 50 locations worldwide that are involved in servicing Army Guard soldiers’ location-based active duty pays;
· DFAS-Indianapolis—the central site for processing Army Guard soldiers’ active duty pays;
· DFAS-Denver—the central site for maintaining the pay system used to pay Army Guard soldiers;
· DFAS-Cleveland—the central site for handling soldier military pay; and
· the Army National Guard Financial Services Center—the Army Guard organization responsible for providing guidance, training, and oversight and coordination for active duty pays to Army Guard personnel. 
Several of these organizations with key roles in payroll payments to mobilized Army Guard soldiers, including DOD, DFAS, Army, and the Army Guard, have issued their own implementing regulations, policies, and procedures. In addition, we found unwritten practices in place at some of the case study locations we audited. Existing written policies and procedures are voluminous—the DOD Financial Management Regulations (FMR) guidance on pay and allowance entitlements alone covers 65 chapters. As a result of their size and continually evolving nature as legal, procedural, and system requirements change, we found that policies and procedures were not well understood or consistently applied across the potentially hundreds of organizations and thousands of personnel involved in paying mobilized Army Guard personnel. These processes have been developed in piecemeal fashion over a number of years to accommodate changing legislative requirements, DOD policies, and the unique operating practices of different DOD organizations and systems involved in these processes. As discussed in the following sections, these extensive and evolving policies and procedures were confusing both across various organizations and personnel involved in their implementation and, more importantly, to the Army Guard soldiers who are the intended beneficiaries. In addition, these cumbersome policies and procedures contributed to the pay errors we identified. We found instances in which unclear procedural requirements for processing active duty pays contributed to erroneous and late pays and allowances to mobilized Army Guard soldiers. For example, we found existing policies and procedural guidance were unclear with respect to the following issues. Amending active duty orders. A significant problem we found at the case study locations we audited concerned procedures that should be followed for amending active duty orders. 
We found instances at two of our case study locations in which military pay technicians at either a USPFO or an active Army finance office made errors in amending existing orders. These errors resulted in establishing virtually all prior pays made under the original orders as debts. A major contributor to the pay errors we found in this area was that existing procedures did not clearly state how USPFO and active Army finance personnel should modify existing order tour start and stop information in the pay system when necessary without also unintentionally adversely affecting previous pays and allowances. Also, these procedures did not warn USPFO and active Army personnel that using alternative methods would automatically result in an erroneous debt assessment and garnishment of up to two-thirds of the soldier’s pay. We identified over $1 million in largely erroneous debt transactions as a result of breakdowns in this area. At the Colorado Special Forces unit, we found that actions taken by the Colorado USPFO in an attempt to amend 34 soldiers’ orders resulted in reversing the active duty pay and allowances the soldiers received for 11 of the 12 months they were deployed on active duty in Afghanistan and instead establishing these payments as debts. These 34 soldiers received notice on their Leave and Earnings Statements that they owed the government an average of approximately $48,000 per soldier, for a total largely erroneous debt of $1.6 million. Although we informed DOD of this problem in April 2003, as of the end of our audit fieldwork in September 2003, the problems at the Colorado Special Forces unit had not been resolved. DOD officials did advise us that, as a result of our work, they implemented a software change on September 18, 2003, intended to help avoid such problems in the future. Specifically, we were told new warning messages have been added to JUSTIS that will appear when a transaction is entered to cancel or amend a tour of duty. 
The new warnings will advise that the transaction will or could result in a collection action and will ask the pay technician to confirm that this is the intent. While we did not verify the effectiveness of this change, it has the potential to reduce pay problems associated with errors made in amending orders. Required time frames for processing pay transactions. Written requirements did not exist with respect to the maximum amount of time that should elapse between the responsible Army Guard or Army pay office’s receipt of proper documentation and its processing of the related pay transaction through the pay system. While some of the locations we audited had established informal processing targets, for example, 3 days, we also found numerous instances in which available documentation indicated lengthy delays in processing pay transactions after pay offices received supporting documentation. These lengthy processing delays resulted in late payroll payments to deployed soldiers. Required monthly reconciliations of pay and personnel data. The case study units lacked specific written requirements for conducting and documenting monthly reconciliations of pay and personnel mismatch reports and unit commanders’ finance reports. Available documentation showed that these controls were either not done or were not done consistently or in a timely manner. Because, as discussed later in this report, the processing of Army Guard pay relies on systems that are not integrated or effectively interfaced, these after-the-fact detective controls are critical to detecting and correcting erroneous or fraudulent pays. To be effective, the 54 state-level Army Guard commands must individually reconcile common data elements in all 54 state-operated personnel databases for Army Guard personnel with corresponding DJMS-RC pay records at least monthly. 
Because of the lack of clarity in existing procedural requirements in this area, we found that several of the locations we visited had established standard but undocumented reconciliation practices. However, at the six case study locations we audited, we found that although all the USPFOs told us they received monthly SIDPERS and DJMS-RC mismatch reports, they did not always fully reconcile and make all necessary system corrections each month. Lacking specific written policies and procedural requirements for such reconciliations, several of the case study locations we audited established a standard, but undocumented, practice of reconciling roughly a third of the common data elements every month, so that all elements were to be reconciled and all necessary corrective actions taken over a 3-month period. However, documentation was not always retained to determine the extent to which these reconciliations were done and if they were done consistently. Our findings are similar to those in reports from Army Guard operational reviews. For example, the results of the most recent reviews at three of the six case study locations we audited showed that state Army Guard personnel were not performing effective reconciliations of pay and personnel record discrepancies each month. One such report concluded, “Failure to reconcile the Personnel/Pay Mismatch listing monthly provides a perfect opportunity to establish fraudulent personnel or pay accounts.” Several of the instances we identified in which soldiers received pay and allowances for many months after their release from active duty likely would have been identified sooner had USPFO military pay personnel investigated the personnel/pay mismatch report discrepancies more frequently. For example, at one case study unit, 34 soldiers received pay for several months past their official discharge dates. 
Although records were not available to confirm that these overpayments were reported as discrepancies on monthly mismatch reports, the USPFO military pay supervisor told us that at the time the mismatch reports were not being used to identify and correct pay-affecting errors. As discussed later, at another case study unit, a mobilized soldier was released from active duty and discharged from the Army in June 2002, earlier than his planned release date due to alleged involvement in drug-related activities. But the soldier continued to receive active duty pay. The soldier’s SIDPERS personnel record was updated on July 2, 2002, to reflect the discharge. According to pay records, the soldier’s pay continued until the USPFO military pay supervisor identified the discrepancy on the September 25, 2002, personnel/pay mismatch report and initiated action that stopped the soldier’s pay effective September 30, 2002. However, because this discrepancy was not identified until late September, the soldier received $9,400 in extra pay following his discharge from the Army. In addition, while, as discussed previously, we found a number of instances in which Army Guard soldiers’ active duty pays continued after their demobilization, available documentation showed only one instance in the six case study units we visited in which a reconciliation of the unit commander’s finance report resulted in action to stop improper active duty pay and allowances. Specifically, available documentation shows that an administrative clerk’s review of this report while the unit was mobilized in Guantanamo Bay, Cuba, resulted in action to stop active duty pay and allowances to a soldier who was previously demobilized. However, it is also important to note that while these reconciliations are an important after-the-fact detective control, they are limited because they can only detect situations in which payroll and personnel records do not agree. 
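The monthly reconciliation described above is essentially a field-by-field comparison of personnel and pay records keyed by soldier identifier. The sketch below is hypothetical; the field names and records are illustrative only, not the actual SIDPERS or DJMS-RC schemas. It also makes the control’s limitation concrete: a record that is wrong in both systems produces no mismatch and so is never flagged.

```python
# Hypothetical sketch of a monthly personnel/pay mismatch reconciliation.
# Field names and records are illustrative only; they are not the actual
# SIDPERS or DJMS-RC data structures.

def mismatch_report(personnel, pay):
    """Compare common data elements of personnel (SIDPERS-like) and pay
    (MMPA-like) records keyed by soldier ID; return discrepancies for
    pay technicians to research."""
    discrepancies = []
    common_fields = ("grade", "duty_status")
    for soldier_id in sorted(set(personnel) | set(pay)):
        p, m = personnel.get(soldier_id), pay.get(soldier_id)
        if p is None or m is None:
            discrepancies.append((soldier_id, "record missing in one system"))
            continue
        for field in common_fields:
            if p[field] != m[field]:
                discrepancies.append((soldier_id, field))
    return discrepancies

personnel = {
    "A1": {"grade": "E-7", "duty_status": "discharged"},  # discharge recorded
    "A2": {"grade": "E-5", "duty_status": "active"},      # stale in BOTH systems
}
pay = {
    "A1": {"grade": "E-7", "duty_status": "active"},      # pay still running
    "A2": {"grade": "E-5", "duty_status": "active"},
}

report = mismatch_report(personnel, pay)
# Soldier A1's discharge surfaces as a duty_status mismatch for follow-up;
# soldier A2, whose record is equally stale in both systems, never appears.
```

Soldier A1’s overpayment is caught because the two systems disagree; soldier A2 illustrates why this detective control cannot, by itself, catch errors where neither record was updated.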
A number of pay errors we identified resulted from the fact that neither personnel nor pay records were updated. Soldiers returning from deployments earlier than their units. For four of our case study units, we found instances in which Army Guard soldiers’ active duty pays were not stopped at the end of their active duty tours when they were released from active duty earlier than their units. We found procedural guidance did not clearly specify how to carry out assigned responsibilities for soldiers who return from active duty earlier than their units. DFAS-Indianapolis guidance provides only that “the supporting USPFO will be responsible for validating the status of any soldier who does not return to a demobilized status with a unit.” The guidance did not state how the USPFO should be informed that a soldier did not return with his or her unit, or how the USPFO was to take action to validate the status of such soldiers. At one of our case study locations, officials at the USPFO informed us that they became aware that a soldier had returned early from a deployment when the soldier appeared at a weekend drill while his unit was still deployed. Data input and eligibility requirements for housing and family separation allowances. Our audit work at two of our case study locations indicated that procedural guidance was not clear with respect to transaction entry and eligibility requirements for the basic allowance for housing and the family separation allowance, respectively. For example, during our audit work at one of our case study locations, we determined that because of inconsistent interpretations of existing guidance for “dependents” in entering transactions to start paying soldiers’ basic allowance for housing, a number of Maryland soldiers were not paid the correct amount. 
At another case study location, we found that existing guidance on eligibility determination was misinterpreted so that soldiers were erroneously refused the “single parent soldiers family separation allowance” to which they were entitled. We also found that existing policies and procedures were unclear with respect to organizational responsibilities. Confusion centered principally on pay processing responsibility for Army Guard soldiers as they move from state control to federal control and back again. To be effective, current processes rely on close coordination and communication between state organizations (Army Guard units and state-level commands) and federal organizations (active Army finance locations at mobilization/demobilization stations and at area servicing finance offices). However, we found a significant number of instances in which critical coordination requirements were not clearly defined. Individual Case Illustration: Confusion over Responsibility for Entering Pay Transactions Results in Family Obtaining a Grant to Pay Bills A sergeant incurred pay problems during his mobilization and deployment to Afghanistan in support of Operation Enduring Freedom that caused financial hardship for his family while he was deployed. In this case, the active Army and his state's USPFO were confused as to responsibility for processing pay input transactions associated with a promotion. Specifically, pay input transactions were required for his promotion from a sergeant first class (E-7) to master sergeant (E-8), his demotion back to an E-7, and a second promotion back to an E-8. The end result was that the soldier was overpaid during the period of his demotion. DFAS garnished his wages and collected approximately $1,100 of the soldier's salary. These garnishments reduced the soldier's net pay to less than 50 percent of the amount he had been receiving. 
As a result, the soldier's wife had to obtain a grant of $500 from the Colorado National Guard's Family Support Group to pay bills. DFAS Indianapolis mobilization procedures authorize the Army Guard’s USPFOs and the active Army’s mobilization station and in-theater finance offices to enter transactions for deployed soldiers. However, we found existing guidance did not provide for clear responsibility and accountability between USPFOs and active Army mobilization stations and in-theater servicing finance offices with respect to responsibility for entering transactions while in-theater and terminating payments for soldiers who separate early or who are absent without leave or are confined. For example, at one of our case study locations, we found that this broad authority for entering changes to soldiers’ pay records enabled almost simultaneous attempts by two different pay offices to enter pay transactions into DJMS-RC for the same soldier. As shown in the following illustration, at another case study location we found that, in part because of confusion over responsibility for starting location-based pays, a soldier was required to carry out a dangerous multiday mission to correct these payments. Individual Case Illustration: Difficulty in Starting In-Theatre Pays A sergeant with the West Virginia National Guard Special Forces unit was stationed in Uzbekistan with the rest of his unit, which was experiencing numerous pay problems. The sergeant told us that the local finance office in Uzbekistan did not have the systems up and ready, nor available personnel who were familiar with DJMS-RC. According to the sergeant, the active Army finance personnel were only taking care of the active Army soldiers’ pay issues. 
When pay technicians at the West Virginia USPFO attempted to help take care of some of the West Virginia National Guard soldiers’ pay problems, they were told by personnel at DFAS-Indianapolis not to get involved because the active Army finance offices had primary responsibility for correcting the unit’s pay issues. Eventually, the sergeant was ordered to travel to the finance office at Camp Doha, Kuwait, to get its assistance in fixing the pay problems. As illustrated in the following map, this trip, during which a soldier had to set aside his in-theatre duties to attempt to resolve Army Guard pay issues, proved to be not only a major inconvenience to the sergeant, but was also life-threatening. At Camp Doha (an established finance office), a reserve pay finance unit was sent from the United States to deal with the reserve component soldiers’ pay issues. The sergeant left Uzbekistan for the 4-day trip to Kuwait. He first flew from Uzbekistan to Oman in a C-130 ambulatory aircraft (carrying wounded soldiers). From Oman, he flew to Masirah Island. From Masirah Island he flew to Kuwait International Airport, and from the airport he had a 45-minute drive to Camp Doha. The total travel time was 16 hours. The sergeant delivered a box of supporting documents used to input data into the system. He worked with the finance office personnel at Camp Doha to enter the pertinent data on each member of his battalion into DJMS-RC. After 2 days working at Camp Doha, the sergeant returned to the Kuwait International Airport, flew to Camp Snoopy in Qatar, and from there to Oman. On his flight between Oman and Uzbekistan, the sergeant’s plane took enemy fire and was forced to return to Oman. No injuries were reported. The next day, he left Oman and returned safely to Uzbekistan. 
While guidance that permits both Army Guard and active Army military pay personnel to enter transactions for mobilized Army Guard soldiers provides flexibility in serving the soldiers, we found indications that it also contributed to soldiers being passed between the active Army and Army Guard servicing locations. For example, at another of our case study locations, we were told that several mobilized soldiers sought help in resolving active duty pay problems from the active Army’s mobilization station finance office at Fort Knox and later the finance office at Fort Campbell. However, officials at those active Army locations directed the soldiers back to the USPFO because they were Army Guard soldiers. We also found procedures were not clear on how to ensure timely processing of active duty medical extensions for injured Army Guard soldiers. Army Regulation 135-381 provides that Army Guard soldiers who are incapacitated as a result of injury, illness, or disease that occurred while on active duty for more than 30 consecutive days are eligible for continued health benefits. That is, with medical extension status, soldiers are entitled to continue to receive active duty pays, allowances, and medical benefits while under a physician’s care. At the Virginia 20th Special Forces, B Company, 3rd Battalion, we found that four soldiers were eligible for continued active duty pay and associated medical benefits due to injuries incurred as a result of their involvement in Operation Enduring Freedom. Although these injuries precluded them from resuming their civilian jobs, they experienced significant pay problems as well as problems in receiving needed medical care, in part, as a result of the lack of clearly defined implementing procedures in this area. All four soldiers experienced pay disruptions because existing guidance was not clear on actions needed to ensure that these soldiers were retained on active duty medical extensions. 
One of the soldiers told us, “People did not know who was responsible for what. No one knew who to contact or what paperwork was needed….” As a result, all four have experienced gaps in receiving active duty pay and associated medical benefits while they remained under a physician’s care for injuries received while on their original active duty tour. Individual Case Illustration: Unclear Regulations for Active Duty Medical Extension Four soldiers who were injured while mobilized in Afghanistan for Operation Enduring Freedom told us that customer service was poor and no one was really looking after their interest or even cared about them. These problems resulted in numerous personal and financial difficulties for these soldiers. · “Not having this resolved means that my family has had to make greater sacrifices and it leaves them in an unstable environment. This has caused great stress on my family that may lead to divorce.” · “My orders ran out while awaiting surgery and the care center tried to deny me care. My savings account was reduced to nearly 0 because I was also not getting paid while I waited. I called the Inspector General at Walter Reed and my congressman. My orders were finally cut. In the end, I was discharged 2 weeks before my care should have been completed because the second amendment to my orders never came and I couldn’t afford to wait for them before I went back to work. The whole mess was blamed on the ‘state’ and nothing was ever done to fix it.” · One sergeant was required to stay at Womack, the medical facility at Fort Bragg, North Carolina, while on medical extension. His home was in New Jersey. He had not been home for about 20 months, since his call to active duty. While he was recovering from his injuries, his wife was experiencing a high-risk pregnancy and depended upon her husband’s medical coverage, which was available while he remained in active duty status. 
Even though she lived in New Jersey, she scheduled her medical appointments near Fort Bragg to be with her husband. The sergeant submitted multiple requests to extend his active duty medical extension status because the paperwork kept getting lost. Lapses in obtaining approvals for continued active duty medical extension status caused the sergeant’s military medical benefits and his active duty pay to be stopped several times. He told us that because of gaps in his medical extension orders, he was denied medical coverage, resulting in three delays in scheduling a surgery. He also told us he received medical bills associated with his wife’s hospitalization for the delivery of their premature baby as a result of these gaps in coverage. We found several instances in which existing DOD and Army regulations and guidance in the pay and allowance area are outdated and conflict with more current legislative and DOD guidance. Some existing guidance reflected pay policies and procedures dating back to Operations Desert Shield and Desert Storm in 1991. While we were able to associate pay problems with only one of these outdated requirements, there is a risk that they may also have caused as yet unidentified pay problems. Further, having out-of-date requirements in current regulations may contribute to confusion and customer service issues. For example, the National Defense Authorization Act for Fiscal Year 1998 replaced the basic allowance for quarters and the variable housing allowance with the basic allowance for housing. However, volume 7A, chapter 27 of the DOD FMR, dated February 2002, still refers to the basic allowance for quarters and the variable housing allowance. The act also replaced foreign duty pay with hardship duty pay. Yet, chapter 8 of Army Regulation 37-104-4 (Military Pay and Allowances Policy and Procedures – Active Component) still refers to foreign duty pay. 
Further, current DFAS and Army mobilization procedural guidance directs active Army finance units to use incorrect transaction codes to start soldiers’ hardship duty pays. Effective December 2001, DOD amended the FMR, volume 7A, chapter 17, to establish a new “designated area” hardship duty pay with rates of $50, $100, or $150 per month, depending on the area. However, DFAS guidance dated December 19, 2002, directed mobilization site finance offices to use transaction codes that resulted in soldiers receiving a prior type of hardship duty pay that was eliminated in the December 2001 revisions. At one of our case study locations, we found that because the active Army finance office followed the outdated DFAS guidance for starting hardship duty pays, 91 of 100 Mississippi military police unit soldiers deployed to Cuba to guard al Qaeda prisoners were paid incorrect amounts of hardship duty pay. In addition, Army Regulation 37-104-4, dated September 1994, which was still in effect at the end of our audit work, provides that mobilized Army Guard soldiers are to be paid through the active Army pay system—the Defense Joint Military Pay System-Active Component (DJMS-AC). This procedure, in effect during the mobilizations to support Operations Desert Shield and Desert Storm, was changed in 1995. Specifically, in 1995, it was agreed that Army Guard personnel would no longer be moved to the active duty pay system, DJMS-AC, when mobilized to active duty, but would remain on the DJMS-RC system. Maintaining such outdated references in current policies may have contributed to confusion by USPFO and active Army finance personnel regarding required actions, particularly in light of the extensive set of policies and procedures now in effect in this area. With respect to human capital, we found weaknesses, including (1) insufficient resources allocated to pay processing, (2) inadequate training related to existing policies and procedures, and (3) poor customer service. 
The lack of sufficient numbers of well-trained, competent military pay professionals can undermine the effectiveness of even a world-class integrated pay and personnel system. A sufficient number of well-trained military pay staff is particularly crucial given the extensive, cumbersome, and labor-intensive process requirements that have evolved to support active duty pay to Army Guard soldiers. GAO’s Standards for Internal Control in the Federal Government states that effective human capital practices are critical to establishing and maintaining a strong internal control environment. Specifically, management should take steps to ensure that its organization has the appropriate number of employees, and that appropriate human capital practices, including hiring, training, and retention, are in place and effectively operating. Our audit identified concerns with the numbers of knowledgeable personnel dedicated to entering and processing active duty pays and allowances to mobilized Army Guard soldiers. As discussed previously, both active Army and Army Guard military pay personnel play key roles in this area. Army Guard operating procedures provide that the primary responsibility for administering Army Guard soldiers’ pay as they are mobilized to active duty rests with the 54 USPFOs. These USPFOs are responsible for processing pay for drilling reservists along with the additional surge of processing required for initiating active duty pays for mobilized soldiers. Our audit work identified concerns with the human capital resources allocated to this area, primarily with respect to the Army Guard military pay processing at the state-level USPFOs. 
Specifically, we identified concerns with (1) the number of staff on board in the military pay sections of the USPFOs, (2) the relatively lower grade structure for nonsupervisory personnel in the USPFOs’ military pay sections in comparison with the grades for similar positions in other sections of the USPFO, which led to difficulty in recruiting and retaining military pay processing personnel, and (3) the limited formal training, discussed in the following section, that the military pay technicians on board at the six locations we audited had received on pay eligibility and pay processing requirements for mobilized Army Guard personnel. NGB provides annual authorization for the overall staffing levels for each state. Within these overall staffing authorizations, each state allocates positions to each of the sections within a USPFO, including the military pay section and other sections such as vendor and contract pay. We compared the actual number of personnel on board to the NGB-authorized staffing level for the military pay sections at the case study locations we audited. During our audit period, two of the six case study locations had fewer military pay technicians on board than they were authorized. Officials at several of the six case study units also stated that restrictions on the rank/grade at which USPFOs are allowed to hire personnel for their military pay sections made it difficult to recruit and retain employees. For example, a USPFO official told us that retaining personnel in the military pay section of the USPFOs was particularly difficult because similar administrative positions in other sections of the USPFO were typically higher paying and provided better benefits than the positions in the military pay section. The highest pay grade of the nonsupervisory pay technicians at the six case study units was a GS-7, and the majority of personnel were in the GS-6 pay grade. 
Although the Army and DFAS have established an agreement that in part seeks to ensure that resources are available to provide appropriately skilled pay personnel at mobilization stations to support surge processing, no such contingency staffing plan exists for the USPFOs. Specifically, a November 2002 memorandum of understanding between the Army and DFAS states that the active Army has primary responsibility to provide trained military or civilian resources to execute active duty pay and allowance surge processing requirements. However, this memorandum does not address the resources needed for surge processing at USPFOs. As discussed previously, pay problems at the case study units were caused in part by USPFO military pay sections attempting to process large numbers of pay transactions without sufficient numbers of knowledgeable personnel. Lacking sufficient numbers of personnel undermines the ability of the USPFO pay functions to carry out established control procedures. For example, our audits at several of the six case study units showed that there were no independent reviews of proposed pay transactions before they were submitted to DJMS-RC for processing. Such independent supervisory reviews are required by DJMS-RC operating procedures. However, a USPFO official told us that because of the limited number of pay technicians available to process pay transactions—particularly when processing massive numbers of transactions to start active duty pays at the same time—this requirement was often not followed. The Chief of Payroll at one of our case study locations told us that because they were currently understaffed, staff members worked 12 to 14 hours a day and still had backlogs of pay start transactions to be entered into the pay system. We were also told that two of our other case study locations experienced backlogs and errors in entering pay start transactions when they were processing large numbers of Army Guard soldiers during initial mobilizations. 
Military pay personnel told us that they were able to avoid backlogs in processing pay start transactions during mobilization processing by conscripting personnel from other USPFO sections to help in assembling and organizing the extensive paperwork associated with activating appropriate basic pays, entitlements, and special incentive pays for their mobilized Army Guard soldiers. In addition to concerns about the numbers of personnel on board at the USPFO military pay offices involved in processing pay transactions for our case study units, we identified instances in which the personnel at military pay offices at both the USPFOs and the active Army finance offices did not appear to be familiar with key aspects of the extensive pay eligibility and payroll processing requirements used to provide accurate and timely pays to Army Guard soldiers. There are no DOD or Army requirements for military pay personnel to receive training on pay entitlements and processing requirements associated with mobilized Army Guard soldiers or for monitoring the extent to which personnel have taken either of the recently established training courses in the area. Such training is critical given that military pay personnel must be knowledgeable about the extensive and complex pay eligibility and processing requirements. We also found that such training is particularly important for active Army pay personnel who may have extensive experience and knowledge of pay processing requirements for regular Army soldiers, but may not be well versed in the unique procedures and pay transaction entry requirements for Army Guard soldiers. During our work at the case study units, we identified numerous instances in which military pay technicians at both the USPFOs and active Army finance office locations made data coding errors when entering transaction codes into the pay systems. 
We were told that these errors occurred because military pay personnel—particularly those at the active Army finance office locations—were unfamiliar with the system’s pay processing requirements for active duty pays to mobilized Army Guard personnel. Correcting these erroneous transactions required additional labor-intensive research and data entry by other more skilled pay technicians. As discussed previously, we also found that pay technicians did not understand how to properly code data on the soldiers’ dependents status, which is used to determine housing allowances, into the pay system. As a result, we identified cases in which soldiers were underpaid housing allowances to which they were entitled. Personnel at active Army finance offices told us that while they are readily familiar with the pay processing requirements for active Army personnel (using DJMS-AC), they had little experience with, or training in, the policies and procedures to be followed in entering pay transactions into DJMS-RC. An Army finance office official told us that handling two sets of pay transaction processing procedures is often confusing because they are often required to process a large number of both active Army personnel and Army Guard and other reserve personnel using different processes and systems at the same time. While the Army Guard offers training for their military pay technicians, we found that there was no overall monitoring of Army Guard pay personnel training. At several of the case study locations we audited, we found that Army Guard pay technicians relied primarily on on-the-job-training and phone calls to the Army Guard Financial Services Center in Indianapolis or to other military pay technicians at other locations to determine how to process active duty pays to activated Army Guard personnel. Beginning in fiscal year 2002, the Army Guard began offering training on mobilization pays and transaction processing to the USPFO military pay technicians. 
However, there is no requirement for USPFO pay technicians to attend these training courses. In addition, available documentation showed that two of the five scheduled courses for fiscal year 2003 were canceled—one because of low registration and one because of schedule conflicts. Only two of the six case study locations we audited tracked the extent to which pay technicians have taken training in this area. We were told that few of the military pay technicians at the state Army Guard USPFOs we audited had formal training on JUSTIS, DJMS-RC, or mobilization pay processing requirements and procedures. Throughout our case studies, we found numerous errors that involved some element of human capital. One payroll clerk told us that she had not received any formal training on how to operate JUSTIS when she was assigned to the job. Instead, she stated, she had learned how to operate the system through on-the-job training and many phone calls to system support personnel in Indianapolis. She estimated that she was not fully comfortable with all the required transaction processing procedures until she had been on the job for about 7 years. In addition, unit commanders have significant responsibilities for establishing and maintaining the accuracy of soldiers’ pay records. U.S. Army Forces Command Regulation 500-3-3, Reserve Component Unit Commander’s Handbook (July 15, 1999) requires unit commanders to (1) annually review and update pay records for all soldiers under their command as part of an annual soldier readiness review and (2) obtain and submit supporting documentation needed to start entitled active duty pay and allowances based on mobilization orders. However, we saw little evidence that the commanders of our case study units carried out these requirements. 
Further, neither Army Guard unit commanders nor active Army commanders were required to receive training on the importance of the pay to on-board personnel reconciliations, discussed previously, as an after-the-fact detective control to identify Army Guard soldiers who should no longer receive active duty pays. We were told that this was primarily because unit commanders have many such administrative duties, and without additional training on the importance of these actions, they may not receive sufficient priority attention. The lack of unit commander training on the importance of these requirements may have contributed to the pay problems we identified at our case study units. For example, at our Virginia case study location, we found that when the unit was first mobilized, USPFO pay personnel were required to spend considerable time and effort to correct hundreds of errors in the unit’s pay records dating back to 1996. Such errors could have been identified and corrected during the preceding years’ readiness reviews. Further, we observed many cases in which active duty pays were not started until more than 30 days after the entitled start dates because soldiers did not submit the paperwork necessary to start these pays.

Customer Service Concerns

Through data collected directly from selected soldiers and work at our six case study locations, we identified a recurring soldier concern with the level and quality of customer service they received associated with their pays and allowances when mobilized to active duty. None of the DOD, Army, or Army Guard policies and procedures we examined addressed the level or quality of customer service that mobilized Army Guard soldiers should be provided concerning questions or problems with their active duty pays. However, we identified several sources that soldiers may go to for customer service or information on any such issues. 
These include the military pay section of the USPFO of their home state’s Army Guard, the designated active Army area servicing finance office, and a toll free number, 1-888-729-2769 (Pay Army). While soldiers had multiple sources from which they could obtain service, we found indications that many Army Guard soldiers were displeased with the customer service they received. We found that not all Army Guard soldiers and their families were informed at the beginning of their mobilization of the pays and allowances they should receive while on active duty. This information is critical for enabling soldiers to identify if they were not receiving such pays and therefore require customer service. In addition, as discussed later in this report, we found that the documentation provided to Army Guard soldiers—primarily in the form of leave and earnings statements—concerning the pays and allowances they received did not facilitate customer service. Our audit identified customer service concerns at all three phases of the active duty tours and involving DFAS, active Army, and Army Guard servicing components. Consistent with the confusion we found among Army Guard and active Army finance components concerning responsibility for processing pay transactions for mobilized Army Guard soldiers, we found indications that the soldiers themselves were similarly confused. Many of the complaints we identified concerned confusion over whether Army Guard personnel mobilized to active duty should be served by the USPFO because they were Army Guard soldiers or by the active Army because they were mobilized to federal service. One soldier told us that he submitted documentation on three separate occasions to support the housing allowance he should have received as of the beginning of his October 2001 mobilization. Each time he was told to resubmit the documentation because his previously submitted documents were lost. 
Subsequently, while he was deployed, he made additional repeated inquiries as to when he would receive his housing allowance pay. He was told that it would be taken care of when he returned from his deployment. However, when he returned from his deployment, he was told that he should have taken care of this issue while he was deployed and that it was now too late to receive this allowance. Data collected from Army Guard units mobilized to active duty indicated that some members of the units had concerns with the pay support customer service they received associated with their mobilization—particularly with respect to pay issues associated with their demobilization. Specifically, of the 43 soldiers responding to our question on satisfaction with customer support during the mobilization phase, 10 indicated satisfaction, while 15 reported dissatisfaction. In addition, of the 45 soldiers responding to our question on customer support following demobilization, 5 indicated satisfaction while 29 indicated dissatisfaction. Of the soldiers who provided written comments about customer service, none were positive, and several described the service they received as “nonexistent,” “hostile,” or “poor.” For example, a company commander for one of our case study units told us that he was frustrated with the level of customer support his unit received during the initial mobilization process. Only two knowledgeable military pay officials were present to support active duty pay transaction processing for the 51 soldiers mobilized for his unit. He characterized the customer service his unit received at initial mobilization as time consuming and frustrating. Personnel we talked with at the Colorado special forces unit we audited were particularly critical of the customer service they received both while deployed in Afghanistan and when they were demobilized from active duty. 
Specifically, unit officials expressed frustration with being routed from one office to another in their attempts to resolve problems with their active duty pays and allowances. For example, the unit administrator told us he contacted the servicing area active Army finance office for the 101st Airborne in West Virginia because his unit was attached to the 101st when they were deployed. The finance office instructed him to contact the USPFO in West Virginia because, although he was from a Colorado unit, his unit was assigned to a West Virginia Army Guard unit. However, when he contacted the West Virginia USPFO for service, officials from that office instructed him to contact the USPFO in his home state of Colorado to provide service for his pay problems. Several systems issues were significant factors impeding accurate and timely payroll payments to mobilized Army Guard soldiers, including the lack of an integrated or effectively interfaced pay system with both the personnel and order-writing systems, limitations in DJMS-RC processing capabilities, and ineffective system edits of payments and debts. DOD has a significant system enhancement project under way to improve military pay. However, given that the effort has been under way for about 5 years and DOD has encountered challenges fielding the system, it is likely that the department will continue to operate with existing system constraints for at least several more years. Our findings related to weaknesses in the systems environment were consistent with issues raised by DOD in its June 2002 report to the Congress on its efforts to implement an integrated military pay and personnel system. Specifically, DOD’s report acknowledged that major deficiencies in the delivery of military personnel and pay services to ensure soldiers receive timely and accurate personnel and pay support must be addressed by the envisioned system. 
Further, the report indicated these deficiencies were the direct result of the inability of a myriad of current systems with multiple, complex interfaces to fully support current business process requirements. Figure 6 provides an overview of the five systems currently involved in processing Army Guard pay and personnel information. The five key DOD systems involved in authorizing, entering, processing, and paying mobilized Army Guard soldiers were not integrated. Lacking either an integrated or effectively interfaced set of personnel and pay systems, DOD must rely on error-prone, manual entry of data from the same source documents into multiple systems. With an effectively integrated system, changes to personnel records automatically update related payroll records from a single source of data input. While not as efficient as an integrated system, an automatic personnel-to-payroll system interface can also reduce errors caused by independent, manual entry of data from the same source documents into both pay and personnel systems. Without an effective interface between the personnel and pay systems, we found instances in which pay-affecting information did not get entered into both the personnel and pay systems, thus causing various pay problems—particularly late payments. We found that an existing interface could be used to help alert military pay personnel to take action when mobilization transactions are entered into the personnel system. Specifically, Army Guard state personnel offices used an existing interface between SIDPERS and JUSTIS to transmit data on certain personnel transactions (i.e., transfers, promotions, demotions, and address changes) to the 54 USPFOs to update the soldier’s pay records. However, this personnel-to-pay interface (1) requires manual review and acceptance by USPFO pay personnel of the transactions created in SIDPERS and (2) does not create pay and allowance transactions to update a soldier’s pay records. 
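Because the interface does not generate pay transactions, discrepancies between the two systems must be caught by after-the-fact comparison. A minimal sketch of such a personnel-to-pay reconciliation follows; the record fields and status codes are hypothetical stand-ins for illustration only, not the actual SIDPERS or JUSTIS data structures.

```python
from dataclasses import dataclass

@dataclass
class PersonnelRecord:      # hypothetical stand-in for a SIDPERS entry
    ssn: str
    duty_status: str        # e.g., "MOBILIZED", "DRILLING", "DISCHARGED"

@dataclass
class PayRecord:            # hypothetical stand-in for a pay-system entry
    ssn: str
    active_duty_pay_on: bool

def mismatch_report(personnel, pay):
    """Flag soldiers whose personnel status and pay status disagree,
    signaling that a manual pay transaction is still needed."""
    pay_by_ssn = {p.ssn: p for p in pay}
    mismatches = []
    for rec in personnel:
        pay_rec = pay_by_ssn.get(rec.ssn)
        if pay_rec is None:
            continue
        should_be_paid = rec.duty_status == "MOBILIZED"
        if should_be_paid != pay_rec.active_duty_pay_on:
            mismatches.append((rec.ssn, rec.duty_status))
    return mismatches
```

In an integrated system no such comparison would be needed, because a single status change would drive both the personnel and pay records.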
For example, when Army Guard soldiers change from inactive drilling status to active duty status, state personnel offices create personnel-related transactions in SIDPERS, but associated pay-related transactions to update the soldier’s pay records are not automatically created in JUSTIS. USPFO pay personnel are not aware that a pay-related transaction is needed until they receive documentation from the soldier, the soldier’s unit commander, or the monthly personnel/pay mismatch report. Automated improvements, such as an administrative action transmitted through the personnel-to-payroll interface, could be used to proactively alert USPFOs of certain pay-impacting transactions that are created in SIDPERS as a means to help ensure timely and accurate pay. In our case studies, we found instances in which mobilization order data that were entered into SIDPERS were either not entered into DJMS-RC for several months after the personnel action or were entered inconsistently. At the case study locations we audited, we found several instances in which Army Guard soldiers received amended or revoked orders that were entered into SIDPERS but were not entered into DJMS-RC. We also found instances in which personnel pay-affecting changes, such as changes in family separation allowance, basic allowance for housing, and active duty pay increases from promotions, were not entered into the pay system promptly. Consequently, these soldiers either received active duty pays they were not entitled to receive—some for several months—or did not timely receive active duty pays to which they were entitled.

Individual Case Illustration: Overpayment due to Lack of Integrated Pay and Personnel Systems

A soldier with the Mississippi Army National Guard was mobilized in January 2002 with his unit and traveled to the mobilization station at Fort Campbell. The unit stayed at Fort Campbell to perform post security duties until June 2002. 
On June 14, 2002, the E-4 specialist received a "general" discharge order from the personnel office at Fort Campbell for a drug-related offense. However, he continued to receive active duty pay, totaling approximately $9,400, until September 2002. Although the discharge information was promptly entered into the soldier's personnel records, it was not entered into the pay system for almost 4 months. This problem was caused by weaknesses in the processes designed to work around the lack of integrated pay and personnel systems. Further, the problem was not detected because reconciliations of pay and personnel data were not performed timely. Specifically, it was not until over 3 months after the soldier's discharge, through its September 2002 end-of-month reconciliation, that the Mississippi Army National Guard USPFO identified the overpayment and took action on October 2, 2002, to stop the individual's pay. However, collection efforts on the $9,400 overpayment did not begin until July 2003, when we pointed out this situation to USPFO officials. The lack of an integrated set of systems was also apparent in the relationship between JUSTIS and the order writing system—AFCOS. Currently, certain personnel and order information entered and stored in the AFCOS database is automatically filled in the JUSTIS input screens pertaining to active duty tours for state missions upon entry of the soldier’s Social Security Number and order number. This auto-fill functionality eliminates the need for some error-prone, manual reentry of data into JUSTIS. However, currently, manual entry of data from a hard copy of the soldier’s orders and other documentation is required to initiate the soldier’s pay and allowances—a procedure that defeats the purpose of an effective interface. For example, at one of the case study units we audited, USPFO pay personnel had to manually enter the soldier’s active duty tour start and stop dates into JUSTIS from a hard copy of the actual mobilization order. 
When we brought this to the attention of NGB officials, they stated that providing the auto-fill functionality to the mobilization input screens would require minimal programming changes. NGB officials stated that they planned to release a programming software change to all 54 USPFOs that would allow the start and stop dates to be automatically filled into the mobilization screens to reduce the need for reentry of some mobilization information. Because this software change was scheduled to occur after the conclusion of our fieldwork, we did not verify its effectiveness. In any case, while this proposed programming change may be beneficial, it does not eliminate the need for manual entry and review of certain other mobilization data needed to initiate a soldier’s basic pay and allowances. DOD has acknowledged that DJMS-RC is an aging, COBOL/mainframe-based system. Consequently, it is not surprising that we found DFAS established a number of “workarounds”—procedures to compensate for existing DJMS-RC processing limitations with respect to processing active duty pays and allowances to mobilized Army Guard soldiers. Such manual workarounds are inefficient and create additional labor-intensive, error-prone transaction processing. We observed a number of such system workaround procedures at the case study units we audited. For example, for the special forces units we audited, our analysis disclosed a workaround used to exclude soldiers’ pay from federal taxes while in combat. Specifically, DJMS-RC was not designed to make active duty pays and exclude federal taxes applicable to those pays in a single pay transaction. To compensate for this system constraint, DFAS established a workaround that requires two payment transactions over a 2-month payroll cycle to properly exempt soldiers’ pay for the combat zone tax exclusion. 
That is, for those soldiers entitled to this exclusion, DJMS-RC withholds federal taxes the first month, identifies the taxes to be refunded during end-of-month pay processing, and then makes a separate payment during the first pay update the following month to refund the taxes that should not have been withheld. Soldiers’ taxes could not be refunded the same month because the DJMS-RC refund process occurs only one time a month. In addition, because of limited DJMS-RC processing capabilities, the Army Guard USPFO and in-theatre active Army area servicing finance office pay technicians are required to manually enter transactions for nonautomated pay and allowances every month. DJMS-RC was originally designed to process payroll payments to Army Reserve and Army Guard personnel on weekend drills or on short periods of annual active duty (periods of less than 30 days in duration) or for training. With Army Guard personnel now being paid from DJMS-RC for extended periods of active duty (as long as 2 years at a time), DFAS officials told us that the system is now stretched because it is being used to make payments and allowances that it was not structured or designed to make, such as hostile fire pay and the combat zone tax exclusion. Many of these active duty pays and allowances require manual, monthly verification and reentry into DJMS-RC because, while some pays, such as basic active duty pay and jump pay, can be generated automatically, DJMS-RC is not programmed to generate automatic payment of certain other types of pay and allowances. For example, each month USPFO pay personnel are responsible for entering into JUSTIS special duty assignment pay, foreign language proficiency pay, and high altitude low opening (HALO) pay, and Army area servicing finance offices are responsible for entering hardship duty pay into DMO, for deployed soldiers who are entitled to these types of pays and for whom a performance certification is received from the respective unit commanders. 
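The two-transaction combat zone tax exclusion workaround can be sketched as follows. The tax rate and amounts are hypothetical, and DJMS-RC's actual withholding computation is far more involved; this sketch only illustrates why the refund arrives a full payroll cycle after the entitlement.

```python
def month_end_processing(gross_pay, tax_rate, in_combat_zone):
    """Month 1: federal tax is withheld even for combat-zone pay; the
    withheld amount is flagged for refund rather than refunded now,
    because the refund process runs only once a month."""
    withheld = round(gross_pay * tax_rate, 2)
    refund_due = withheld if in_combat_zone else 0.0
    return gross_pay - withheld, refund_due

def first_update_next_month(refund_due):
    """Month 2: a separate payment transaction refunds the tax that
    should not have been withheld, one payroll cycle later."""
    return refund_due
```

For a soldier with $3,000 in monthly pay at a 15 percent withholding rate, month one nets $2,550, and the $450 withheld comes back as a separate payment in the first update of month two.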
However, because pay transactions must be manually entered every month soldiers are entitled to receive these pays, it is often difficult to ensure that mobilized soldiers receive their entitled nonautomated pays and allowances. For example, we found a number of instances in which soldiers were underpaid their entitled jump, foreign language proficiency, special duty assignment, or hardship duty pays because pay technicians inadvertently omitted the monthly manual input required to initiate these types of pays every month. At one of the case study units, we found USPFO pay personnel had a procedure in place to help prevent inadvertently omitting month-to-month entry of nonautomated pays for entitled soldiers. Specifically, pay personnel at the USPFO in Maryland used a warning screen within JUSTIS as a mechanism to alert them that soldiers were eligible to receive that particular pay component that month. Although this does not alleviate the problem of month-to-month manual entry, the warning screen could be used to help preclude some of the pay problems we found resulting from failures to enter transactions for nonautomated, month-to-month pay and allowance entitlements. Further, these month-to-month pays and allowances were not separately itemized on the soldiers’ leave and earnings statements in a user-friendly format. In contrast, at four of our six case study units, we found that a significant number of soldiers were overpaid their entitled automated pays when they were demobilized from active duty before the stop date specified in their original mobilization orders. This occurred because pay technicians did not update the stop date in DJMS-RC, which is necessary to terminate the automated active duty pays when soldiers leave active duty early. 
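The mechanism behind these overpayments can be sketched simply: automated pays continue through whatever tour stop date is recorded in the pay system, so an early release that is never keyed in keeps the pay flowing. The dates and function below are illustrative only, not the actual DJMS-RC logic.

```python
from datetime import date

def automated_pay_due(recorded_stop_date, pay_period_end):
    """Automated pays are generated for any pay period ending on or
    before the tour stop date recorded in the pay system, regardless
    of when the soldier actually left active duty."""
    return pay_period_end <= recorded_stop_date

# Illustration: a soldier actually leaves active duty early, on
# 2002-10-15, but the stop date from the original mobilization order
# (2003-01-04) is never updated in the pay system.
actual_release = date(2002, 10, 15)
recorded_stop = date(2003, 1, 4)

# The October end-of-month pay still goes out -- an overpayment:
overpaid = (automated_pay_due(recorded_stop, date(2002, 10, 31))
            and date(2002, 10, 31) > actual_release)
```

Until a technician updates the recorded stop date, every subsequent pay cycle repeats the error.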
For example, the military finance office in Kuwait, which was responsible for paying Virginia 20th Special Forces soldiers in the fall of 2002, did not stop hostile fire and hardship duty pays as required when these soldiers left Afghanistan in October 2002. We found that 55 of 64 soldiers eligible for hostile fire pay were overpaid for at least 1 month beyond their departure from Afghanistan.

Individual Case Illustration: Problems in Deciphering a Leave and Earnings Statement

An Army National Guard Special Forces sergeant believed that he was not receiving certain active duty pays and allowances during his mobilization to active duty in support of Operation Enduring Freedom. On March 23, 2002, the sergeant wrote a letter from Afghanistan to a fellow battalion soldier back in his home state, discussing his pay problems. The sergeant stated that he was not receiving his special duty assignment pay from November 2001 to March 2002. The sergeant’s letter also stated he was not receiving his hostile fire pay and combat zone tax exclusion. His letter concluded, “Are they really fixing pay issues or are they putting them off till we return? If they are waiting, then what happens to those who (god forbid) don’t make it back?” The sergeant was killed in action in Afghanistan on April 15, 2002, before he knew if his pay problems were resolved. Our review determined that some of the sergeant’s pays were started up to 2 months late, but others had actually been paid properly. The sergeant apparently was not aware of receiving these payments because of the way they were combined. Soldiers’ pays may appear as lump sum payments under “other credits” on their leave and earnings statements. In many cases these other credit pays and allowances appeared on their leave and earnings statements without adequate explanation. 
As a result, we found indications that Army Guard soldiers had difficulty using the leave and earnings statements to determine if they received all entitled active duty pays and allowances. In addition, several Army Guard soldiers told us that they had difficulty discerning from their leave and earnings statements whether lump sum catch-up payments fully compensated them for previously underpaid active duty pay and allowance entitlements. Without such basic customer service, the soldiers cannot readily verify that they received all the active duty pays and allowances to which they were entitled. As shown in the example leave and earnings statement extract included in figure 7, an Army Guard soldier who received a series of corrections to special duty assignment pay along with a current special duty assignment payment of $110 is likely to have difficulty discerning whether he or she received all and only entitled active duty pays and allowances. While DJMS-RC has several effective edits to prevent certain overpayments, it lacks effective edits to reject large proposed net pays over $4,000 at midmonth and over $7,000 at end-of-month before their final processing. DOD established these thresholds to monitor and detect abnormally large payments. As a result of the weaknesses we identified, we found several instances in our case studies in which soldiers received large lump sum payments, probably related to previous underpayments or other pay errors, with no explanation. Further, the lack of preventive controls over large payments poses an increased risk of fraudulent payments. DJMS-RC does have edits that prevent soldiers from (1) being paid for pay and allowances beyond the stop date for the active duty tour, (2) being paid for more than one tour with overlapping dates, or (3) being paid twice during a pay period. 
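In pseudocode terms, the missing control is a simple threshold screen applied before disbursement rather than after it. The sketch below is a minimal illustration, not DJMS-RC logic; the function name and disposition strings are invented, and the thresholds are the DOD monitoring levels cited above.

```python
# DOD's monitoring thresholds for abnormally large net payments, per the
# report: $4,000 at midmonth and $7,000 at end-of-month.
MIDMONTH_THRESHOLD = 4_000
END_OF_MONTH_THRESHOLD = 7_000

def screen_payment(net_amount, cycle, approved=False):
    """Disposition a proposed net payment BEFORE final processing.

    Today the Excess Dollar Listing is printed only after the money is
    deposited; this edit would instead hold over-threshold payments
    until someone reviews and approves them.
    """
    threshold = MIDMONTH_THRESHOLD if cycle == "midmonth" else END_OF_MONTH_THRESHOLD
    if net_amount <= threshold or approved:
        return "process"
    return "hold-for-review"
```

Under such an edit, the roughly $20,000 erroneous electronic payment discussed in this report would have been held for review instead of being deposited directly to the soldier's bank account.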
Each month, DFAS Indianapolis pay personnel receive an Electronic Fund Transfer Excess Dollar Listing after the electronic fund transfer payment has been processed in DJMS-RC and deposited to the soldier's bank account. DJMS-RC does not contain edit checks to reject payments over the threshold amounts or to require review and approval of payments over these amounts prior to their final processing. For example, at one of the case study units we audited, DJMS-RC did not have edit checks to prevent one soldier from receiving an erroneous electronic payment totaling $20,110 without prior approval (see the individual case illustration below for details). In addition, our analysis showed 76 other payroll-related payments during the period October 1, 2001, through March 31, 2003, of over $7,000 (net) each that were paid by DJMS-RC. Because the Electronic Fund Transfer Excess Dollar Listing is printed after the payment is made, timely detection of errors is critical to help ensure that erroneous payments are recovered and that fraud does not occur. Similarly, DJMS-RC does not have system edits to prevent large debts from being assessed without review and approval prior to being processed and does not provide adequate explanations for pay-related debt assessments. Our case studies identified individuals who received debt notices in excess of $30,000 with no explanation. At five of the six units audited, we identified 86 individuals who had total pay and allowance debts of approximately $300,000 as of March 31, 2003.

Individual Case Illustration: System Edits Do Not Prevent Large Payments and Debts

A sergeant with the Colorado Army National Guard, Special Forces, encountered numerous severe pay problems associated with his mobilization to active duty, including his deployment to Afghanistan in support of Operation Enduring Freedom. The sergeant's active duty pay and other pay and allowances should have been stopped on December 4, 2002, when he was released from active duty.
However, the sergeant's mobilization orders called him to active duty for 730 days rather than the 365 days he was actually mobilized, and the Army area servicing finance office at the demobilization station, Fort Campbell, did not enter the release from active duty date into DJMS-RC. As a result, the sergeant continued to improperly receive payments totaling over $8,000, as if he were still on active duty, for 2 and a half months after he was released from active duty. The sergeant was one of 34 soldiers in the company whose pay continued after their release from active duty. In an attempt to stop the erroneous payments, in February 2003, pay personnel at the Colorado USPFO created a transaction to cancel the tour instead of processing an adjustment to amend the stop date consistent with the date on the Release from Active Duty Order. When this occurred, DJMS-RC automatically processed a reversal of 11 months of the sergeant's pay and allowances that he earned while mobilized from March 1, 2002, through February 4, 2003, which created a debt in the amount of $39,699 on the soldier's pay record; the reversal should have covered only December 5, 2002, through February 4, 2003. In April 2003, at our request, DFAS-Indianapolis personnel intervened in an attempt to correct the large debt and to determine the actual amount the sergeant owed. In May 2003, DFAS-Indianapolis erroneously processed a payment transaction instead of a debt correction transaction in DJMS-RC. This created a payment of $20,111, which was electronically deposited to the sergeant's bank account without explanation, while a debt of $30,454 still appeared on his Leave and Earnings Statement. About 9 months after his demobilization, the sergeant's unpaid debt balance was reportedly $26,559, but the actual amount of his debt had not yet been determined as of September 2003.
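The scale of the error in this case can be roughed out from the dates in the report. The back-of-the-envelope check below assumes pay and allowances accrued evenly by day over the reversed period, and that the erroneous post-release payments ran roughly 2 and a half months, to mid-February 2003; it is an estimate, not an official debt computation.

```python
from datetime import date

# Period DJMS-RC actually reversed: the full mobilized stretch.
reversed_days = (date(2003, 2, 4) - date(2002, 3, 1)).days    # 340 days
daily_rate = 39_699 / reversed_days                           # implied ~$117/day

# Period that should have been reversed: release (Dec. 5, 2002) until the
# erroneous payments finally stopped in mid-February 2003.
overpaid_days = (date(2003, 2, 14) - date(2002, 12, 5)).days  # 71 days
estimated_correct_debt = daily_rate * overpaid_days           # ~$8,300

# Consistent with the "over $8,000" in overpayments described above, and
# far below the $39,699 debt actually posted to the sergeant's pay record.
```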
In addition, we found that current procedures used to notify soldiers of large payroll-related debts did not facilitate customer service. Under current procedures, if a soldier is determined to owe the government money while on active duty, he is assessed a debt and informed of this assessment with a notation of an "Unpaid Debt Balance" in the remarks section of his Leave and Earnings Statement. A soldier at one of our case study units told us that he received no notice, before his Leave and Earnings Statement arrived, that he had been assessed a debt and that two-thirds of his pay would be garnished. As a result, he was not able to plan his financial affairs to avoid late payments on his car and other loans. This debt assessment notification procedure is even more egregious when debts, particularly large debts, are assessed in error and up to two-thirds of the soldier's pay may be garnished to begin repaying the erroneous debt. For example, at our case study units, we found that the only notice several soldiers received when they were erroneously assessed payroll debts was an "Unpaid Debt Balance" buried in the remarks section of their Leave and Earnings Statements. One such assessment, showing a $39,489.28 debt, is shown in figure 8. DOD has a major system enhancement effort under way in this area: the Defense Integrated Military Human Resources System (DIMHRS), described as the largest personnel and pay system in the world in both scope and number of people served. One of the major benefits expected with DIMHRS is "service members receiving accurate and timely pay and benefits." Begun in 1998, DIMHRS is ultimately intended to replace more than 80 legacy systems (including DJMS-RC) and integrate all pay, personnel, training, and manpower functions across the department by 2007. By the end of fiscal year 2003, DOD reporting shows that it will have invested over 5 years and about $360 million in conceptualizing and planning the system.
In 2002, DOD estimated that integrated personnel and pay functions of DIMHRS would be fully deployed by fiscal year 2007. It also reported a development cost of about $427 million. However, our review of the fiscal year 2004 DOD Information Technology budget request shows that DOD is requesting $122 million and $95 million, respectively, for fiscal years 2004 and 2005. In addition, the department reported that the original DIMHRS project completion milestone date has slipped about 15 months. Part of the requested funding for fiscal year 2004 was to acquire a payroll module, Forward Compatible Payroll. According to program officials, this module, in conjunction with a translation module and a Web services component, is to replace DJMS-RC and DJMS-AC systems by March 2006, with the first deployment to the Army Reserve and Army Guard in March 2005. In assessing the risks associated with DIMHRS implementation as part of its fiscal year 2004 budget package, DOD highlighted 20 such risks. For example, DOD reported a 60 percent risk associated with “Service issues with business process reengineering and data migration.” The department’s ability to effectively mitigate such risks is of particular concern given its poor track record in successfully designing and implementing major systems in the past. Consequently, given the schedule slippages that have already occurred combined with the many risks associated with DIMHRS implementation, Army Guard soldiers will likely be required to rely on existing pay systems for at least several more years. Our limited review of the pay experiences of the soldiers in the Colorado Army Guard’s 220th Military Police Company, which was mobilized to active duty in January 2003, sent to Kuwait in February 2003, and deployed to Iraq on military convoy security and highway patrol duties in April 2003, indicated that some of the same types of pay problems that we found in our six case study units continued to occur. 
Of the 152 soldiers mobilized in this unit, we identified 54 soldiers who our review of available records indicated were either overpaid, underpaid, or received entitled active duty pays and allowances over 30 days late, or for whom erroneous pay-related debts were created. We found that these pay problems could be attributed to control breakdowns similar to those we found at our case study units, including pay system input errors associated with amended orders, delays and errors in coding pay and allowance transactions, and slow customer service response. For example, available documentation and interviews indicate that while several soldiers submitted required supporting documentation to start certain pays and allowances at the time of their initial mobilization in January 2003, over 20 soldiers were still not receiving these pays in August 2003. Colorado USPFO military pay-processing personnel told us they are reviewing pay records for all deployed soldiers from this unit to ensure that they are receiving all entitled active duty pays and allowances. The extensive problems we identified at the case study units vividly demonstrate that the controls currently relied on to pay mobilized Army Guard personnel are not working and cannot provide reasonable assurance that such pays are accurate or timely. The personal toll that these pay problems have had on mobilized soldiers and their families cannot be readily measured, but clearly may have a profound effect on reenlistment and retention. It is not surprising that cumbersome and complex processes and ineffective human capital strategies, combined with the use of an outdated system that was not designed to handle the intricacies of active duty pay and allowances, would result in significant pay problems. 
While it is likely that DOD will be required to rely on existing systems for a number of years, a complete and lasting solution to the pay problems we identified will only be achieved through a complete reengineering, not only of the automated systems, but also of the supporting processes and human capital practices in this area. However, immediate actions can be taken in these areas to improve the timeliness and accuracy of pay and allowance payments to activated Army Guard soldiers. The need for such actions is increasingly imperative in light of the current extended deployment of Army Guard soldiers in their crucial role in Operation Iraqi Freedom and anticipated additional mobilizations in support of this operation and the global war on terrorism. Immediate steps to at least mitigate the most serious of the problems we identified are needed to help ensure that the Army Guard can continue to successfully fulfill its vital role in our national defense. We recommend that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to take the following actions to address the issues we found with respect to the existing processes, human capital, and automated systems relied on to pay activated Army Guard personnel.

- Establish a unified set of policies and procedures for all Army Guard, Army, and DFAS personnel to follow for ensuring active duty pays for Army Guard personnel mobilized to active duty.
- Establish performance measures for obtaining supporting documentation and processing pay transactions (for example, no more than 5 days would seem reasonable).
- Establish who is accountable for stopping active duty pays for soldiers who return home earlier than their units.
- Clarify the policies and procedures for how to properly amend active duty orders, including medical extensions.
- Require Army Guard commands and unit commanders to carry out complete monthly pay and personnel records reconciliations and take necessary actions to correct any pay and personnel record mismatches found each month.
- Update policies and procedures to reflect current legal and DOD administrative requirements with respect to active duty pays and allowances and transaction processing requirements for mobilized Army Guard soldiers.
- Consider expanding the scope of the existing memorandum of understanding between DFAS and the Army concerning the provision of resources to support surge processing at mobilization and demobilization sites to include providing additional resources to support surge processing for pay start and stop transaction requirements at Army Guard home stations during initial soldier readiness programs.
- Determine whether issues concerning resource allocations for the military pay operations identified at our case study units exist at all 54 USPFOs, and if so, take appropriate actions to address these issues.
- Determine whether issues concerning relatively low-graded military pay technicians identified at our case study units exist at all 54 USPFOs, and if so, take appropriate actions to address these issues.
- Modify existing training policies and procedures to require all USPFO and active Army pay and finance personnel responsible for entering pay transactions for mobilized Army Guard soldiers to receive appropriate training upon assuming such duties.
- Require unit commanders to receive training on the importance of adhering to requirements to conduct annual pay support documentation reviews and carry out monthly reconciliations.
- Establish an ongoing mechanism to monitor the quality and completion of training for both pay and finance personnel and unit commanders.
- Identify and evaluate options for improving customer service provided to mobilized Army Guard soldiers by providing improved procedures for informing soldiers of their pay and allowance entitlements throughout their active duty mobilizations.
- Identify and evaluate options for improving customer service provided to mobilized Army Guard soldiers to ensure a single, well-advertised source for soldiers and their families to access for customer service for any pay problems.
- Review the pay problems we identified at our six case study units to identify and resolve any outstanding pay issues for the affected soldiers.
- Evaluate the feasibility of using the personnel-to-pay interface as a means to proactively alert pay personnel of actions needed to start entitled active duty pays and allowances.
- Evaluate the feasibility of automating some or all of the current manual monthly pays, including special duty assignment pay, foreign language proficiency pay, hardship duty pay, and HALO pay.
- Evaluate the feasibility of eliminating the use of the "other credits" category for processing hardship duty (designated areas) pay, HALO pay, and special duty assignment pay, and instead establish a separate component of pay for each type of pay.
- Evaluate the feasibility of using the JUSTIS warning screen to help eliminate inadvertent omissions of required monthly manual pay inputs.
- Evaluate the feasibility of redesigning Leave and Earnings Statements to provide soldiers with a clear explanation of all pay and allowances received so that they can readily determine if they received all and only entitled pays.
- Evaluate the feasibility of establishing an edit check and requiring approval before processing any debt assessments above a specified dollar amount.
- Evaluate the feasibility of establishing an edit check and requiring approval before processing any payments above a specified dollar amount.
- As part of the effort currently under way to reform DOD's pay and personnel systems—referred to as DIMHRS—incorporate a complete understanding of the Army Guard pay problems as documented in this report into the requirements development for this system.
- In developing DIMHRS, consider a complete reengineering of the processes and controls and ensure that this reengineering effort deals not only with the systems aspect of the problems we identified, but also with the human capital and process aspects.

In its written comments, DOD concurred with our recommendations and identified actions to address the identified deficiencies. Specifically, DOD's response outlined some actions already taken, others that are under way, and further planned actions with respect to our recommendations. If effectively implemented, these actions should substantially resolve the deficiencies pointed out in our report. DOD's comments are reprinted in appendix VIII. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of the report to interested congressional committees. We will also send copies of this report to the Secretary of Defense, the Under Secretary of Defense (Comptroller), the Secretary of the Army, the Director of the Defense Finance and Accounting Service, the Director of the Army National Guard, and the Chief of the National Guard Bureau. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-9505 or kutzg@gao.gov or Geoffrey Frank, Assistant Director, at (202) 512-9518 or frankg@gao.gov.
On December 5, 2001, the Colorado Army National Guard's B Company, 5th Battalion, 19th Special Forces, was mobilized to active duty on orders for a 2-year period—through December 4, 2003. The unit was mobilized at Fort Knox and subsequently deployed in Afghanistan, Uzbekistan, and surrounding areas to search for Taliban and al Qaeda terrorists as part of Operation Enduring Freedom. The unit returned to Fort Campbell for demobilization and was released from active duty on December 4, 2002—1 year before the end of the unit's original mobilization orders. A timeline of the unit's actions associated with its mobilization under Operation Enduring Freedom is shown in figure 9. As summarized in table 3, the majority of soldiers from Colorado's B Company experienced some sort of pay problem during one or more of the three phases of their active duty mobilization. Overall, all 62 soldiers with the company had at least one pay problem associated with their mobilization. These pay problems included not receiving entitled pays and allowances at all; not receiving some entitled pays and allowances within 30 days; and for some, overpayments of pays and allowances. Specifically, we found (1) 56 soldiers did not receive certain pay and allowance entitlements at all, or within 30 days of their initial mobilization, (2) 61 soldiers either did not receive, or did not receive within 30 days, the hostile fire pay or other "high-risk location" pays they were entitled to receive based on their deployment in Uzbekistan and Afghanistan, and (3) 53 soldiers either improperly continued to receive hostile fire pay after leaving high-risk locations overseas or continued to receive paychecks, as if they were still on active duty status, for over 2 months beyond their release from active duty. In total, we identified estimated overpayments of $494,000, underpayments of $28,000, and late payments of $64,000, associated with the pay problems we found.
Of the estimated $494,000 in overpayments, we identified about $88,000 that was subsequently collected from the soldiers of Colorado's B Company. In addition, in trying to correct overpayments associated with Colorado B Company's departure from high-risk locations and release from active duty, the Defense Finance and Accounting Service (DFAS) billed 34 of the unit's soldiers an average of $48,000 each, for a largely erroneous total debt of over $1.6 million. Many soldiers with the company characterized the service they received from the state United States Property and Fiscal Office (USPFO) and the active Army finance offices while deployed in Afghanistan and surrounding areas as "poor" or "openly hostile." Some of the soldiers in the unit expressed significant dissatisfaction with the time and effort they, or their spouses, were required to spend attempting to identify and correct their pay. These pay problems had a variety of adverse effects. The labor-intensive efforts by the special forces soldiers to address pay problems, in some cases, distracted them from important mission operations. In addition, several soldiers told us that the numerous pay problems they encountered would play a major role in their decision whether to reenlist. According to several soldiers from Colorado's B Company, the combined effect of (1) recurring pay problems, (2) having two-thirds of their monthly training paychecks garnished to pay off often erroneous payroll-related debts, and (3) receiving poor payroll customer service during their active duty tours adversely affects morale and may have an adverse effect on a soldier's willingness to continue his or her service with the Army Guard. For example, a unit official advised us that as of September 30, 2003, three soldiers had left B Company primarily due to frustration over pay problems. The unit official indicated that he expected additional soldiers would depart as a result of the current debt problems.
As summarized in table 4, we identified a number of pay problems associated with eight different types of active duty pays and allowances related to the unit’s mobilization to active duty. These problems resulted from failure to enter data, data entry errors, or late entry of data needed by Army Guard USPFO military pay personnel and by active Army military pay personnel at the unit’s mobilization station to start active duty pays. We also found that these pay problems were exacerbated by breakdowns in customer service. In total, 56 out of 62 soldiers did not receive certain pays and allowances at all, or in a timely manner, after being activated on December 5, 2001. As illustrated in table 4, 11 soldiers did not receive entitled Jump pay within 30 days of entitlement, 10 did not receive HALO pay within 30 days of entitlement, and 41 soldiers did not receive at least 1 month of their special duty assignment pay. According to DFAS procedures, the unit’s Army Guard USPFO should have initiated these pays. In addition, these problems could have been minimized if they were identified and corrected by the Army mobilization station finance office at Fort Knox during the soldier readiness processing at that location. According to Army regulations, the active Army mobilization station is required to conduct a soldier readiness program to review every mobilizing soldier’s pay account for accuracy. In essence, under Department of Defense (DOD) guidance, the active Army mobilization stations are to act as a “safety net” to catch and correct any errors in soldiers’ active duty pays and allowances before they are deployed on their active duty missions. The underpayments and late payments resulted in adverse financial repercussions for a number of the unit’s members and their families. We were told that many of the unit members’ spouses tried to contact the soldiers while they were deployed to find out why they were not receiving the anticipated funds. 
We were told that neither the spouses nor the soldiers received clear guidance on whom to contact to address their pay concerns. For example, some soldiers sought help from the active Army's finance offices at Fort Knox and Fort Campbell. However, upon contacting officials at those locations, soldiers were told that the active Army could not help them because they were Army Guard soldiers and should therefore contact their home state Army Guard USPFO. According to DFAS officials, the active Army finance offices have the capability to service Army Guard soldiers. Yet Fort Knox and Fort Campbell finance personnel were either unaware of this capability or unwilling to take the actions needed to address the unit's active duty pay concerns. Colorado's B Company soldiers then turned back to the USPFO for assistance. Although the USPFO did process a number of transactions to start entitled active duty pays and allowances for the unit's soldiers, such pays were started more than 30 days after the soldiers became entitled to receive them. In one case, a soldier's spouse had to obtain a $500 grant from the Colorado National Guard in order to pay bills while her husband was on active duty. Colorado's B Company was deployed to Uzbekistan and Afghanistan in February 2002. As summarized in table 5, we identified pay problems associated with the hostile fire pay, combat zone tax exclusion, and hardship duty pay that unit soldiers were entitled to receive based on their deployment to Afghanistan and surrounding areas. Specifically, after arriving in Afghanistan, some soldiers in Colorado's B Company received these pays sporadically, were not paid at all, were paid but for inexplicable dollar amounts, or were overpaid their entitled active duty pays and allowances while deployed.
For example, 16 of the 62 soldiers in B Company received the wrong type of hardship duty pay, formerly called Foreign Duty Pay, in addition to the correct hardship duty location pay while they were deployed in Afghanistan. We found that these pay problems could be attributed, in part, to the active Army servicing finance office’s lack of knowledge about how to process transactions through the Defense Joint Military Pay System-Reserve Component system (DJMS-RC) to start location-based pays and allowances for the unit’s soldiers. For example, we were told that because active Army in-theater finance personnel were unfamiliar with the required procedures to follow in starting hardship duty pays, they entered transactions that resulted in soldiers receiving two different location-based types of hardship duty pay for the same duty. Further, Army Guard soldiers told us the active Army finance office could not effectively answer questions concerning their pay entitlements or transaction processing documentation requirements. After not receiving any pay support from the active Army servicing finance location, the unit’s soldiers told us they contacted their Army Guard USPFO in Colorado for assistance. However, Colorado USPFO officials informed them that they did not have the capability to start location-based pays and allowances for Army Guard soldiers. A frequent complaint we received from Colorado’s B Company soldiers concerned the circular nature of any attempts to get assistance on pay issues while deployed overseas. B Company’s soldiers told us they spent significant amounts of time and effort trying to correct the pay problems while deployed on critical mission operations in Afghanistan and surrounding areas—time and focus away from the mission at hand. 
For example, as discussed in greater detail in our West Virginia case study summary, a soldier from that unit took several days away from his unit to get location-based pay started for both the West Virginia and Colorado special forces units. We were also told that some members of the unit used their satellite radios to attempt to resolve their pay problems while deployed in Afghanistan. In addition, several of the unit’s soldiers told us their ability to identify and correct pay problems while deployed was impaired by limited access to telephones, faxes, e-mail, and their current Leave and Earnings Statements. In the late summer to early fall of 2002, soldiers from Colorado’s B Company began returning from Afghanistan and surrounding areas to Fort Campbell to begin their demobilization from active duty. However, the active Army’s finance office at Fort Campbell failed to properly stop soldiers’ pay as of their demobilization dates, which for most of the unit’s soldiers was December 4, 2002. As summarized in table 6, 39 of the unit’s 62 soldiers continued to receive active duty pay and allowances, some until February 14, 2003—2 and a half months after the date of their release from active duty. We found that both the active Army servicing finance location for the unit while it was in Afghanistan and at Fort Campbell upon its return to the United States did not take action to stop active duty pays and allowances. According to DFAS procedures, the finance office at the servicing demobilization station is to conduct a finance out-processing, which would include identifying and stopping any active duty pays that soldiers were no longer entitled to receive. According to DFAS-Indianapolis Reserve Component mobilization procedures, the local servicing active Army finance office also has primary responsibility for entering transactions to stop hardship duty pay, hostile fire pay, and the combat zone tax exclusion when soldiers leave an authorized hostile fire/combat zone. 
However, in this case, that office did not take action to stop these types of pay and allowances for many of the unit's soldiers. For example, military pay personnel at Fort Campbell failed to deactivate hostile fire pay for 41 out of 62 B Company soldiers. With regard to customer service, some soldiers in the unit told us that upon their return from overseas deployments, they were informed that they should have corrected these problems while in-theater, despite the fact that these problems were not detected until the demobilization phase. Colorado's B Company demobilization was complicated by the fact that the unit did not demobilize through the same active Army location used to mobilize the unit. DFAS procedures provide that Army Guard soldiers are to demobilize and have their active duty pays stopped by the installation from which they originally mobilized. However, the unit received orders to demobilize at Fort Campbell rather than Fort Knox, where it originally mobilized. According to Fort Campbell personnel, Colorado's B Company out-processed through the required sections, including finance, during their demobilization. Nonetheless, the finance office at that active Army location failed to stop all active duty pays and allowances when the unit was demobilized from active duty. Fort Campbell finance office personnel we interviewed were not present during B Company's demobilization and had no knowledge of why pay was not stopped during the demobilization process. Failure to stop location-based and other active duty pays and allowances for the unit's soldiers resulted in overpayments. As a result of errors the Colorado USPFO made in attempting to amend the unit's orders to reflect an earlier release date than the date reflected in the unit's original mobilization orders, large debts were created for many soldiers in the unit.
Specifically, largely erroneous soldier debts were created when personnel at the Colorado USPFO inadvertently revoked the soldiers’ original mobilization orders when attempting to amend the orders to reflect the unit’s actual release date of December 4, 2002—1 year before the end of the unit’s original orders. As a result, 34 soldiers received notice on their Leave and Earnings Statements that rather than a debt for the 2 and a half months of active duty pay and allowances they received after their entitlement had ended, they owed debts for the 11 months of their active duty tour—an average of $48,000 per soldier, for a total debt of $1.6 million. Several of the soldiers in the company noticed the erroneous debt and called their unit commander. Some of the soldiers wanted to settle the debt by writing a check to DFAS. However, they were told not to because the exact amount of each soldier’s debt could not be readily determined and tracking such a payment against an as-yet undetermined amount of debt could confuse matters. Meanwhile, some soldiers, now returned from active duty, resumed participation in monthly training and began having two-thirds of their drill pay withheld and applied to offset their largely erroneous debt balances. We were told that it would take approximately 4 to 5 years for the soldiers to pay off these debts using this approach. On April 17, 2003, and in a subsequent June 20, 2003, letter, we brought this matter to the attention of DFAS and the DOD Comptroller, respectively. Table 7 provides an overview of the actions leading to the creation of largely erroneous payroll-related debts for many of the unit’s soldiers and DOD’s actions to address these largely erroneous debts. Despite considerable time and effort of DFAS and others across the Army Guard and Army, as of the end of our fieldwork in September 2003, Colorado’s B Company debt problems had not been resolved.
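The scale of the erroneous debts can be checked with simple arithmetic. The sketch below uses only the figures cited above; because the $48,000 per-soldier figure is a reported average, the computed total is necessarily approximate.

```python
# Rough arithmetic check of the erroneous B Company debts described above.
# All figures are taken from the report; the per-soldier amount is an average.
soldiers_with_erroneous_debt = 34
avg_debt_per_soldier = 48_000        # dollars, reported average per soldier

total_debt = soldiers_with_erroneous_debt * avg_debt_per_soldier
print(f"Total erroneous debt: ${total_debt:,}")
# prints Total erroneous debt: $1,632,000  (reported as about $1.6 million)
```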
In fact, for one sergeant, his pay problems were further complicated by these efforts. For example, in attempting to reduce the soldier’s recorded $30,454 debt by $20,111, DFAS instead sent the soldier a payment of $20,111. As of September 2003, about 9 months after his demobilization, the sergeant’s reported unpaid debt balance was $26,806, but the actual amount of his debt remained unresolved. On January 2, 2002, the Virginia Army National Guard’s B Company, 3rd Battalion, 20th Special Forces, was called to active duty in support of Operation Enduring Freedom for a 1-year tour. The unit in-processed at Fort Pickett, Virginia, and departed for Fort Bragg, North Carolina. The unit mobilized at Fort Bragg and for the next several months performed various duties on base until May 2002. In early May 2002, Virginia’s B Company deployed to Afghanistan to perform search and destroy missions against al Qaeda and Taliban terrorists. Although several of B Company’s soldiers returned from Afghanistan during August and September 2002, most of the unit’s members returned to Fort Bragg for demobilization during October 2002 and were released from active duty on January 2, 2003. A timeline of the unit’s actions associated with its mobilization under Operation Enduring Freedom is shown in figure 10. As summarized in table 8, the majority of soldiers from Virginia’s B Company experienced some sort of pay problem during one or more of the three phases of their active duty mobilization. Overall, 64 of the 65 soldiers with the company experienced at least one pay problem associated with their mobilization. These pay problems included not receiving entitled pays and allowances at all; not receiving some entitled pays and allowances within 30 days; and for some, overpayments of pays and allowances. 
Specifically, we found (1) 31 soldiers did not receive certain pay and allowance entitlements at all, or within 30 days of their initial mobilization entitlement, or were overpaid, (2) 63 soldiers either did not receive, or did not receive within 30 days, the hardship duty pay or other high-risk location pays they were entitled to receive based on their deployment to Afghanistan, and (3) 60 soldiers improperly continued to receive hardship duty pay or hostile fire pay after leaving high-risk locations overseas. In total, we identified estimated overpayments of $25,000, underpayments of $12,000, and late payments of $28,000 associated with the pay problems we found. Of the estimated $25,000 in overpayments, we identified about $2,000 that was subsequently collected from the soldiers. Our audit showed that the pay problems experienced by Virginia’s B Company were the result of a number of factors, including late submission of required pay support documents, incorrect pay inputs by Army personnel, and an active Army in-theater finance office’s lack of knowledge about the unit’s presence in Afghanistan. These pay problems had a number of adverse effects. Several B Company soldiers we interviewed expressed dissatisfaction with the time and effort they, or their spouses, were required to spend attempting to identify and correct problems with their pay. Another complaint concerned the circular nature of any attempts to get assistance. For example, we were told the USPFO referred soldiers to the active Army finance office and that office referred them back to the USPFO. Virginia USPFO officials informed us that the circular nature of giving assistance to soldiers was sometimes unavoidable. For example, they said that once soldiers left their home unit and the Fort Bragg and in-theater finance offices assumed pay responsibilities, the USPFO informed soldiers and their spouses to contact these active Army finance offices to discuss active duty payment problems. 
USPFO officials acknowledged that in instances in which the active Army finance office did not resolve soldiers’ pay problems, USPFO staff would try to fix the problems. According to several soldiers, the combined effect of recurring pay problems and receiving poor payroll customer service during their active duty tours adversely affected morale and may have a negative effect on the soldiers’ willingness to continue serving with the Army National Guard. Several soldiers told us that the numerous pay problems they encountered would play a major role in their decisions about whether to reenlist. As summarized in table 9, we identified a number of pay problems associated with the unit’s mobilization to active duty. These problems resulted from failures by unit soldiers to provide necessary documentation to initiate certain pays, and from data entry errors or late entry of data needed to start active duty pays by Army Guard USPFO military pay personnel and/or by active Army military pay personnel at the unit’s mobilization station. We identified 31 out of 65 soldiers from Virginia’s B Company who did not receive certain types of pay at all, were not paid in a timely manner, or were overpaid after being activated on January 2, 2002. The types of pay for which most problems occurred during mobilization were parachute jump pay, foreign language proficiency pay, HALO pay, and basic pay. As shown in table 9, we identified 8 soldiers who were underpaid for jump pay, 10 soldiers who were underpaid for foreign language pay, and 10 soldiers who were overpaid for HALO pay. Prior to being mobilized, the soldiers in Virginia’s B Company attended a soldier readiness program at the USPFO at Fort Pickett, Virginia. Part of this program was intended to ensure that soldiers had proper administrative paperwork and financial documents necessary to start all entitled active duty pays at mobilization.
Virginia USPFO personnel who conducted the finance portion of B Company’s soldier readiness program verified soldiers’ supporting financial documentation and updated, if necessary, each soldier’s Master Military Pay Account (MMPA). This verification process disclosed that many soldiers had unresolved pay errors that had occurred as far back as 1996. According to U.S. Army Forces Command Regulation 500-3-3, these problems should have been corrected during required annual soldier readiness reviews conducted at the unit’s home station. As part of our analysis of the unit’s pay, we determined that some of these long-standing pay problems had been resolved. For example, over $22,500 was processed for 52 B Company soldiers and included in the soldiers’ pay distributions from October 2001 to March 2003. USPFO officials told us that they have been working with a sergeant from Virginia’s B Company who performed a detailed analysis of soldiers’ long-standing pay problems in addition to pay problems that occurred after January 2002, during the majority of their mobilization. This sergeant performed these pay-related tasks in addition to his mission-related duties as a professional engineer. After leaving the unit’s home station, B Company traveled to Fort Bragg, its active Army mobilization station. Fort Bragg personnel conducted a second soldier readiness program that was intended to identify and fix any pay issues not resolved at the home station. According to USPFO officials and active Army finance office officials at Fort Bragg, problems with jump pay and foreign language pay occurred at mobilization because the necessary documentation to support jump pay eligibility or language proficiency for a number of soldiers was not always provided to the USPFO or the mobilization station. For example, of the 8 soldiers in the unit who were underpaid for jump pay, 4 did not receive jump pay until mid-February 2002 and 1 did not begin to receive jump pay until mid-March.
In another instance, we identified 10 soldiers who were eligible to receive foreign language proficiency pay in January 2002, but did not receive payments for 1 or more months after they became eligible. Further, nine soldiers in the unit were eligible for HALO pay in January 2002. However, again, in part because of the lack of proper documentation from the unit’s soldiers, but also because of pay input errors at the active Army finance unit at Fort Bragg, pay problems occurred for seven of the nine soldiers during January 2002, the initial month of their mobilization. The seven soldiers eligible for HALO pay received both jump pay and HALO pay during January 2002, which resulted in overpayments to these soldiers. These overpayments occurred because Fort Bragg, unaware that the USPFO had previously processed HALO pay for these soldiers, processed HALO pay a second time, based on supporting documentation received from the unit. Also, we found that two soldiers, who were not eligible to receive HALO pay, received HALO pay for 3 months, and another soldier received HALO pay starting in January but did not become eligible for this pay until mid-April 2002. Documentation was not available to explain these errors. In May 2002, Virginia’s B Company left Fort Bragg and traveled to Afghanistan to assist in missions against al Qaeda and Taliban forces. While in Afghanistan, the soldiers encountered additional pay problems related to hardship duty pay, special duty assignment pay, and, to a lesser extent, hostile fire pay and basic pay. Also, the soldiers experienced problems in receiving the full amounts of their entitled HALO pay. Table 10 summarizes the pay problems we identified for the unit while it was deployed. Once the soldiers arrive in-theater, an active Army finance office assigned to the unit is responsible for initiating assignment and location-based pays for the unit’s soldiers in DJMS-RC.
However, we found that the active Army in-theater finance offices did not always know which units they were responsible for servicing or where those units were located. The in-theater finance office for Virginia’s B Company, located in Kuwait, did not start these pays as required. We were told that this occurred because finance personnel in Kuwait did not know that B Company had arrived in Afghanistan. Virginia’s B Company soldiers, who were not regularly receiving their leave and earnings statements while in Afghanistan, told us that based on conversations with their spouses, they became concerned that they were not receiving pays they were entitled to while deployed. After attempts to initiate location-based pays at the battalion finance unit in Afghanistan were unsuccessful because finance personnel at that location were not familiar with DJMS-RC’s transaction processing requirements for starting these types of pay, two soldiers were ordered to travel to Camp Snoopy, Qatar, where another Army finance office was located. Attempts to start assignment and location-based pays for the unit’s soldiers at Camp Snoopy were also unsuccessful. One of the soldiers told us that they flew to Kuwait because they were advised that the finance unit at that active Army finance office was more knowledgeable about how to enter the necessary transactions into DJMS-RC to pay the unit’s soldiers. The soldier told us he took with him, as support for starting the unit’s in-theater pays, an annotated battle roster listing the names of all Virginia’s B Company soldiers deployed in and around Afghanistan at that time and the dates they arrived in country. Finally, at Kuwait, the appropriate in-theater pays were activated and the two soldiers returned to Afghanistan. As shown in figure 11, the entire trip required interim stops at eight locations because of limited air transportation and took about a week.
Despite this costly, time-consuming, and risky procedure to start location-based pays for the unit, 63 of Virginia’s B Company soldiers, who became eligible for hardship duty pay in May 2002, did not receive their location-based pay entitlements until July 2002. Problems with special duty assignment pay also occurred during the unit’s deployment. We found that both underpayments and overpayments of this type of pay were made as a result of confusion about who was responsible for making the manual monthly transactions necessary for entitled soldiers in the unit to receive these pays. For example, 10 soldiers in B Company did not receive at least 1 month of entitled special duty assignment pay. Conversely, overpayments of this type of pay were made when B Company left Afghanistan and returned to Fort Bragg to demobilize in October 2002, and both the active Army finance office at Fort Bragg and the Virginia USPFO entered special duty assignment pay transactions for the unit’s eligible soldiers. Fort Bragg processed October and November 2002 special duty assignment payments for 24 of the unit’s soldiers in December 2002. Virginia’s USPFO, unaware that Fort Bragg had made these payments in December 2002, also paid all 24 eligible soldiers special duty assignment pay for October and November 2002 several months later. USPFO officials explained that their military pay office processed the payments because B Company submitted the necessary documentation certifying that the unit’s soldiers were entitled to receive back pay for missed special duty assignment pays. The officials told us that special duty assignment pay was processed because, having received this certification from the unit, they assumed that payments had not yet been made. Virginia’s B Company soldiers also experienced problems with HALO pay during deployment. We identified 11 B Company soldiers eligible for HALO pay who did not receive 1 or more months of this pay as of March 31, 2003.
We determined that these problems occurred because such pays require manual monthly input, and the pay technicians inadvertently did not make the required entries each month. In addition, 2 of the unit’s soldiers did not receive all hostile fire payments to which they were entitled. One soldier did not receive the first month of entitled hostile fire pay for May 2002, and the other soldier received hostile fire pay for May 2002 but not for the remaining months of his deployment. Although some soldiers in B Company left Afghanistan during August and September 2002, most of the unit returned to Fort Bragg in October 2002 to begin the demobilization process. As summarized in table 11, 57 soldiers continued to receive pays to which they were no longer entitled after they left Afghanistan, including hostile fire pay, hardship duty pay, or both. According to DOD mobilization procedures, the finance office at the servicing demobilization station is to conduct a finance out-processing. The finance office is responsible for inputting transactions to stop certain location-based pays, such as hardship duty pay and hostile fire pay. In addition, according to DOD’s Financial Management Regulation (FMR), Volume 7A, chapters 10 and 17, location-based pays must be terminated when the soldier leaves the hostile fire/combat zone. Overpayments to B Company soldiers occurred during demobilization because the in-theater finance office continued to make hostile fire and hardship duty pays after soldiers left Afghanistan in October 2002, and the Fort Bragg active Army finance office did not enter transactions into DJMS-RC to stop these payments as required. We found that 55 of 64 soldiers eligible for hostile fire pay were overpaid for at least 1 month beyond their departure from Afghanistan. Also, we found that 57 of 64 soldiers eligible for hardship duty pay were overpaid for at least part of 1 month.
A Fort Bragg official explained that the Army finance office personnel at Fort Bragg were not aware that these payments were still being made after the soldiers had returned to the United States, but subsequently determined that hostile fire and hardship duty overpayments were occurring and took action to terminate the payments. Also, four members of Virginia’s B Company, who were injured while deployed in Afghanistan, returned to Fort Bragg and requested medical extensions to their active duty tours so they could continue to receive active duty pay and medical benefits until they recovered. One of the soldiers told us, “People did not know who was responsible for what. No one knew who to contact or what paperwork was needed ….” To support themselves and their families, these four soldiers needed the active duty military pay they were entitled to receive while obtaining medical treatment and recovering from their injuries. However, after risking their lives for their country, all four had gaps in receiving active duty pay while they remained under a physician’s care after their demobilization date and experienced financial difficulties as a result. In addition, when active duty pay was stopped, the soldiers’ medical benefits were discontinued. As discussed earlier in this report, these pay-related problems for wounded soldiers caused significant hardship for them and their families. On December 5, 2001, West Virginia’s 19th Special Forces Group, 2nd Battalion, C Company, was called to active duty in support of Operation Enduring Freedom for a 1-year tour. The unit was mobilized at Fort Knox and subsequently deployed in Afghanistan, Uzbekistan, and surrounding areas to search for possible Taliban and al Qaeda terrorists. The unit returned to Fort Campbell for demobilization and was released from active duty on December 4, 2002. A timeline of the unit’s actions associated with its mobilization under Operation Enduring Freedom is summarized in figure 12.
As summarized in table 12, the majority of soldiers from C Company experienced some sort of pay problem during one or more of the three phases of their active duty mobilization. Overall, 86 of the 94 soldiers with the company experienced at least one pay problem associated with its mobilization. Specifically, we identified (1) 36 soldiers who were either overpaid, did not receive certain pay and allowance entitlements at all, or did not receive pay within 30 days of their initial mobilization entitlement, (2) 84 soldiers who were either overpaid, did not receive, or did not receive within 30 days, the hostile fire pay or other high-risk location pays they were entitled to receive based on their deployment in Uzbekistan and Afghanistan, and (3) 66 soldiers who did not receive, or did not receive within 30 days, their special duty assignment pay during their demobilization. In total, we identified estimated overpayments of $31,000, underpayments of $9,000, and late payments of $61,000 associated with the identified pay problems. We did not identify any collections related to overpayments for this unit. As summarized in table 13, several soldiers from C Company did not receive the correct pay or allowance when called to active duty. We found that some soldiers received payments over 5 months late and other soldiers had been overpaid. Seven soldiers did not receive their $225 per month HALO pay until over a month after mobilization, and 18 other soldiers received combat diver pay and HALO pay to which they were not entitled. Prior to being mobilized, the soldiers in C Company attended a soldier readiness program at their unit armory. This program was intended to ensure that all soldiers had proper administrative paperwork and financial documents and were physically fit for the ensuing mobilization. 
West Virginia USPFO personnel who conducted the finance portion of C Company’s soldier readiness program were required to verify soldiers’ supporting financial documentation and update, if necessary, soldiers’ pay records in DJMS-RC. Some payments were late because soldiers did not submit the correct paperwork at the time of the soldier readiness program. For example, according to the USPFO, one soldier did not submit the proper paperwork for his family separation allowance. The delay in submission caused his first payment to be over 3 months late. Another problem with the unit’s mobilization related to 17 soldiers who had significant problems with their HALO pay. According to USPFO personnel, the unit commander for C Company did not provide the USPFO a list of the unit members who were eligible to receive HALO pay. Therefore, the USPFO paid all the unit members who were parachute qualified the regular parachute pay. Once the USPFO received a list of the unit’s 17 HALO-qualified soldiers, pay personnel attempted to recoup the regular jump pay and pay the HALO team the increased HALO pay amount. USPFO personnel told us they did not know how to initiate a payment for the difference between regular jump pay and HALO pay. Consequently, they entered transactions to recoup the entire amount of jump pay and then initiated a separate transaction to pay the correct amount of HALO pay. According to the DOD FMR, volume 7A, chapter 24, soldiers who are eligible to receive regular parachute pay and HALO pay are paid the higher of the two amounts, but not both. In this case, the 17 members of C Company’s HALO team should have received a $225 per month payment from the beginning of their mobilization. Pay records indicate that this correction initiated by the USPFO occurred about 2 months after the unit mobilized.
When the USPFO personnel attempted to collect the soldiers’ regular parachute pay, they inadvertently collected a large amount of the soldiers’ basic active duty pay for the first month of their mobilization. Personnel at the USPFO stated that the error caused debts on soldiers’ accounts but was corrected immediately after a pay supervisor at the USPFO detected it in February. Even after the soldiers’ pay was corrected, USPFO personnel did not stop the regular parachute pay for the HALO team members, but instead let it continue, collected the $150 per month parachute pay manually, and then paid the correct $225 per month HALO pay. This error-prone, labor-intensive manual collection and subsequent payment method used by the USPFO personnel to pay C Company’s HALO team the higher HALO rate of pay was not consistently applied each month and resulted in 7 soldiers being overpaid when their regular parachute pay was not collected. In addition to the 7 soldiers who were actually on the HALO team, 10 other soldiers were on the initial list given to the USPFO but were not actually on the HALO team. The unit commander for C Company provided a more accurate list to the USPFO some time after the first list, and only members on the more accurate list continued to receive HALO pay. However, USPFO pay personnel did not attempt to collect the HALO pay from unit members on the first list who had incorrectly received HALO pay. As a result of this complex collection and payment process, the unit’s soldiers were confused about whether they were receiving all their entitled active duty pays while mobilized. After leaving the unit’s home station, C Company traveled to Fort Knox, its active Army mobilization station. As required by Army guidance, Fort Knox personnel conducted a second soldier readiness program to identify and fix unresolved pay issues associated with the unit’s mobilization.
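The FMR rule described above (a soldier eligible for both regular parachute pay and HALO pay receives the higher of the two monthly rates, not both) can be expressed as a simple computation. This is a minimal sketch, not DFAS or DJMS-RC code; the $150 and $225 monthly rates are those cited in this case.

```python
# Sketch of the pay rule described in this report (DOD FMR, vol. 7A, ch. 24):
# a soldier eligible for both regular parachute (jump) pay and HALO pay is
# paid the higher of the two monthly rates, not both.
JUMP_PAY = 150   # monthly regular parachute pay rate cited in this case
HALO_PAY = 225   # monthly HALO pay rate cited in this case

def monthly_parachute_pay(jump_qualified: bool, halo_qualified: bool) -> int:
    """Return the single monthly parachute-related payment due a soldier."""
    rates = []
    if jump_qualified:
        rates.append(JUMP_PAY)
    if halo_qualified:
        rates.append(HALO_PAY)
    return max(rates, default=0)

# A HALO-qualified jumper is due $225, not the $375 ($150 + $225) that the
# duplicate payments described above produced.
print(monthly_parachute_pay(True, True))   # prints 225
print(monthly_parachute_pay(True, False))  # prints 150
```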
Based on our findings that the pay problems continued after this review, it does not appear that the active Army finance office at Fort Knox carried out its responsibility to review and validate all of C Company soldiers’ active duty pay and allowance support records. Problems with HALO and family separation pay were not resolved for several months after the mobilization. As a result, the soldiers’ pay problems persisted into their deployment overseas. As summarized in table 14, we identified a number of pay problems associated with three different types of active duty pays related to the unit’s deployment. After going through initial in-processing at Fort Knox, C Company soldiers traveled to Fort Campbell where they prepared to deploy overseas. Starting in December 2001, members of C Company traveled to Uzbekistan and Afghanistan to perform special forces missions. During their deployment overseas, C Company soldiers consistently experienced problems related to specific location-based payments such as hostile fire pay and hardship duty pay. In 78 cases, the payments were not started within 30 days from when the soldiers were entitled to the payments. In 22 other cases, we determined that soldiers had not received all location-based pays as of March 31, 2003. In 60 cases, the soldiers were overpaid or payments were not stopped when they left the combat zones. Due to the lack of supporting documents at the state, unit, and battalion levels, dates for when each soldier entered and left combat zones were not always available. Consequently, there may have been other deployment-related pay problems for C Company that we were not able to identify. According to DFAS policy, when soldiers from C Company arrived in Uzbekistan, the in-theater finance office in Uzbekistan was responsible for initiating location-based payments for the unit. Unit personnel stated that the staff in the finance office in Uzbekistan were not adequately trained in how to input pays into DJMS-RC.
Initially, we were told the Uzbekistan finance office incorrectly believed it was the West Virginia USPFO’s responsibility to start location-based pays for the deployed soldiers from C Company. The active Army finance office in Uzbekistan instructed the unit to contact the West Virginia USPFO to start location-based pays. However, DFAS policy clearly states that it is the active Army in-theater finance office’s responsibility to start and maintain monthly location-based payments. After attempts by the unit administrator and the Uzbekistan finance office failed to initiate the payments, a sergeant in C Company was ordered to travel to Camp Doha, Kuwait, to have the unit’s location-based pays started. The soldier stated that he traveled to Camp Doha because he was told that the finance unit at that active Army finance location was more knowledgeable in how to enter transactions into DJMS-RC to initiate location-based pays for the unit’s soldiers. The soldier took with him all the necessary paperwork to have the pays started for all the companies under the battalion, including C Company. On the return flight from the sergeant’s mission in Kuwait, his plane encountered enemy fire and was forced to return to a safe airport until the next day. The failure by active Army personnel at the finance office in Uzbekistan to enter the transactions necessary to start location-based pays for the unit delayed payments to some soldiers for up to 9 months and put one soldier in harm’s way. Per DOD FMR, volume 7A, chapter 10, soldiers who perform duty in hostile fire zones are entitled to hostile fire pay as soon as they enter the zone. However, we found that 45 soldiers in C Company did not have their hostile fire pay started until over 30 days after they were entitled to receive it. Some of C Company’s soldiers received retroactive payments over 2 months after they should have received their pay. 
In addition, as of March 31, 2003, we determined that 18 soldiers from the unit were not yet paid for 1 or more months that they were in the hostile fire zone. We also identified 40 soldiers who received hostile fire pay after they had left the country and were no longer entitled to receive such pays. These overpayments occurred primarily because hostile fire pay is an automatic recurring payment based on the start and stop date for the soldier’s mobilization entered into DJMS-RC. However, in this case, the active Army finance office in Uzbekistan did not amend the stop dates for automated active duty pays in DJMS-RC to reflect that C Company left the designated area before the stop date entered into DJMS-RC. The active Army finance office’s failure to follow prescribed procedures resulted in overpayment of this pay to 40 soldiers. Per DOD FMR, volume 7A, chapter 17, soldiers who perform duties in designated areas for over 30 days are entitled to the hardship duty pay incentive. The FMR provides for two mutually exclusive types of hardship duty pay for identified locations—one according to specified “designated areas” and the other for specified “certain places.” Effective December 31, 2001, the regulation no longer permitted soldiers newly assigned to locations specified as “certain places” to begin receiving hardship duty pay. However, the regulation specified Afghanistan and Uzbekistan as designated areas and provided for paying $100 a month to each soldier serving there. While deployed to Afghanistan and Uzbekistan, 29 soldiers in C Company were mistakenly provided both types of hardship duty pay. The local finance office in Uzbekistan correctly entered transactions to start C Company’s hardship duty pay for designated areas into the DJMS-RC pay system. Due to limitations in DJMS-RC, the local finance office was required to manually enter the designated area payments for each soldier every month the unit was in a designated area. 
However, DFAS documentation shows that finance personnel at Fort Bragg incorrectly initiated a recurring certain places hardship duty payment for soldiers in C Company. For some soldiers, payments continued until May 31, 2002, and for others the payments continued until the end of their tour of active duty on December 4, 2002. These erroneous certain places hardship duty pays resulted in overpayments. In addition, because DJMS-RC processing capability limitations required the designated area payment to be manually entered every month the unit was in a designated area, the in-theater finance office in Uzbekistan failed to consistently enter the monthly designated area payments for all entitled soldiers. Throughout the time C Company was in Uzbekistan and Afghanistan, we identified a total of 5 soldiers who missed one or more monthly payments of entitled designated area hardship duty pay. Other soldiers received entitled payments over 9 months late. Still others were paid more than once for the same month or were paid after leaving the designated area, resulting in overpayments to 12 soldiers. The mix of erroneous certain places hardship duty payments and sporadic payments of the correct designated area hardship duty pay caused confusion for the soldiers of C Company and their families regarding what types of pay they were entitled to receive and whether they had received all active duty entitlements.
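The hardship duty pay rule described above reduces to a simple per-month computation. The sketch below is illustrative only, not DFAS code; it covers just the designated area rate and locations named in this case, and the 6-month figure is an assumption used for the example rather than a number from the report.

```python
# Sketch of the hardship duty pay rule described in this report (DOD FMR,
# vol. 7A, ch. 17): soldiers serving in a "designated area" such as
# Afghanistan or Uzbekistan were due $100 per month; this pay was mutually
# exclusive with the discontinued "certain places" pay.
DESIGNATED_AREAS = {"Afghanistan", "Uzbekistan"}  # locations named in this case
DESIGNATED_AREA_RATE = 100                        # dollars per month

def monthly_hardship_duty_pay(location: str) -> int:
    """Return the designated area hardship duty pay due for one month."""
    return DESIGNATED_AREA_RATE if location in DESIGNATED_AREAS else 0

# Because DJMS-RC could not automate this pay, a technician had to enter a
# separate transaction for each soldier for every month spent in theater,
# which is why missed months produced underpayments.
months_in_theater = 6  # illustrative assumption, not a figure from the report
print(monthly_hardship_duty_pay("Uzbekistan") * months_in_theater)  # prints 600
```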
To initiate this higher pay rate, the West Virginia Army National Guard military personnel office was required to issue new special duty assignment pay orders for all eligible C Company soldiers. USPFO officials stated that they could not pay the increased amount until they received a copy of the new orders. The USPFO personnel did not continue to pay the $110 a month to the soldiers because they did not want to have to recoup the old amount and then pay the correct amount when orders were received. However, the USPFO did not receive the orders for several months, which delayed the payment of the soldiers’ special duty assignment pay. Supporting documents showed that a delay in the production of the orders by the West Virginia Army National Guard military personnel office caused the late payments. For C Company, 63 soldiers received their last 3 months of special duty assignment pay over 30 days late. Another 3 soldiers did not receive their last 3 months of special duty assignment pay because the USPFO inadvertently overlooked the manual transaction entries required to process special duty assignment pay for those soldiers. On December 27, 2001, the Mississippi Army National Guard’s 114th Military Police Company was called to active duty in support of Operation Noble Eagle for a 1-year tour—through January 5, 2003. The unit mobilized in Clinton, Mississippi, and departed for Fort Campbell, Kentucky, on January 6, 2002. The unit in-processed at Fort Campbell and performed military police duties there for the next 5 months, until early June. On June 10, 2002, the 114th Military Police Company deployed to Guantanamo Bay, Cuba, to perform base security and guard duties for Taliban and al Qaeda prisoners. After guarding detainees in Cuba for approximately 6 months, the unit returned to Fort Campbell in late November 2002. 
At Fort Campbell the unit out-processed and returned to Clinton, Mississippi, and was released from active duty on January 5, 2003. A time line of actions associated with the unit’s active duty mobilization is shown in figure 13. As summarized in table 16, at every stage of the unit’s 1-year tour of active duty, soldiers experienced various pay problems. Of the 119 soldiers of the Mississippi Army National Guard’s 114th Military Police Company, 105 experienced at least one pay problem associated with mobilization in support of Operation Noble Eagle. Specifically, we found that (1) 21 soldiers experienced underpayments, overpayments, or late payments, or a combination of these, during their initial mobilization, including some soldiers who did not receive payments for up to 7 months after their mobilization dates, and others who still have not received certain payments, (2) 93 soldiers experienced underpayments, overpayments, late payments, or some combination, during their tour of active duty at Fort Campbell and in Cuba, including in-theater incentives such as hardship duty pay, and (3) 90 soldiers experienced underpayments, overpayments, late payments, or a combination of these, during their demobilization at Fort Campbell, including problems related to the continuation of in-theater incentives and overpayment of active duty pay after demobilization. In total, we identified estimated overpayments of $50,000, underpayments of $6,000, and late payments of $15,000 associated with the pay problems we found. Of the estimated $50,000 in overpayments, we identified about $13,000 that was subsequently collected from the unit’s soldiers. As summarized in table 17, we found that 21 soldiers from the 114th Military Police Company experienced underpayments, overpayments, late payments, or some combination related to pay and allowance entitlements when called to active duty. 
For example, several soldiers did not receive their entitled $100 per month family separation allowance until 7 months after mobilization, and several other soldiers did not receive the correct type of basic allowance for housing as specified in the DOD FMR, Volume 7A, chapter 26. Prior to being mobilized, the soldiers in the 114th Military Police Company attended a soldier readiness program at their unit armory. The purpose of this review was to ensure that all soldiers had proper administrative paperwork and financial documents and were physically fit for the ensuing mobilization. Mississippi USPFO personnel, who conducted the finance portion of the 114th Military Police unit’s soldier readiness program, were required to verify soldiers’ supporting financial documentation and update, if necessary, soldiers’ MMPAs. Soldiers’ failure to submit complete and current paperwork at the time of the soldier readiness program contributed to some of the late payments we identified. For example, some soldiers did not receive their family separation allowance because they did not provide documentation supporting custody arrangements. However, we also found that confusion at the USPFO over the eligibility of single parents contributed to these late pays. Not until later in the unit’s active duty tour did finance officers initiate action for 11 of the 114th Military Police unit’s soldiers to receive retroactive payments, some for as much as 7 months of back pay. In another case, a former Special Forces soldier improperly received jump pay even though his assignment to this military police unit did not require that special skill. Five soldiers improperly received active duty pay and allowances even though they did not mobilize with the unit. Because these five soldiers were not deployable for a variety of reasons, they were transferred to another unit that was not subject to the current mobilization. 
However, the delay in entering the transfer and stopping pay caused each of these soldiers to receive active duty pay for 10 days. Several other soldiers received promotions at the time of their mobilization, but state military pay personnel at the USPFO did not enter transactions for the promotions until several months later, resulting in late promotion pay to the affected soldiers. Delays by the unit in submitting the promotion paperwork or by the state personnel office in entering the promotion paperwork into the personnel system caused these problems. However, supporting documents were not available to enable us to determine the specific cause of the delays. After leaving the unit’s home station, the 114th Military Police Company traveled to Fort Campbell, its active Army mobilization station. As required by Army guidance, Fort Campbell personnel conducted a second soldier readiness program intended, in part, to verify the accuracy of soldiers’ pay records. However, instead of conducting a thorough review of each soldier’s pay record, Fort Campbell finance personnel performed only a perfunctory review by asking the soldiers if they were experiencing pay problems. At this point, because the soldiers had only recently mobilized and had not received their first paychecks, they were unaware of pay problems. The failure of Fort Campbell finance personnel to verify each soldier’s pay account, as required, allowed pay problems to persist past the mobilization stage. In addition, we were unable to determine specific causes for certain pay problems associated with the unit’s mobilization because the unit remobilized in February 2003, and unit administrative personnel did not retain payroll source documents relating to the prior mobilization. As summarized in table 18, we identified a number of pay problems associated with four types of active duty pays and allowances associated with the unit’s deployment while on active duty. 
While at Fort Campbell, eight soldiers experienced problems resulting from delays in entering changes in the family separation allowance, basic allowance for housing, and active duty pay increases from promotions. For example, one soldier was promoted to the rank of Private First Class at the end of May, but the pay system did not reflect the promotion until October. Although the soldier eventually received retroactive promotion pay, the delay caused the soldier to be paid at her old rank for 5 months. According to DFAS guidance, when a change occurs in a soldier’s pay, the on-site Army finance office should input the change. In cases where personnel changes occurred that affected pay, either the soldiers failed to submit documents or personnel at Fort Campbell failed to input the changes. Due to the lack of documentation, we could not determine the origin of the delays. During the unit’s deployment to Guantanamo Bay, Cuba, the soldiers encountered additional pay problems related to hardship duty pay, a location-based entitlement for soldiers serving at designated hardship duty locations. Some soldiers received extra hardship duty payments, while others were paid only sporadically. In total, only 9 of the 100 soldiers who deployed to Guantanamo Bay with the 114th Military Police Company received the correctly computed hardship duty pay. Per DOD FMR, Volume 7A, chapter 17, soldiers who perform duties in designated areas for over 30 days are entitled to the hardship duty pay incentive. The FMR provides for two mutually exclusive types of hardship duty pay for identified locations: one according to specified “designated areas” and the other for specified “certain places.” Effective December 2001, the regulation no longer permitted soldiers newly assigned to locations specified as certain places to begin receiving hardship duty pay. 
However, the regulation specified Guantanamo Bay, Cuba, as a designated area and provided for paying $50 a month to each soldier serving there. Most of the 114th Military Police unit’s soldiers were mistakenly provided both types of hardship duty pay while deployed to Cuba. Upon arrival in Cuba, the local Guantanamo Bay finance office correctly entered transactions to start hardship duty pay for designated areas for the 114th Military Police unit’s soldiers into DJMS-RC. However, unknown to Guantanamo finance personnel, Fort Campbell finance personnel, upon the unit’s departure to Cuba, incorrectly initiated recurring certain places hardship duty payments for the soldiers of the 114th Military Police unit. These payments of both types of hardship duty pay resulted in overpayments to 88 enlisted soldiers of the 114th Military Police Company during the time the soldiers were stationed in Cuba. In addition, as a result of personnel turnover and heavy workload in the active Army’s Guantanamo Bay finance office and limitations in DJMS-RC, the Guantanamo Bay finance office did not make all the monthly manual transaction entries required to pay hardship duty pays to the 114th Military Police Company’s soldiers. As a result, several soldiers in the unit did not receive one or more monthly hardship duty payments. Limitations in DJMS-RC required the local finance office to manually enter the designated area payments for each soldier on a monthly basis. For 11 soldiers, the finance office inadvertently overlooked entering one or more monthly hardship duty payments. The combination of erroneous certain places payments and sporadic payments of designated area hardship duty pays caused confusion for the soldiers, who were performing a stressful mission in Cuba, about whether they were receiving all their active duty pay entitlements. The 114th Military Police Company returned to Fort Campbell on November 23, 2002, to begin the demobilization process. 
During demobilization, soldiers continued to experience pay problems. As summarized in table 19, overpayment problems consisted of improper continuation of hardship duty pay following the unit’s return from Cuba and failure to stop active duty pay and allowances to soldiers who were discharged or returned from active duty early. According to the DOD FMR, Volume 7A, chapter 17, soldiers are entitled to receive hardship duty pay only while they are stationed in a hardship duty location. While the active Army’s Guantanamo Bay finance office stopped monthly designated area payments upon the unit’s departure from Cuba, the Fort Campbell finance office did not discontinue the incorrect certain places payments that its finance office had initiated months earlier. Consequently, 85 of the 114th Military Police unit’s 88 soldiers continued receiving the incorrect certain places payments through their last day of active duty. In addition, five soldiers continued to receive active duty pay and allowances after being discharged or returned from active duty. Instead of demobilizing on schedule with their unit, these five soldiers demobilized individually at earlier dates for various reasons. According to DFAS guidance, Fort Campbell, the designated demobilization station for the 114th Military Police Company, was responsible for stopping active duty pay for the unit’s demobilizing soldiers. However, when these individual soldiers were released from active duty, Fort Campbell processed discharge orders but Fort Campbell’s finance office failed to stop their pay. Further, in at least one case in which documentation was available, state USPFO military pay personnel did not immediately detect the overpayments in monthly pay system mismatch reports. For these five soldiers, overpayments continued for up to 3 months. One of these soldiers was discharged early because of drug-related charges. However, his pay continued for 3 months past his discharge date. 
By the time the USPFO stopped the active duty pay, the former soldier had received overpayments of about $9,400. Although the state USPFO military pay personnel stopped the active duty pay in September 2002, no attempt to collect the overpayment was made until we identified the problem. In July 2003, state military pay personnel initiated collection for the overpayment. Another soldier was discharged on July 8, 2002, for family hardship reasons, but his active duty pay was not stopped until August 15, resulting in an overpayment. Another 114th Military Police soldier was returned from active duty on September 11, 2002, for family hardship reasons, but his active duty pay was not stopped until November 30, resulting in an overpayment of about $8,600. Another soldier, facing disciplinary proceedings related to a domestic violence incident, agreed to an early discharge on May 22, 2002. However, the soldier’s active duty pay was not stopped until the unit administrative officer, while deployed in Cuba, reviewed the unit commander’s finance report, discovered that the soldier was still on company pay records, and reported the error. Following his discharge, this soldier continued to receive active duty pay until August 31, resulting in an overpayment. The 200th Military Police Company was called to active duty in support of Operation Noble Eagle on October 1, 2001, for a period not to exceed 365 days. The unit, including 90 soldiers who received orders to mobilize with the 200th Military Police Company, reported to its home station, Salisbury, Maryland, on October 1, 2001, and then proceeded to Camp Fretterd located in Reisterstown, Maryland, for the soldier readiness program (SRP) in-processing. On October 13, 2001, they arrived at their designated mobilization station at Fort Stewart, Georgia, where they remained for the next 2 weeks undergoing additional in-processing. 
The unit performed general military police guard duties at Fort Stewart until December 15, 2001, when 87 of the soldiers in the unit were deployed to guard the Pentagon. The company arrived at Fort Eustis, Virginia, in late August 2002 and was released from active duty on September 30, 2002. In addition, 3 of the 90 soldiers who received orders from the 200th Military Police Company were deployed in January 2002 to Guantanamo Bay, Cuba, to perform base security and guard duties with Maryland’s 115th Military Police Company. These soldiers demobilized at Fort Stewart, Georgia, where they were released from active duty on July 10, 2002. A time line of key actions associated with the unit’s mobilization under Operation Noble Eagle is shown in figure 14. As summarized in table 20, the majority of soldiers from the company experienced some sort of pay problem during one or more of the three phases of their active duty mobilization. Overall, 83 of the company’s 90 soldiers experienced at least one pay problem associated with their mobilization in support of Operation Noble Eagle. Pay problems included overpayments, underpayments, and late payments of entitlements, such as basic pay, basic allowance for housing, basic allowance for subsistence, family separation allowance, and hardship duty pay associated with their initial mobilization, deployment to Fort Stewart, the Pentagon, and Cuba; and demobilization from active duty status. In total, we identified estimated overpayments of $74,000, underpayments of $11,000, and late payments of $10,000, associated with the pay problems we identified. Of the estimated $74,000 in identified overpayments, we identified about $32,000 that was subsequently collected from the unit’s soldiers. 
Specifically, we determined that 75 soldiers were overpaid, underpaid, and/or paid late during the period of mobilization, including a soldier who did not receive correct payments for up to 7 months after the mobilization date; 64 soldiers experienced pay problems during their tour of active duty related to the proper payment of basic pay, basic allowance for subsistence, basic allowance for housing, family separation allowance, and location-based pays such as hardship duty pay; and 3 soldiers experienced pay problems during their demobilization from Fort Stewart related to continuation of active duty pay entitlements after they were released early from active duty. We identified a number of causes associated with these pay problems, including delays in submitting documents, incorrect data entry, and limited personnel to process the mass mobilizations. Maryland’s USPFO officials told us they had not experienced a large-scale mobilization to active duty in more than 10 years. As summarized in table 21, we identified a number of pay problems associated with eight different types of active duty pays and allowances associated with the unit’s mobilization to active duty. Seventy-five of 90 soldiers from the 200th Military Police Company did not receive the correct or timely entitlements related to basic pay, basic allowance for housing, basic allowance for subsistence, or family separation allowance when called to active duty. Thirteen soldiers received overpayments because they continued to receive pay after they were released early from active duty. These soldiers mobilized on October 1, 2001, and then received amended orders to be released from active duty around October 13, 2001. However, many continued to receive basic pay, basic allowance for subsistence, basic allowance for housing, and family separation allowance payments through the end of November 2001. 
The unit administrator stated that many of these soldiers received amended orders after their initial mobilization when it was determined that they were not deployable for a variety of reasons, such as health or family problems. The overpayments occurred because the Maryland Army Guard command was not informed by either unit personnel or the active component that individuals (1) did not deploy or (2) were released from active duty early. The Maryland Army Guard command initiated amendment orders to stop the active duty pays when it became aware of the problem; however, the orders were not generated in time for the USPFO to stop active duty pays in the system. Specifically, for pay to be stopped by October 13, 2001, the USPFO needed to receive and process the amended orders by October 8, 2001. However, the Maryland Army Guard command did not generate many of the amended orders until November 14, 2001, at which time they would have been sent to the unit and then forwarded to the USPFO too late to meet the pay cutoff. An additional soldier was issued an amended order to release him from active duty on October 13, 2001. Upon our review of his pay account, we determined that he continued to receive active duty pay and allowances for an entire year. We spoke with the unit administrator about this soldier and learned that he mobilized with the unit and was deployed for the entire year that he was paid. The unit administrator and Maryland Army Guard command, along with the USPFO pay officials, were not sure why the amendment order was never processed. They believe that the amendment fell through the cracks due to the general confusion and the limited personnel processing the mass mobilizations after September 11, 2001. Based on our inquiries, the Maryland Army Guard command generated an amendment on August 21, 2003, to reinstate the original order to avoid future questions regarding the soldier’s tour of duty. 
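The cutoff-timing failure just described, in which stop-pay transactions entered after the processing cutoff leave released soldiers in a paid status, can be illustrated with a short date calculation. The function and the example are our own sketch; the dates are the ones cited in the text, and actual overpayment dollar amounts would depend on each soldier's grade, years of service, and allowances.

```python
from datetime import date

# Hypothetical sketch of how overpayment days accumulate when a release from
# active duty misses the pay-processing cutoff. Only the dates below come
# from the text; the function itself is illustrative.
def overpaid_days(release_date: date, pay_actually_stopped: date) -> int:
    """Days of active duty pay received after the release date."""
    return max((pay_actually_stopped - release_date).days, 0)

# Soldiers released October 13, 2001, whose amended orders were not generated
# until November 14 and whose pay ran through the end of November 2001:
days = overpaid_days(date(2001, 10, 13), date(2001, 11, 30))
```

Multiplying the excess days by a soldier's daily rate of pay and allowances yields the amount to be recouped, which is why timely entry of amended orders before the cutoff matters.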
Further, 42 soldiers from the unit were underpaid their entitled family separation allowance when they mobilized. Soldiers are entitled to receive a family separation allowance after they have been deployed away from home for more than 30 days. We found that these underpayments occurred as a result of Maryland USPFO military pay officials’ errors in calculating the start and stop dates for this allowance. Several soldiers did not receive the correct type of basic allowance for housing after being mobilized as specified in the DOD FMR, Volume 7A, chapter 26. We were unable to determine specific causes and amounts of all the unit’s problems associated with the basic allowance for housing because the unit had remobilized in July 2003 and some of the historical records relating to housing entitlements applicable to the prior mobilization could not be located. Furthermore, the original unit administrator had retired, leaving limited records of the prior mobilization for the current unit administrator. Based on our inquiries, we determined that some soldiers were underpaid their housing allowance because the Maryland USPFO military pay officials entered the incorrect date for the tour and therefore shortened the unit’s soldiers’ allowance by 1 day. Other soldiers did not receive the correct amount for this allowance as a result of different interpretations of how to enter “dependent” information provided on housing allowance application forms (Form 5960). According to personnel officials, married soldiers are required to write in their spouses’ names as dependents on Form 5960 in order to receive the higher housing allowance amount. However, guidance did not clearly specify that simply checking the box indicating that they are married is not sufficient support to receive the higher housing allowance (with dependents) rate. 
As a result, several soldiers’ dependent information was not loaded into the personnel system correctly, and they were paid a single rate housing allowance instead of the higher married rate allowance. Other soldiers did not receive the correct housing allowance because they did not turn in complete forms and documentation to initiate the correct allowance rate or were late in turning in documents. For example, one soldier, who appeared to have submitted his lease agreement 6 days after being called to active duty, did not receive the correct housing allowance amount for the first 2 months of active duty. During his entire deployment, the soldier attempted to get various unit and military pay officials to take action to initiate back pay for these housing allowance underpayments, including forwarding copies of the lease agreement as proof for payment on three different occasions. As of March 30, 2003, the soldier had not received the correct housing allowance for October and November 2001. Another soldier did not receive the correct amount of housing allowance after his mobilization and complained to the unit administrator. Seven months after his initial mobilization to active duty, finance officials at the active duty station in Fort Belvoir, Virginia, who were attempting to correct the soldier’s housing allowance, instead inadvertently entered a transaction to collect the entire amount of the housing allowance previously paid to the soldier. Finance officials at Fort Belvoir subsequently entered a transaction to reverse the error and make a “catch-up” housing allowance payment to the soldier. As summarized in table 22, we identified a number of pay problems associated with five different types of active duty pays and allowances associated with the unit’s deployment. 
Sixty-two soldiers from the unit were overpaid their entitled subsistence allowance by active Army finance personnel while stationed at the Pentagon during the period of December 15, 2001, through December 31, 2001. Prior to this period, the soldiers were stationed at Fort Stewart and were not provided lodging or mess and properly received the full subsistence allowance. When the unit was redeployed to the Pentagon, mess facilities became available. However, active Army finance personnel did not reduce the unit’s subsistence allowance rate to reflect the available mess facilities. According to DOD FMR, Volume 7A, chapter 25, enlisted soldiers are not entitled to the full subsistence allowance when mess facilities are provided. In January 2002, three soldiers who received mobilization orders from the 200th MP Company left Fort Stewart and traveled with the 115th Military Police Company to Guantanamo Bay, Cuba, to assist with base security and guard duties. While in Cuba, the soldiers were either underpaid or late in receiving their entitled hardship duty pays. In accordance with DOD FMR, Volume 7A, chapter 17, soldiers who perform duties in “designated areas” for over 30 days are entitled to hardship duty pay. The FMR specifies Guantanamo Bay, Cuba, as a designated area and provides payment of $50 a month to soldiers serving there. While deployed to Cuba, the three soldiers were mistakenly paid the old type of hardship duty pay. Since hardship duty pay is not an automated pay, the active Army finance office at Guantanamo Bay was required to manually enter the “designated areas” payment each month for each soldier. While they were in Cuba, the three soldiers did not receive all their entitled hardship duty pays. Furthermore, the hardship duty pays they did receive were more than 30 days late. The 200th Military Police Company returned to Fort Eustis around the end of August 2002 to begin the demobilization process. 
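The subsistence-allowance rule described above, under which enlisted soldiers lose entitlement to the full rate once government mess facilities are available, can be sketched as a simple overpayment calculation. The daily rates here are hypothetical placeholders, not the actual 2001 basic allowance for subsistence rates; only the 17-day Pentagon period comes from the text.

```python
# Sketch of the subsistence-allowance rule described above: enlisted soldiers
# are not entitled to the full basic allowance for subsistence (BAS) when
# government mess facilities are provided. Both daily rates below are assumed
# placeholders, not actual 2001 rates.
FULL_BAS_DAILY = 8.00      # assumed full daily rate
MESS_RATE_DAILY = 2.00     # assumed reduced rate when mess is available

def bas_overpayment(days_with_mess: int) -> float:
    """Overpayment when the full rate continues despite an available mess."""
    return days_with_mess * (FULL_BAS_DAILY - MESS_RATE_DAILY)

# December 15-31, 2001, at the Pentagon: 17 days with mess facilities.
excess = bas_overpayment(17)
```

The point of the sketch is that the overpayment is mechanical once the rate change is missed: every day the finance office fails to switch rates adds the full-versus-reduced difference to the amount owed back.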
We did not identify any pay issues associated with the unit’s soldiers who were released from active duty on September 30, 2002 (the original date for the unit’s demobilization, designated on the mobilization orders). However, as shown in table 23, we did identify three soldiers who continued to receive active duty pay after their early release from active duty. Specifically, the three soldiers from the unit returned from Cuba, demobilized at Fort Stewart, and were released from active duty on July 10, 2002, while their original orders showed a September 30, 2002, release date. They continued to receive active duty pay and allowances through July 15, 2002. Fort Stewart did not provide the amended orders with the earlier release date to the Maryland USPFO office in time to stop the pay. On October 2, 2001, California’s 49th Military Police Headquarters and Headquarters Detachment (HHD) was mobilized to active duty for a period not to exceed 24 months. The 49th MP HHD mobilized at its home station, Pittsburg, California, and then proceeded to its designated mobilization station, Fort Lewis, Washington, on October 12, 2001. The unit performed its active duty mission at Fort Lewis, where it provided base security as part of Operation Noble Eagle. The unit was demobilized from active duty at Fort Lewis on July 28, 2002. A time line of the unit’s actions with respect to its mobilization under Operation Noble Eagle is shown in figure 15. Almost all soldiers from the 49th Military Police Company experienced some sort of pay problem during one or more of the three phases of the active duty mobilization. Overall, 50 of the 51 soldiers with the unit had at least one pay problem associated with their mobilization to active duty in support of Operation Noble Eagle. These pay problems included not receiving pays and allowances at all (underpayments), receiving some pays and allowances over 30 days after entitlement (late payments), and the overpayment of allowances. 
Specifically, as summarized in table 24, we found that (1) 48 soldiers did not receive certain pay and allowances within 30 days of their initial mobilization entitlement and (2) 41 soldiers did not receive, or did not receive within 30 days, the pay and allowances they were entitled to receive during their deployment. In total, we identified estimated overpayments of $17,000, underpayments of $1,300, and late payments of $67,000 associated with the pay problems we found. In addition, of the $17,000 in overpayments, we found that less than $100 was subsequently collected from the soldiers. We determined a number of causes for these pay problems. First, we found a lack of sufficient numbers of knowledgeable staff. In addition, after-the-fact detective controls were not in place, including a reconciliation of pay and personnel records and the reconciliation of pay records with the unit commander’s records of personnel actually onboard. Currently, as a matter of practice, pay and personnel representatives from the USPFO conduct a manual reconciliation between the pay and personnel system records approximately every 2 months. The purpose of the reconciliation is to ensure that for common data elements, the pay and personnel systems contain the same data. A USPFO official told us that while it is the USPFO’s goal to carry out such reconciliations each month, it currently does not have the resources required to do so. As summarized in table 25, we identified a number of pay problems associated with the unit’s mobilization to active duty. The initial cause of the pay problems was the failure of Army Guard USPFO military pay personnel and of active Army military pay personnel at the unit’s mobilization station to enter, or their late entry of, the transactions needed to start active duty pays. We also found that the underlying cause of the pay problems was a lack of sufficient numbers of knowledgeable personnel at the California USPFO and the Fort Lewis Finance Office. 
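The pay-to-personnel reconciliation described above, a detective control meant to confirm that common data elements agree across the two systems, can be sketched as a comparison of extracts from each system. The field names and record layout below are hypothetical, chosen only to show the shape of such a check.

```python
# Minimal sketch of the detective control described above: reconciling common
# data elements between pay-system and personnel-system extracts. Field names
# and the record layout are hypothetical.
def reconcile(pay_records, personnel_records, fields=("grade", "duty_status")):
    """Return (soldier_id, field, pay_value, personnel_value) mismatches."""
    mismatches = []
    for sid, pay in sorted(pay_records.items()):
        pers = personnel_records.get(sid)
        if pers is None:
            # A soldier paid but absent from personnel records is itself a flag.
            mismatches.append((sid, "missing_from_personnel", pay, None))
            continue
        for field in fields:
            if pay[field] != pers[field]:
                mismatches.append((sid, field, pay[field], pers[field]))
    return mismatches

pay = {"A1": {"grade": "E-3", "duty_status": "active"},
       "A2": {"grade": "E-4", "duty_status": "active"}}
pers = {"A1": {"grade": "E-4", "duty_status": "active"},      # promotion not yet in pay system
        "A2": {"grade": "E-4", "duty_status": "released"}}    # early release not yet in pay system
diffs = reconcile(pay, pers)
```

Run monthly, a comparison of this kind would surface exactly the problems the audit found: promotions paid at the old grade, and soldiers still drawing active duty pay after release.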
In addition, according to Army Guard and active Army officials, neither organization was prepared for the sheer volume of pay transactions associated with mobilizing soldiers to active duty. In total, 48 out of 51 soldiers of the 49th Military Police Company did not receive certain pay and allowances and incentive pays at all, or did not receive them within 30 days after being mobilized on October 2, 2001. The types of pay entitlements either not paid at all or paid late associated with the unit’s initial mobilization included basic pay, basic allowance for subsistence, basic allowance for housing, family separation allowance, and the continental United States cost of living allowance. The late payments during the mobilization phase primarily resulted from California USPFO military pay personnel’s lack of understanding of their responsibility for initiating active duty pays. According to DFAS reserve component mobilization procedures, the California USPFO was responsible for initiating these pays. However, a USPFO military pay official mistakenly instructed the unit to take its pay data to the mobilization station to enter transactions to start active duty pays. The USPFO official stated that the USPFO did not start the active duty pay and allowances at that time because a copy machine was not available to make copies of relevant active duty pay support documentation (such as a lease agreement needed to support a housing allowance entitlement). As a result, the responsibility for initiating this allowance was improperly passed to the active Army finance office at the Fort Lewis mobilization station. The Fort Lewis finance office lacked sufficient numbers of knowledgeable military pay staff to expeditiously enter the large volume of transactions necessary to start active duty pay entitlements for the 49th Military Police Company’s soldiers. 
DFAS guidance requires finance personnel at the mobilization station to review each soldier’s pay account to identify any errors and input the necessary correcting transactions into DJMS-RC. Initially, the mobilization station finance office assigned an insufficient number of personnel to the task of starting active duty pays for the unit’s 51 mobilizing soldiers. Moreover, one of the assigned pay technicians was not familiar with DJMS-RC and consequently entered data incorrectly for some of the unit’s soldiers. Also, the assigned pay technician initially failed to enter transactions to start pay and allowances for a significant number of the unit’s soldiers because the supporting documentation was misplaced. These documents were later found under a desk in the finance office. Recognizing this shortage of staff knowledgeable about DJMS-RC processing procedures, the Fort Lewis finance office asked the California USPFO to supply additional personnel and also temporarily reassigned soldiers from other units stationed at Fort Lewis to assist in the pay processing. Working together over a 2-month period after the unit was mobilized to active duty, these personnel were able to enter the omitted transactions needed to start active duty pays and correct the previous erroneous entries. In addition, the USPFO did not enter the required data to DJMS-RC to begin cost of living allowance pays for 36 of the unit’s soldiers. DFAS reserve component mobilization procedures state that the USPFO has the initial responsibility for initiating these pays. However, as discussed previously, the USPFO mistakenly sent the 49th Military Police Company to Fort Lewis with their pay documentation, and as a result, it was not until more than 2 months after the unit’s mobilization date that the Fort Lewis finance office pay technicians began to enter these transactions into DJMS-RC. 
The company commander for the unit told us that he was frustrated with the level of customer support his unit received as it moved through the initial mobilization process. Only two knowledgeable military pay officials were present to support active duty pay transaction processing for the 51 soldiers mobilized in his unit. He characterized the customer service his unit received at initial mobilization as very time-consuming and frustrating. As summarized in table 26, we identified pay problems involving six different types of active duty pays and allowances related to the unit's deployment while on active duty. These problems primarily resulted from a data entry error and inadequate document retention practices. For example, the USPFO paid one soldier her basic pay, basic allowance for subsistence, and basic allowance for housing nearly 4 months late. A USPFO official told us these late payments were caused when a USPFO pay technician entered an incorrect stop date for the soldier's active duty tour into DJMS-RC. The pay technician, after being notified of the error by the soldier, corrected the data in DJMS-RC; as a result, the soldier received her pay, but nearly 4 months late. Additionally, USPFO officials were unable to provide support explaining why five other soldiers continued to receive basic pay, the basic allowance for subsistence, and the basic allowance for housing after the date available records show their active duty tours had ended. Consequently, we identified the payments made to these five soldiers as overpayments. Overpayments of family separation allowances to soldiers in the unit resulted from a data entry error and inadequate USPFO document retention practices. A USPFO pay technician incorrectly coded a soldier's account to receive a family separation allowance when the soldier had been on active duty for only 2 weeks.
According to the DOD FMR, Volume 7A, chapter 27, soldiers are eligible for this allowance only after they have been separated for more than 30 days from their families on a continuous active duty assignment. This overpayment problem had not been resolved as of March 31, 2003. Additionally, USPFO officials were unable to provide supporting documentation explaining why five soldiers continued to receive a family separation allowance after available documentation showed that these soldiers' active duty tours had officially ended. We identified these family separation allowance payments for the five soldiers as overpayments. Late payments, underpayments, and overpayments of foreign language proficiency pay to the unit's soldiers primarily resulted from delayed or inadequate data entry. For example, our audit showed that USPFO pay technicians failed to enter transactions into DJMS-RC in a timely manner for four soldiers, resulting in late foreign language proficiency payments. In addition, USPFO pay technicians failed to enter any foreign language proficiency payment transactions for 1 month for one soldier and for 3 months for another, resulting in those soldiers being underpaid. This underpayment issue had not been resolved as of March 31, 2003. In another instance, a soldier received an overpayment of his entitled foreign language proficiency pay when a USPFO pay technician entered the wrong code. Approximately 3 months later, the USPFO pay technician identified the error and recovered the overpayment. Late payments, underpayments, and overpayments of cost of living allowances resulted from the inability of DJMS-RC to pay certain active duty pays and allowances automatically, inaccurate data entry, and inadequate document retention practices. For example, our audit discovered that USPFO pay technicians failed to manually enter cost of living allowance transactions into DJMS-RC in a timely manner for 37 soldiers, resulting in late payments to the soldiers.
In addition, USPFO officials were unable to provide sufficient documentation to explain why three soldiers appeared not to have received cost of living allowance payments due them for a 2-month period. We considered these pay omissions to be underpayments. An Army pay technician at the Fort Lewis finance office entered an incorrect code, thereby paying a soldier the wrong type of allowance, which resulted in an underpayment. California's 49th Military Police Company demobilized at Fort Lewis on July 28, 2002, and returned to its home station in Pittsburg, California. We did not identify any pay problems for this unit in the demobilization phase. To obtain an understanding of and assess the processes, personnel (human capital), and systems used to provide assurance that mobilized Army Guard soldiers were paid accurately and timely, we reviewed applicable policies, procedures, and program guidance; observed pay processing operations; and interviewed cognizant agency officials. With respect to applicable policies and procedures, we obtained and reviewed 10 U.S.C. Section 12302; DOD Directive Number 1235.10, "Activation, Mobilization & Demobilization of the Ready Reserve"; DOD FMR, Volume 7A, "Military Pay Policy and Procedures Active Duty and Reserve Pay"; and Army Forces Command Regulations 500-3-3, Reserve Component Unit Commander Handbook; 500-3-4, Installation Commander Handbook; and 500-3-5, Demobilization Plan. We also reviewed various Under Secretary of Defense memorandums; a memorandum of agreement between the Army and DFAS; and DFAS, Army, Army Forces Command, and Army National Guard guidance applicable to pay for mobilized reserve component soldiers. We also used the internal control standards provided in the Standards for Internal Control in the Federal Government.
We applied the policies and procedures prescribed in these documents to the observed and documented procedures and practices followed by the various DOD components involved in providing active duty pays to Army Guard soldiers. We also interviewed officials from the National Guard Bureau, State USPFOs, and Army and DOD military pay offices, as well as unit commanders, to obtain an understanding of their experiences in applying these policies and procedures. In addition, as part of our audit, we performed a review of certain edit and validation checks in DJMS-RC. Specifically, we obtained documentation on and performed walk-throughs of DJMS-RC edits performed on pay status/active duty change transactions, such as those intended to ensure that tour start and stop dates match MMPA dates and that a soldier cannot be paid basic pay and allowances beyond the stop date entered into DJMS-RC. We also obtained documentation on and performed walk-throughs of the personnel-to-pay system interface process, the order writing-to-pay system interface process, and the process for entering mobilization information into the pay system. We held interviews with officials from the Army National Guard Readiness Center, the National Guard Bureau, and DFAS Indianapolis and Denver to augment our documentation and walk-throughs. Because our preliminary assessment determined that current operations used to pay mobilized Army Guard soldiers relied extensively on error-prone manual transaction entry into multiple, nonintegrated systems, we did not statistically test current processes and controls. Instead, we used a case study approach to provide a more detailed perspective on the nature of pay deficiencies in the three key areas of processes, people (human capital), and systems.
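The kinds of edit checks described above, matching tour dates in a pay status change transaction against the master military pay account (MMPA) and rejecting basic pay dated after the tour stop date, can be sketched as follows. The function and field names are invented for illustration; this is not the actual DJMS-RC edit logic.

```python
# Hypothetical sketch of two DJMS-RC-style edit checks: (1) a pay
# status/active duty change transaction must carry tour dates that match
# the MMPA, and (2) basic pay and allowances cannot be paid beyond the
# recorded tour stop date. Names and data shapes are invented.
from datetime import date

def validate_tour_transaction(txn, mmpa):
    """Return a list of edit-check failures (an empty list means accepted)."""
    errors = []
    if txn["tour_start"] != mmpa["tour_start"]:
        errors.append("tour start date does not match MMPA")
    if txn["tour_stop"] != mmpa["tour_stop"]:
        errors.append("tour stop date does not match MMPA")
    return errors

def validate_payment(pay_date, mmpa):
    """Reject basic pay and allowances dated after the tour stop date."""
    if pay_date > mmpa["tour_stop"]:
        return ["payment date is after tour stop date"]
    return []

mmpa = {"tour_start": date(2001, 10, 2), "tour_stop": date(2002, 7, 28)}
txn = {"tour_start": date(2001, 10, 2), "tour_stop": date(2002, 9, 30)}
print(validate_tour_transaction(txn, mmpa))   # flags the stop-date mismatch
print(validate_payment(date(2002, 8, 15), mmpa))  # flags pay beyond the stop date
```

Edits of this kind would have caught several of the overpayments described above, in which soldiers continued to be paid after their recorded tours had ended.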
Specifically, we gathered available data and analyzed the pay experiences of Army Guard special forces and military police units mobilized to active duty in support of Operations Noble Eagle and Enduring Freedom during the period from October 2001 through March 2003. We audited six Army Guard units as case studies of the effectiveness of the controls over active duty pays in place for soldiers assigned to those units: Colorado B Company, 5th Battalion, 19th Special Forces; Virginia B Company, 3rd Battalion, 20th Special Forces; West Virginia C Company, 2nd Battalion, 19th Special Forces; Mississippi 114th Military Police Company; California 49th Military Police Company; and Maryland 200th Military Police Company. In selecting these six units for our case studies, we sought to obtain the pay experiences of units assigned to either Operation Enduring Freedom or Operation Noble Eagle. We further limited our case study selection to those units both mobilized to active duty and demobilized from active duty during the period from October 1, 2001, through March 31, 2003. From the population of all Army Guard units mobilized and demobilized during this period, we selected three special forces units and three military police units. These case studies are presented to provide a more detailed view of the types and causes of pay problems and the pay experiences of these units, as well as the financial impact of pay problems on individual soldiers and their families. We used mobilization data supplied by the Army Operations Center to assist us in selecting the six units we used as our case studies. We did not independently verify the reliability of the Army Operations Center database. We used the Army Operations Center data to select six states that had a large number of special forces or military police units that had been mobilized, deployed, and returned from at least one tour of active duty in support of Operations Noble Eagle and Enduring Freedom.
We chose California, Colorado, Maryland, Mississippi, Virginia, and West Virginia. From these six states, we selected three special forces and three military police units that had a variety of deployment locations and missions. We also identified and performed a limited review of the pay experiences of a unit still deployed during the period of our review: Colorado's 220th Military Police Company. The purpose of our limited review was to determine whether a more recently mobilized unit had experienced any pay problems. We also obtained in-depth information from soldiers at four of the six case study units. Using a data collection instrument, we asked for soldiers' views on pay problems and customer service experiences before, during, and after mobilization. Unit commanders distributed the instrument to soldiers in their units. There were 325 soldiers in these units; in total, we received 87 responses. The information we received from these data collection instruments is not representative of the views of the Army Guard members in these units or of Army Guard members overall. However, the information provides further insight into some of the pay experiences of selected Army Guard soldiers who were mobilized under Operations Noble Eagle and Enduring Freedom. We used DJMS-RC pay transaction extracts to identify pay problems associated with our case study units. However, we did not perform an exact calculation of the net pay soldiers should have received in comparison with what DJMS-RC records show they received.
Rather, we used available documentation and follow-up inquiries with cognizant USPFO personnel to determine whether (1) soldiers received their entitled active duty pays and allowances within 30 days of their initial mobilization date, (2) soldiers were paid within 30 days of the date they became eligible for active duty pays and allowances associated with their deployment locations, and (3) soldiers stopped receiving active duty pays and allowances as of the date of their demobilization from active duty. As such, our audit results reflect only the problems we identified. Soldiers in our case study units may have experienced additional pay problems that we did not identify. In addition, our work was not designed to identify, and we did not identify, any fraudulent pays and allowances to any Army Guard soldiers. Because of the lack of supporting documents, we likely did not identify all of the pay problems related to the active duty mobilizations of our case study units. For the pay problems we did identify, we counted each soldier's pay problem only in the phase in which it first occurred, even if it persisted into other phases. For purposes of characterizing pay problems for this report, we defined overpayments and underpayments as those pays or allowances for mobilized Army Guard soldiers during the period from October 1, 2001, through March 31, 2003, that were in excess of (overpayment) or less than (underpayment) the entitled payment. We considered as late payments any active duty pays or allowances paid to a soldier more than 30 days after the date on which the soldier was entitled to receive them. As such, these payments, although late, addressed a previously unpaid entitlement. We did not include any erroneous debts associated with these payments as pay problems. In addition, we used available data to estimate collections against identified overpayments through March 31, 2003.
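The classification rules above (overpayment, underpayment, and the 30-day threshold for a late payment) can be expressed as a small decision function. This is an illustrative sketch of the report's definitions, not an audit tool actually used; the data shapes are invented for the example.

```python
# Illustrative sketch of the report's pay-problem definitions: an amount
# above the entitlement is an overpayment, below it an underpayment, and
# an entitled amount paid more than 30 days after the entitlement date is
# a late payment. Data shapes are invented for the example.
from datetime import date, timedelta

LATE_THRESHOLD = timedelta(days=30)

def classify(entitled_amount, paid_amount, entitled_date, paid_date):
    """Return the list of pay-problem labels for one pay or allowance."""
    problems = []
    if paid_amount > entitled_amount:
        problems.append("overpayment")
    elif paid_amount < entitled_amount:
        problems.append("underpayment")
    if paid_date - entitled_date > LATE_THRESHOLD:
        problems.append("late payment")
    return problems

# A correct amount paid 3.5 months after entitlement is a late payment only.
print(classify(1000, 1000, date(2001, 10, 2), date(2002, 1, 15)))
# Too much paid promptly is an overpayment.
print(classify(1000, 1200, date(2001, 10, 2), date(2001, 10, 15)))
```

Under these definitions a payment can carry more than one label, which is consistent with the report counting overpayments, underpayments, and late payments separately.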
We did not attempt to estimate payments received against identified underpayments. We provided the support for the pay problems we identified to appropriate officials at each of our case study locations so that they could fully develop and resolve any additional amounts owed to the government or to the Army Guard soldiers. We briefed DOD and Army officials, National Guard Bureau officials, DFAS officials, and USPFO officials in the selected states on the details of our audit, including our findings and their implications. On October 10, 2003, we requested comments on a draft of this report. We received comments on November 5, 2003, and have summarized those comments in the "Agency Comments and Our Evaluation" section of this report. DOD's comments are reprinted in appendix VIII. We conducted our audit work from November 2002 through September 2003 in accordance with U.S. generally accepted government auditing standards.

GAO DRAFT REPORT DATED OCTOBER 10, 2003 GAO-04-89 (GAO CODE 192080) "MILITARY PAY: ARMY NATIONAL GUARD PERSONNEL MOBILIZED TO ACTIVE DUTY EXPERIENCED SIGNIFICANT PAY PROBLEMS"

RECOMMENDATION 1: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service (DFAS), in conjunction with the Under Secretary of Defense (Comptroller), to establish a unified set of policies and procedures for all Army Guard, Army, and DFAS personnel to follow for servicing active duty pays for Army Guard personnel mobilized to active duty. (p. 74/GAO Draft Report)

DoD RESPONSE: Concur.
DFAS and the Army are jointly building on the existing guidance and procedures as published in FORSCOM REG 500-3-3 (FORSCOM Mobilization and Deployment Planning System (FORMDEPS), Volume 3, Reserve Component Commanders' Handbook, dated July 15, 1999); the National Guard Standard Operating Procedure Contingency Operations; and the DFAS AIG Message dated December 19, 2002, Subject: Reserve Component Mobilization Procedures, to clearly define the roles and responsibilities among mobilization/demobilization stations, United States Property and Fiscal Offices (USPFOs), and deployed Army finance elements. A joint task force has been established to review existing procedural guidance, lessons learned to date, and available metrics. As a first step, expanded central guidance will be published within the next 30 days, which will further articulate the specific responsibilities of the servicing finance activities. This breakout of responsibilities will also be provided in a simple matrix form to visually reinforce this guidance. Within approximately 60 days, the Army and DFAS will begin compliance reviews of the mobilization/demobilization stations to ensure adherence to published guidance and to provide any further assistance these offices may require. Within the next 3 to 6 months, the task force will build upon the existing guidance to provide comprehensive procedures and related standards, down to the individual technician level, for all offices and units responsible for pay input support of mobilized soldiers.

RECOMMENDATION 2: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to establish performance measures for obtaining supporting documentation and processing pay transactions. (p. 75/Draft Report)

DoD RESPONSE: Concur. Standards for the timeliness of processing pay transactions are currently in place for units, finance offices, and the central site.
However, these standards are focused on the full range of transactions, and the associated unit-level data are generated based on the normal permanent/home station relationship with a Reserve Component Pay Support Office. Within the next 6 months, DFAS and the Army will jointly review how these existing mechanisms can be used to more succinctly capture data specifically related to mobilized soldiers and units.

RECOMMENDATION 3: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to establish who is accountable for stopping active duty pays for soldiers who return home separate from their units. (p. 75/Draft Report)

DoD RESPONSE: Concur. Within the next 30 days, DFAS, in cooperation with the Army, will reinforce existing procedures on responsibilities for stopping active duty pays for soldiers who return home separate from their units. This will be part of the revised guidance identified in the response to recommendation 1. In addition, mechanisms have been established to perform automated comparisons of personnel demobilization records and the Defense Joint Military Pay System - Reserve Component (DJMS-RC) to identify any demobilizing soldiers whose tours in the pay system were not adjusted to coincide with the demobilization date.

RECOMMENDATION 4: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to clarify the policies and procedures for how to properly amend active duty orders, including medical extensions. (p. 75/Draft Report)

DoD RESPONSE: Concur. For medical extensions, the Army published revised guidance on June 10, 2003, reinforcing procedures on this process. Included were the requirements for publishing orders prior to the end date of the current active duty tour.
Concerning the specific case in Colorado cited by the GAO, DFAS and the Army have implemented changes to the input systems to warn the operator processing a tour cancellation when the correct input should be a tour curtailment. Action is complete.

RECOMMENDATION 5: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to require Army Guard commands and unit commanders to carry out complete monthly pay and personnel records reconciliations and take necessary actions to correct any pay and personnel record mismatches found each month. (p. 75/Draft Report)

DoD RESPONSE: Concur. Within 60 days, the Army will reinforce to all reserve commands the importance of this requirement. As noted by the GAO, this requirement is already included in US Army Forces Command Regulation 500-3-3, Unit Commander's Handbook.

RECOMMENDATION 6: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to update policies and procedures to reflect current legal and DoD administrative requirements with respect to active duty pays and allowances and transaction processing requirements for mobilized Army Guard soldiers. (p. 75/Draft Report)

DoD RESPONSE: Concur. In Fiscal Year 2004, DFAS, the Army, and the National Guard will respectively update the cited regulations under their cognizance to the most current and accurate requirements.
RECOMMENDATION 7: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to consider expanding the scope of the existing memorandum of understanding between DFAS and the Army concerning the provision of resources to support surge processing at mobilization and demobilization sites to include providing additional resources to support surge processing of pay start and stop transaction requirements at Army Guard home stations during initial soldier readiness programs. (p. 75/Draft Report)

DoD RESPONSE: Concur. The Army will work with the National Guard on resourcing the USPFOs for mobilization/demobilization surges. However, the memorandum of understanding between DFAS and the Army pertains only to the management and resourcing of Defense Military Pay Offices, to include their role in support of mobilization/demobilization stations. As such, it is not the appropriate vehicle to address staffing of USPFOs under the National Guard.

RECOMMENDATION 8: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to determine whether issues concerning resource allocations for the military pay operations identified at our case study units exist at all 54 USPFOs, and, if so, take appropriate actions to address these issues. (p. 76/Draft Report)

DoD RESPONSE: Concur. To support surge requirements, the National Guard could use additional National Guard soldiers being brought on active duty in a Temporary Tour of Active Duty status to augment the USPFO staff based on mobilization workload requirements. The additional requirement and funding will need to be addressed by the supplemental provided to the Army. Normal manning of the USPFO Military Pay Section is based on Full Time Support authorized state strength levels.
RECOMMENDATION 9: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to determine whether concerns over relatively low graded military pay technicians identified at our case study units exist at all 54 USPFOs, and, if so, take appropriate actions to address these issues. (p. 76/Draft Report)

DoD RESPONSE: Concur. The grade levels in the USPFOs' Comptroller sections and the current grade levels for military pay technicians were validated as correct under OPM standards. Action is complete.

RECOMMENDATION 10: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to modify existing training policies and procedures to require all USPFO and active Army pay and/or finance personnel responsible for entering pay transactions for mobilized Army Guard soldiers to receive appropriate training upon assuming such duties. (p. 76/Draft Report)

DoD RESPONSE: Concur. The National Guard has instituted mobilization-specific training for pay technicians. The National Guard Financial Services Center quality assurance program is currently used to monitor completion of JUMPS Standard Terminal Input System (JUSTIS) training for USPFO military pay technicians. The US Army Reserve Command (USARC) has expanded training programs on DJMS-RC to help support the immediate training needs of deploying units and mobilization/demobilization stations. Over 35 training events have occurred since February 2002 in support of deploying units and mobilization/demobilization sites. The Army finance school is working with USARC to develop an exportable training package on DJMS-RC, which should be available within the next 6 months. Additionally, DFAS and the Army are sending a joint training team to Kuwait and Iraq in November 2003 to specifically address reserve component support.
For the midterm (6 months to 2 years), the training on reserve component pay input for soldiers in finance battalions and garrison support units will be evaluated to determine how best to expand the training within the Army total training infrastructure, particularly in light of the planned integration of reserve and active component pay processing into a single system. The Army finance school is already evaluating the expansion of the current instruction on mobilized reserve component pay in the training curriculum for the finance advanced individual training course.

RECOMMENDATION 11: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to require unit commanders to receive training on the importance of adhering to requirements to conduct annual pay support documentation reviews and carry out monthly reconciliations. (p. 76/Draft Report)

DoD RESPONSE: Concur. The importance of conducting annual pay support documentation reviews and monthly reconciliations will be incorporated into precommand courses at the company level for the National Guard by the end of Fiscal Year 2004.

RECOMMENDATION 12: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to establish an ongoing mechanism to monitor the quality and completion of training for both pay and finance personnel and unit commanders. (p. 76/Draft Report)

DoD RESPONSE: Concur. The National Guard currently reviews the training status of military pay technicians at the USPFOs as part of the ongoing quality assurance review program. The appropriate mechanism for monitoring the training of unit commanders and finance battalion personnel is dependent on the location of that training in the overall Army training infrastructure (i.e.,
unit training is assessed as part of the annual External Evaluation (ExEval)) and, as such, will be considered as part of the overall evaluation of the reserve pay training addressed in the response to recommendation 10.

RECOMMENDATION 13: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to identify and evaluate options for improving customer service provided to mobilized Army Guard soldiers by providing improved procedures for informing soldiers of their pay and allowance entitlements throughout their active duty mobilization. (p. 76/Draft Report)

DoD RESPONSE: Concur. Within the next 30 days, the Army will prepare a standard information flyer to be given to all mobilizing reservists. The flyer will address entitlements as well as sources of pay support. The flyer will be published via Army Knowledge Online and incorporated into the overall revision of procedural guidance addressed in the response to recommendation 1.

RECOMMENDATION 14: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to identify and evaluate options for improving customer service provided to mobilized Army Guard soldiers with respect to providing a single, well-advertised source for soldiers and their families to access for customer service for any pay problems. (p. 77/Draft Report)

DoD RESPONSE: Concur. The existing centralized information sources on individual soldiers' pay will be expanded. Specifically, DFAS will continue to add functionality to myPay for input of discretionary actions. Additionally, DFAS is developing a separate view-only Personal Identification Number capability that soldiers will be able to give their dependents so they can see the Leave and Earnings Statement without being able to change anything on the pay record. This enhancement is scheduled for August 2004.
DFAS also operates a central customer service center for pay inquiries for all Services. The toll-free number for this center, as well as the myPay internet address, will be incorporated in the flyer discussed in the response to recommendation 13 and will continue to be advertised in locations such as Army Knowledge Online. Until the implementation of DIMHRS, with full integration of pay and personnel, the processing of pay transactions will still require the movement of some entitlement information/authorization from units and personnel to finance via paper. As such, a network of finance support activities is required to geographically align with deployed combat and supporting personnel units. As always, pay remains essentially a command responsibility. For the individual soldier, the single source of pay support is his or her unit, which in turn interfaces with the appropriate finance and personnel activities. For dependents of deployed soldiers, the single source for finance, or any administrative issues, is either the rear detachment of the soldier's deployed unit or, for the National Guard, the applicable State Family Assistance Coordinator.

RECOMMENDATION 15: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to review the pay problems we identified at our six case study units to identify and resolve any outstanding pay issues for the affected soldiers. (p. 77/Draft Report)

DoD RESPONSE: Concur. The National Guard Financial Services Center is working with each of the identified units and supporting USPFOs to ensure all pay issues are resolved. The Army and DFAS will continue to work the correction of any specific cases identified as still open for these units. As noted by the GAO, many of the cases identified have already been resolved or involved a delay in payment over 30 days from entitlement rather than an actual unresolved discrepancy.
RECOMMENDATION 16: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to evaluate the feasibility of using the personnel-to-pay interface as a means to proactively alert pay personnel of actions needed to start entitled active duty pays and allowances. (p. 77/Draft Report)

DoD RESPONSE: Concur. Within the next 6 months, we will evaluate the feasibility of using the personnel-to-pay interface as a means to proactively alert pay personnel of actions needed to start entitled active duty pays and allowances.

RECOMMENDATION 17: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service, in conjunction with the Under Secretary of Defense (Comptroller), to evaluate the feasibility of automating some or all of the current manual monthly pays, including special duty assignment pay, foreign language proficiency pay, hardship duty pay, and high altitude, low opening jump pay. (p. 77/Draft Report)

DoD RESPONSE: Concur. Programming changes to DJMS-RC have been implemented to enhance the processes for special duty assignment pay and foreign language proficiency pay. However, monthly input is still required. Hardship duty pay is scheduled for implementation in April 2004. High altitude, low opening jump pay requires manual computation and input of a transaction for payment. The small volume of members entitled to this pay has neither justified nor provided an adequate return on investment for this automation. DFAS has recognized the urgency of improving the military pay system capabilities supporting our Service members. A study of improvement alternatives was conducted in the fall of 2002, which concluded that a new commercial off-the-shelf based payroll capability ("Forward Compatible Payroll" (FCP)) was the best option to expeditiously improve our payroll services.
FCP is currently prototyping military entitlements and deductions and has already demonstrated that DJMS-RC’s current monthly manual pays can be automated rapidly in the new commercial off-the-shelf-based environment. RECOMMENDATION 18: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service, in conjunction with the Under Secretary of Defense (Comptroller), to evaluate the feasibility of eliminating the use of the “other credits” for processing Hardship Duty (Designated Areas); high altitude, low opening jump pay; and special duty assignment pay, and instead establishing a separate component of pay for each type of pay. (p. 77/Draft Report) DoD RESPONSE: Concur. Hardship duty pay is scheduled for automation in April 2004. We will also recommend that automation of high altitude, low opening jump pay be included in FCP. We acknowledge that the information available to the member is inadequate in today’s system. This has already been addressed in the FCP requirements. Each pay is designed to provide fully automated computation capability for active, Reserve/Guard and detailed leave and earnings statement reporting to the Service member through myPay. FCP will use legacy military pers/pay data feeds to create a single military pay record for each Service member supporting all Service component affiliations and duty statuses. FCP will resolve pay systems capability related problems described in this report. Until FCP has been implemented, we will ensure that pays paid under “other credits” are included in the flyer addressed in response to recommendation 13. In addition, DFAS will update the DFAS Reserve Component Mobilization Procedures to mandate a remark be entered on the service member’s leave and earnings statement for pays paid under “other credits” to inform the service member exactly what entitlement(s) they have been paid.
RECOMMENDATION 19: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to evaluate the feasibility of using the JUSTIS warning screen to help eliminate inadvertent omissions of required monthly manual pay inputs. (p. 78/Draft Report) DoD RESPONSE: Concur. The National Guard will develop a JUSTIS table identifying all applicable soldiers in order to notify the USPFO technician of accounts requiring monthly entitlement input. This will be more efficient and effective than a pop-up warning screen, which would appear only if the individual soldier’s social security number were input. RECOMMENDATION 20: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service, in conjunction with the Under Secretary of Defense (Comptroller), to evaluate the feasibility of redesigning the leave and earnings statement to provide soldiers with a clear explanation of all pay and allowances received so that they can readily determine if they received all and only entitled pays. (p. 78/Draft Report) DoD RESPONSE: Concur. We will assist soldiers in understanding their leave and earnings statement by reviewing and updating (as necessary) the information provided on our website(s); by providing independent leave and earnings statement remarks for present and future changes; by continuing to provide the USPFOs and Reserve Component Pay Support Offices with monthly newsletters; and, effective immediately, by providing the finance battalions/Defense Military Pay Offices with the National Guard newsletter. For the future, FCP is being designed with an easily understandable leave and earnings statement as one of the main requirements. Each pay is designed to provide fully automated computation capability for active, Reserve/Guard and detailed leave and earnings statement reporting through myPay.
FCP will use legacy military pers/pay data feeds to create a single military pay record for each Service member supporting all Service component affiliations and duty statuses. FCP will also resolve pay systems capability related problems described in this report. RECOMMENDATION 21: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service, in conjunction with the Under Secretary of Defense (Comptroller), to evaluate the feasibility of establishing an edit check and requiring approval before processing any debt assessments above a specified dollar amount. (p. 78/Draft Report) DoD RESPONSE: Concur. The DFAS has already updated its current input system (Defense MilPay Office) to provide a warning to field finance personnel concerning the debt impact of tour cancellation (rather than modification) for Reserve/Guard members. DJMS-RC would require a small to medium system change to edit debts that exceeded an established threshold or required approval. Secondary manual processing would be required to start the collection process or delete the debts. RECOMMENDATION 22: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service, in conjunction with the Under Secretary of Defense (Comptroller), as part of the current effort underway to reform DoD’s pay and personnel systems, referred to as DIMHRS, to incorporate a complete understanding of the Army Guard pay problems as documented in this report into the requirements development for this system. (p. 78/Draft Report) DoD RESPONSE: Concur. The DFAS has provided detailed military pay requirements input to the DIMHRS Program that support fully automated computation of all military pay entitlements and deductions. The DIMHRS system military pay requirements submitted by DFAS would resolve system related pay problems as described in this report.
DIMHRS is envisioned to create a single military personnel/pay record for each Service member supporting all Service component affiliations and duty statuses. RECOMMENDATION 23: The GAO recommended that DoD deal not only with the systems problems identified, but also with the human capital and process aspects when developing DIMHRS. (p. 78/Draft Report) DoD RESPONSE: Concur. The DFAS and Army have been actively involved in recommending an improved operational military pers/pay concept in the DIMHRS environment. Procedural changes are clearly required to capitalize on the opportunities afforded by a modern fully integrated personnel and pay system, including improvements in process cycle time, customer service, and accountability. The DFAS is working with the Army DIMHRS Office to document existing workflow and roles and responsibilities. The DIMHRS Program is still in the very early stages of determining when and how integrated processes and workflows will be incorporated into the DIMHRS based operational concept. The DIMHRS “Joint Service Functional Concept of Operations,” dated July 15, 2003, page 14, indicates that the current plan is to “…initially mirror the existing ‘As-Is’ structure until the new capability has been fielded and risk factors/requirements have been clearly identified. A determination of what additional skills and expertise are required for operators of a knowledge-based personnel community must be made after the capabilities of the commercial off the shelf product are fully known.” Staff making key contributions to this report include: Paul S. Begnaud, Ronald A. Bergman, James D. Berry, Jr., Amy C. Chang, Mary E. Chervenic, Francine M. DelVecchio, C. Robert DeRoy, Dennis B. Fauber, Jennifer L. Hall, Charles R. Hodge, Jason M. Kelly, Julia C. Matta, Jonathan T. Meyer, John J. Ryan, Rebecca Shea, Crawford L. Thompson, Jordan M. Tiger, Patrick S. Tobo, Raymond M. Wessmiller, and Jenniffer F. Wilson.
In light of the recent mobilizations associated with the war on terrorism and homeland security, GAO was asked to determine if controls used to pay mobilized Army Guard personnel provided assurance that such pays were accurate and timely. GAO's audit used a case study approach to focus on controls over three key areas: processes, people (human capital), and systems. The existing processes and controls used to provide pay and allowances to mobilized Army Guard personnel are so cumbersome and complex that neither DOD nor, more importantly, the mobilized Army Guard soldiers could be reasonably assured of timely and accurate payroll payments. Weaknesses in these processes and controls resulted in over- and underpayments and late active duty payments and, in some cases, large erroneous debt assessments to mobilized Army Guard personnel. These pay problems severely constrained DOD's ability to provide accurate and timely active duty pay to these personnel, many of whom were risking their lives in combat in Iraq and Afghanistan. In addition, these pay problems have had a profound financial impact on individual soldiers and their families. For example, many soldiers and their families were required to spend considerable time, sometimes while the soldiers were deployed in remote, combat environments overseas, seeking corrections to active duty pays and allowances. The pay process, involving potentially hundreds of DOD, Army, and Army Guard organizations and thousands of personnel, was not well understood or consistently applied with respect to determining (1) the actions required to make timely, accurate pays to mobilized soldiers, and (2) the organization responsible for taking the required actions. With respect to human capital, we found weaknesses including (1) insufficient resources allocated to pay processing, (2) inadequate training related to existing policies and procedures, and (3) poor customer service.
Several systems issues were also a significant factor impeding accurate and timely payroll payments to mobilized Army Guard soldiers, including (1) non-integrated systems, (2) limitations in system processing capabilities, and (3) ineffective system edits.
In 1972, Congress passed FACA in response to a concern that federal advisory committees were proliferating without adequate review, oversight, or accountability. FACA states that Congress intended that the number of advisory committees be kept to the minimum necessary, and that the advisory committees operate under uniform standards and procedures in the full view of Congress and the public. Although Congress recognized the value of advisory committees to public policymaking, it included in FACA measures intended to ensure that (1) valid needs exist for establishing and continuing advisory committees, (2) the committees are properly managed and their proceedings are as open as possible to the public, and (3) Congress is kept informed of the committees’ activities. Congress ensured through FACA that the public had access to advisory committee information and activities, including charters, reports, and transcripts of committee meetings and other records. Under FACA, the President, the Director of the Office of Management and Budget (OMB), and agency heads are to control the number, operations, and costs of advisory committees. To help accomplish these objectives, FACA directed that a Committee Management Secretariat be established in OMB to be responsible for all matters relating to advisory committee administration. In 1977, the President transferred advisory committee functions from OMB to GSA. The President also delegated to GSA all of the functions vested in the President by FACA, except that the annual report to Congress required by section 6(c) of the act was to be prepared by GSA for the President’s consideration and transmittal to Congress. To fulfill its responsibilities, GSA has developed regulations and other guidance to assist agencies in implementing FACA, has provided training to agency officials, and was instrumental in creating and has collaborated with the Interagency Committee on Federal Advisory Committee Management. 
GSA is also in the process of linking an internet-based reporting system with its internal database that is used to track committee transactions. FACA requires that each agency head designate an advisory committee management officer to help manage the committees, and that designated federal officials shall be responsible for the individual committees. According to FACA, a committee’s designated federal official must approve or call a committee meeting, approve the agenda, and chair or attend each meeting. In February 1993, the President issued Executive Order 12838, which directed agencies to reduce the number of discretionary advisory committees by at least one-third by the end of fiscal year 1993. Discretionary committees are those created under agency authority or authorized by Congress. OMB, in providing guidance to agencies on the executive order, established a ceiling on the number of discretionary advisory committees for each agency and a monitoring plan. Under the guidance, agencies were to annually submit committee management plans to OMB and GSA. These plans were to include performance measures that were to be used to evaluate each committee’s goals or mission, information on new committees planned for the upcoming year, actions taken to maintain reduced committee levels, and the results of a status review of nondiscretionary committees, which are committees mandated by Congress or established by the President. OMB approval was required before the creation of new discretionary committees. Later, in 1995, OMB dropped the requirement for prior approval of new committees, as long as an agency was beneath its approved ceiling. On balance, the number of federal advisory committees has declined since fiscal year 1988. There were 1,020 advisory committees in fiscal year 1988. The number of advisory committees grew to 1,305 in fiscal year 1993 and then declined over the next several years to 963 committees in fiscal year 1997.
This decrease occurred after the President’s February 1993 executive order to reduce the number of advisory committees. Advisory committees are made up of individuals, not organizations, and a total of 36,586 individuals served as members of the 963 committees in fiscal year 1997. Members of the 1,020 committees in fiscal year 1988 numbered 21,236 individuals. From fiscal years 1988 to 1997, the number of individuals serving on advisory committees generally increased. Advisory committees incur costs to operate, and GSA reported that the cost to operate the 963 committees in fiscal year 1997 was about $178 million. By comparison, the cost to operate the 1,020 committees in fiscal year 1988 was about $93 million. Costs rose steadily through fiscal year 1992 and increased only sporadically thereafter. In constant 1988 dollars, the costs to operate advisory committees went from about $93 million in fiscal year 1988 to about $136 million in fiscal year 1997. On average, between fiscal years 1988 and 1997, the number of members per advisory committee increased from about 21 to 38, and the cost per advisory committee increased from $90,816 to $184,868. In constant 1988 dollars, the average costs per advisory committee increased from $90,816 to $140,870 over the same period. Appendix I contains statistics on the number of federal advisory committees and their (unadjusted) costs and membership from fiscal years 1988 through 1997. In 1988, we reported that GSA had focused its oversight responsibilities under FACA on preparing the President’s annual reports to Congress and issuing guidance to agencies.
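The per-committee averages and constant-dollar figures above follow from simple division and deflation. As an illustrative check only (not part of GSA's methodology), the rounded totals given in this report reproduce the reported averages to within rounding error:

```python
# Illustrative check of the per-committee averages reported above, using the
# report's rounded totals ($93 million and $178 million). Because the totals
# are rounded to the nearest million, the computed averages differ slightly
# from the exact figures GSA derived from unrounded data.

totals = {
    1988: {"committees": 1020, "cost": 93_000_000},   # nominal dollars
    1997: {"committees": 963,  "cost": 178_000_000},  # nominal dollars
}

for year, t in totals.items():
    avg = t["cost"] / t["committees"]
    print(f"FY{year}: average cost per committee is about ${avg:,.0f}")

# The report gives FY1997 costs as about $136 million in constant 1988
# dollars, implying that prices rose by a factor of roughly 178/136, or
# about 1.31, between fiscal years 1988 and 1997.
print(f"Implied price factor, FY1988 to FY1997: {178 / 136:.2f}")
```

The same arithmetic applied to the constant-dollar total ($136 million over 963 committees) yields about $141,000 per committee, consistent with the $140,870 figure reported above.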
We found that GSA had not appropriately ensured that (1) advisory committees were properly established, (2) committees were reviewed annually, (3) annual reports were submitted to the President before they were due to Congress, and (4) follow-up reports on presidential advisory committees’ recommendations were prepared for Congress. At that time, GSA attributed these shortcomings to insufficient staff and management inattention. The Secretariat is under the GSA Associate Administrator for Governmentwide Policy. For fiscal year 1997, the Secretariat had eight employees and a budget of $645,000 ($491,490 in constant 1988 dollars). It had five employees in September 1988 and a budget of $220,000 for fiscal year 1988. To determine whether GSA had ensured that federal advisory committees were established with complete charters and justification letters, we obtained from GSA advisory committee charters and justification letters that agencies had submitted from October 1, 1996, through July 21, 1997. The charters were for 203 committees, and the justification letters were for 107 of the 203 committees. GSA regulations require justification letters for discretionary committees (107 of the 203 committees) but not for nondiscretionary committees (96 of the 203 committees). We reviewed the charters and letters to determine whether each contained the items of information (e.g., the committee’s objectives and why the committee is essential to the agency) required by FACA and GSA regulations. If an item of information was missing from the charter or letter, we reviewed information in the applicable GSA file to ascertain whether the file documented that GSA acted to obtain the missing item. To determine whether GSA had comprehensively reviewed each advisory committee annually, we first requested from GSA the annual report for each of the 1,000 advisory committees that existed in fiscal year 1996. 
In total, we reviewed the annual reports for 978 advisory committees; the reports for 22 committees were missing from GSA’s files. According to GSA regulations, the Committee Management Secretariat is to make its annual review of each committee by using the committee’s annual report. We read the reports to see if they contained the information that GSA regulations prescribe. We then discussed with Secretariat officials how they used information from the reports to make comprehensive reviews. To determine whether GSA had submitted annual reports on advisory committees to the President in a timely manner, we examined documentation regarding when GSA had submitted annual reports to the President for fiscal years 1988 through 1996. We compared the dates on the letters GSA used to transmit the reports to the President with the date that FACA requires the President to report to Congress, which is December 31, or 3 months after the end of a fiscal year. To determine whether GSA had ensured that follow-up reports to Congress were prepared on recommendations by presidential advisory committees, we contacted agencies’ committee management officers to ascertain whether they knew of the follow-up requirement and whether the follow-up reports were submitted to Congress. According to GSA regulations, agencies are to prepare the reports and submit them to Congress, but GSA is to ensure that it is done. We discussed GSA’s role with Secretariat officials and contacted committee management officers for the 17 cases that were identified in the President’s annual reports to Congress for fiscal years 1995 and 1996 as requiring follow-up reports. These officers were employees of the 9 agencies that were accountable for the 17 cases. In general, to identify and understand GSA’s oversight responsibilities, we interviewed Secretariat officials and reviewed applicable laws, regulations, and GSA guidance to agencies regarding advisory committee activities. 
We did not assess the extent to which GSA provided the agencies with guidance on advisory committee activities beyond the guidance for establishing advisory committees, the comprehensive annual reviews, and the follow-up reports on presidential advisory committees. Also, we did not assess OMB’s role in dealing with advisory committees beyond reviewing its guidance to agencies for implementing Executive Order 12838. We did our work in Washington, D.C., between June 1997 and April 1998 in accordance with generally accepted government auditing standards. After we completed our work, the Secretariat provided a summary table of advisory committee data for the entire 1997 fiscal year. These data are to be included in the President’s annual advisory committee report for fiscal year 1997, and they were incorporated in this report for comparison to previous years. We requested comments on a draft of this report from the Administrator of GSA and the Director of OMB or their designees. Written comments provided by GSA are discussed near the end of this letter and are reproduced in appendix II. An OMB official responsible for federal advisory committee matters provided oral comments on May 13, 1998, which are discussed near the end of this letter. FACA and GSA regulations require that agencies consult with GSA before establishing advisory committees. As part of this consultation, FACA requires agencies to submit charters for all committees, and GSA regulations require them to also submit justification letters for discretionary advisory committees. These documents must contain specific information. FACA requires agencies to include 10 specific items in the charter, among them the committee’s objectives and scope of activities, the time period necessary to carry out its purpose, and the estimated annual staff years and cost.
GSA regulations state that agencies must address three items in the justification letter: why the committee is essential to conduct the agency’s business, why the committee’s functions cannot be performed by the agency or other means, and how the agency plans to attain balanced membership. GSA’s role is to review agency proposals to establish advisory committees and determine whether FACA requirements and those imposed by regulation are met. The regulations say that GSA is to review the proposals and notify the agency of its views within 15 days, if possible. However, GSA does not have the authority to stop the formation of an advisory committee. Nor does it have the authority to terminate an existing committee. GSA can only recommend to the President, Congress, or an agency head that an advisory committee not be formed or that an existing committee not be continued. In our review of the 203 charters and 107 justification letters submitted to GSA from October 1, 1996, through July 21, 1997, we found that 36 percent of the charters and 38 percent of the letters were missing at least one item that was required by FACA or GSA regulations. Seventy-four charters were missing a total of 85 items, such as stating the period of time necessary for the committee to carry out its purpose and estimating annual operating costs and staff years. For the justification letters, 41 were missing a total of 88 items, such as a description of the agency’s plan to attain balanced membership. Appendix III shows the number of specific items that we found missing in the charters and justification letters. We found minimal evidence in GSA’s files to indicate that GSA raised questions about these missing items. GSA completed its reviews of the charters and notified the agencies, generally by letter, of its views within an average of 5 days and positively concurred in establishing the 203 advisory committees. 
Secretariat officials told us that, while they concurred with the need for the 203 committees, the agencies were responsible for ensuring that the charters and justification letters were properly done. These officials said that most charters and letters were done well and met the spirit of FACA. They also said that the problems that did exist relating to incomplete or inadequate charters and letters may have resulted from oversights by Secretariat analysts during their reviews. The officials believed that some of the missing items of information were more significant than others. For example, they believed that missing information pertaining to estimated annual operating costs and staff years and a description of the agency’s plan to attain a fairly balanced membership were more significant than information on the agency or official to whom the committee reports and the time necessary for the committee to carry out its purpose. Nevertheless, the officials recognized that all of the required information should be in the charter and justification letters. They also said that they plan to provide the analysts with tools to better enable them to make comprehensive reviews. FACA requires GSA to make an annual comprehensive review of each advisory committee to determine whether it is carrying out its purpose, whether its responsibilities should be revised, and whether it should be abolished or merged with another committee. After completing the reviews, GSA is required to recommend to the President and to the agency head or Congress any actions GSA deems should be taken. GSA regulations require that agencies prepare an annual report for each committee, including the agencies’ recommendations for continuing, merging, or terminating committees.
For continuing committees, the annual reports are to describe such things as how the committee accomplishes its purpose; the frequency (or lack) of meetings and the reason for continuing the committee; and why it was necessary to have closed committee meetings, if such meetings were held. The annual reports also are to include the committee’s costs. GSA’s regulations call for it to use the data it receives in the agencies’ annual reports, including the agencies’ recommendations to continue or terminate the committees, in conducting the comprehensive annual review. However, GSA did not use the data provided by the agencies to assess on its own whether committees were carrying out their purposes, whether their responsibilities should be revised, or whether the committees remain necessary. We reviewed 978 advisory committees’ annual reports that were submitted to GSA by the agencies for fiscal year 1996. We were unable to review another 22 reports that GSA reported receiving because they were missing from GSA’s files. For those annual reports that we reviewed, agencies generally reported the required information, with the exception of explaining why some continuing committees did not meet during the year. According to data GSA obtained from the annual reports, 212 advisory committees (about 21 percent of the total number of 1,000 committees) did not meet during fiscal year 1996. Agencies did not have to explain why no meetings were held for the 113 new and terminated committees in 1996. However, agencies were required to explain why the remaining 99 continuing committees did not meet. In our review of the 99 committees’ annual reports, we found that 47 gave reasons why the committees had not met, including reasons such as the committees’ having no agenda items to consider, lacking funding, and having delays in appointing members. Fifty-two annual reports did not explain why the committees had not met or why they should continue. 
We found no evidence that GSA had requested follow-up information on why the committees had not met or why the agencies believed that the committees should continue. Secretariat officials told us that they do not verify the agencies’ data, and that they accept the data without further review, including the agencies’ recommendations to continue, merge, or terminate committees. The officials said they could not undertake reviews on their own because they do not have the expertise or program knowledge to determine which committees should be continued or terminated. Regardless of whether this is the case, we believe that it is incumbent upon Secretariat officials to follow up with agencies to determine why committees have not met before accepting agencies’ recommendations that the committees be continued. Secretariat officials also told us that they have held discussions with congressional staff about the possibility of reducing the number of committees mandated by Congress that may no longer be warranted. Such committee terminations would require legislation. Although we did not evaluate this issue as a part of this review, we believe it illustrates the benefits to GSA of following up with agencies when they do not report why committees did not meet or why the agencies believed the committees should continue. For example, of the 52 committees that did not explain why they did not meet in fiscal year 1996, 25 were mandated by Congress. By delving into the specifics of why no meetings were held, GSA might develop information that would help Congress determine whether some congressionally mandated committees should be terminated or continued. We recognize that there are legitimate reasons why committees may not meet in any given year. The President is required to report annually to Congress on the activities, status, and changes in the composition of advisory committees.
The annual reports are due to Congress by December 31 for each preceding fiscal year. GSA prepares the annual reports for the President on the basis of information provided in agencies’ annual advisory committee reports. GSA did not submit most of its annual reports to the President in time for him to meet the December 31 reporting date to Congress. For seven of the last nine annual reports, covering fiscal years 1988 through 1996, GSA transmitted the reports to the President after they were due to Congress. In the last 4 years, one report was delivered to the President 5 days before the due date to Congress; three reports were delivered, on average, about 3 months after the due date. As of April 27, 1998, GSA had not submitted the fiscal year 1997 report to the President. According to Secretariat officials, the December 31 reporting date to Congress is unattainable because, among other things, agencies have other end of fiscal year reporting requirements, in addition to the advisory committee reports. Secretariat officials also told us that they plan to ask Congress for a later reporting date. We did not examine the reasonableness of the December 31 reporting date. FACA requires the President, or his delegate, to report to Congress within 1 year on his proposals for action or reasons for inaction on recommendations made by a presidential advisory committee. According to FACA’s legislative history, these follow-up reports are intended to justify the investments in the advisory committees, provide accountability to the public and Congress, and require the President to state his response to the advisory committees’ recommendations. According to GSA regulations, the agency providing support to the advisory committee is responsible for preparing and transmitting the follow-up report to Congress. 
However, the regulations also state that the Secretariat (1) is responsible for ensuring that the follow-up reports are prepared by the agency supporting the presidential committee and (2) may solicit OMB and other appropriate organizations to help, if needed, to obtain agencies’ compliance. GSA identified 17 presidential advisory committee reports in the President’s annual reports for fiscal years 1995 and 1996 that required follow-up reports. We contacted the nine agencies that were responsible for the follow-up reports to determine whether the reports were prepared within the year and delivered to Congress. According to agency officials, follow-up reports were not required in 4 of the 17 cases because the advisory committees were erroneously listed as having issued a report with recommendations to the President. Follow-up reports were required in the remaining 13 cases but, according to agency officials, none were transmitted to Congress. These presidential advisory committees included, for example, the Glass Ceiling Commission, the President’s Cancer Panel, and the Federal Council on the Aging. Six of the nine committee management officers told us that they were unaware of the reporting requirement. Secretariat officials said that agencies are responsible for preparing and delivering the follow-up reports to Congress; therefore, they had not contacted the nine agencies to see whether the reports were prepared and delivered. Although it has no authority to stop a federal advisory committee from being formed or to terminate an existing committee, GSA is obligated to ensure that its FACA responsibilities are fulfilled completely and in a timely manner. These responsibilities are not insignificant. Congress imposed them to help ensure that federal advisory committees are needed, that the committees are properly managed, and that Congress is kept informed of the committees’ activities in a timely manner. 
Although we recognize that GSA believes it does not have the expertise or program knowledge to determine whether federal advisory committees are needed, it has the authority to ask agencies to provide justification for their recommendations. For example, GSA could follow up with agencies to determine why committees have not met before accepting agencies’ recommendations that the committees be continued. The Committee Management Secretariat intends to ask Congress to move to a later date the reporting deadline for the President’s annual report to Congress on the activities of federal advisory committees. The Secretariat’s view and proposed action do not relieve it of its responsibilities under FACA, and the Secretariat has not fulfilled those responsibilities. We recommend that the Administrator of GSA direct the Committee Management Secretariat to fully carry out the responsibilities assigned to it by FACA in a timely and accurate manner. In particular, the Secretariat should (1) consult with the agencies to ensure that the charters and justification letters for federal advisory committees contain the information required by law or regulation, (2) follow up with agencies when their annual reports contain information that raises questions about whether committees should be continued, and (3) ensure that agencies file the required follow-up reports to Congress on presidential advisory committee recommendations. The Secretariat should also make the necessary arrangements with agencies to submit its annual report to the President on time or follow through with its intention to ask Congress to move the reporting date. GSA and OMB provided comments on a draft of this report. In an April 27, 1998, letter (see app. II), the GSA Administrator said the Associate GSA Administrator for Governmentwide Policy will ensure that the Committee Management Secretariat takes immediate and appropriate action to implement our recommendation. 
The Administrator also said GSA will continue to improve its oversight of advisory committees by (1) proposing amendments to FACA to address some of the issues raised in our report; (2) proposing new governmentwide regulations relating to FACA in June 1998; and (3) finalizing its new internet-based reporting system by the end of fiscal year 1998, which will allow agencies to electronically transmit data to GSA. On May 13, 1998, an OMB official responsible for advisory committee matters said that we had conducted a thorough review of GSA's oversight responsibilities in meeting FACA's procedural requirements, and that GSA appeared to have undertaken some corrective actions that would address many of our concerns and had scheduled other corrective actions during 1998. The OMB official said they would work with GSA to ensure the success of the GSA efforts. The GSA Administrator and the OMB official made additional comments, which we address here and as appropriate in appendix II for GSA. In general comments, GSA said that the draft report had not fully examined the extent to which GSA's actions have achieved FACA's principal stated outcomes of accountability for committee accomplishments and public access to committee deliberations and products. In addition, GSA said it has sought through its actions to strengthen the ability of other responsible officials at the agency level to perform more adequately their required FACA responsibilities. We recognize that GSA plays a broad role in overseeing advisory committee activities and are aware of its past initiatives, such as the creation of the interagency committee on FACA; the governmentwide training program for agency personnel who manage advisory committees; and the reduction of discretionary advisory committees under Executive Order 12838, which we mentioned in this report. But GSA also has a narrower, more focused role of carrying out its specific responsibilities that FACA and GSA regulations require. 
This latter role was the focus of this report. Nevertheless, we have cited some of GSA’s other activities in the text of this report. GSA also said that it is in the process of linking its new internet-based reporting system with an internal FACA database that it uses to track committees. By capturing data electronically, GSA expects that gaps in required data will be identified more easily and corrected contemporaneously. We believe such a system, if successful, should enable GSA to better ensure that its analysts have the full range of required information available to them as they perform GSA’s required FACA oversight responsibilities. GSA and OMB took exception to our finding that advisory committees were not comprehensively reviewed by GSA. GSA and OMB stated that advisory committees are reviewed annually by GSA through (1) the annual reporting process used by the agencies to certify the need for specific committees (which are the advisory committee annual reports) and (2) the annual process developed to implement Executive Order 12838 and OMB Circular A-135, which require the committee management plans. Under FACA, GSA has the responsibility to judge whether there is a convincing case for continuing a committee and cannot delegate this responsibility to the agencies under current law. The advisory committee annual reports were the basis for our analysis and conclusion that GSA was not independently assessing whether the committees should be continued or terminated. The committee management plans are used primarily to ensure that the number of discretionary advisory committees within the agencies does not exceed the ceiling established under Executive Order 12838 and to focus on the need for new, not existing, committees. A discussion of what should be included in the plans can be found in the Background section of this report. 
The management plans that we reviewed did not contain all of the information that would be needed for GSA to determine or question the continuing need for committees. For example, an explanation of why a committee had not met during the year is not required to be included in the management plan. Thus, we have not changed our conclusion. We are sending copies of this report to the Chairman, Senate Committee on Governmental Affairs; the Ranking Minority Member, House Subcommittee on Government Management, Information, and Technology; the Chairmen and Ranking Minority Members of the House Committee on Government Reform and Oversight and other interested congressional committees; the Administrator, GSA; the Director, OMB; and other interested parties. We will also make the report available to others on request. Major contributors to this report are listed in appendix IV. If you have any questions about this report, please call me on (202) 512-8676. The following are GAO's comments on GSA's April 27, 1998, letter. 1. Although agreeing that not every committee charter and justification letter that we reviewed included all of the required information, GSA suggested that the 36-percent error rate in the charters and the 38-percent error rate in the justification letters were misleading. Regarding committee charters, GSA said that a fairer assessment of GSA's and agencies' compliance with FACA would be to determine the error rate on the basis of the number of data items found not to be in compliance (85) divided by the total number of data items in the 203 charters we reviewed (2,030). This calculation provides an error rate of 4.2 percent. We do not believe that this would be a meaningful analysis because the charters or justification letters are the unit of analysis. 
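To make the unit-of-analysis point concrete, the two competing calculations can be sketched in a few lines of Python. The count of charters with at least one missing item (73) is an assumption chosen only to be consistent with the 36-percent rate reported above; the report does not state that count directly.

```python
# Sketch contrasting the two error-rate calculations discussed above.
# Figures from the report: 203 charters reviewed, 10 required data items
# each, and 85 missing items in total. The count of charters with at least
# one missing item (73) is an assumption consistent with the reported 36%.

charters_reviewed = 203
items_per_charter = 10
missing_items = 85
charters_with_missing_items = 73  # assumed; 73 / 203 is about 36 percent

# GSA's preferred measure: missing items over all data items reviewed.
item_level_rate = missing_items / (charters_reviewed * items_per_charter)

# GAO's measure, with the charter as the unit of analysis: the share of
# charters missing any required item.
document_level_rate = charters_with_missing_items / charters_reviewed

print(f"item-level rate:     {item_level_rate:.1%}")      # 4.2%
print(f"document-level rate: {document_level_rate:.1%}")  # 36.0%
```

The two rates answer different questions: GSA's measure counts missing fields, while GAO's measure counts documents that fail to give a reviewer the full picture, which is the standard FACA imposes on each charter.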
That is, FACA and GSA regulations require each charter or justification letter to include a full set of specific data so that GSA analysts and others can properly and fully assess whether the committee is needed and whether it meets the FACA requirements for public participation and disclosure. GSA said that among the 85 missing items in the charters that we identified, 33 related to the period of time necessary for a committee to carry out its purpose. GSA said that 95 percent of all charters submitted to GSA for review are for advisory committees that are of a continuing nature, and that a default presumption of 2 years (based upon the sunset provision of FACA) is applied. GSA suggested that the default presumption mooted the absence of a stated period in the charters. It said that removal of these 33 data items from the 85 found to be in error would result in an overall error rate of 2.6 percent. We cannot ignore these 33 data items in an analysis of compliance with FACA requirements. Congress specifically required that charters include the time necessary for a committee to carry out its purpose, just as it required that charters include the committee’s termination date if it is less than 2 years after the committee is established. We believe that Congress included these two items, as well as the other eight items, in the charters to help keep Congress and the public informed about committee activities. Further, it seems to us that a benefit of including such information in the charters is to help agencies focus in the beginning of the process on the amount of time the committees should be taking to accomplish their purposes. 2. GSA suggested that two of the items required in the justification letter—explanations of why a committee is essential and why a committee’s function cannot be performed by other means—are contained in the two other submissions by the agencies. 
GSA said it is in the process of revising its regulations to eliminate redundant information and certifications among the advisory committee annual reports, consultation letters (which are the justification letters), and annual committee management plans, which are required by OMB and GSA to implement Executive Order 12838 and OMB Circular A-135. GSA suggested that as long as an item of information is contained in one of the required documents, it need not be in the others. It said that if these data items were removed from our analysis, the error rate, on the basis of the number of data elements found to be in error divided by the total number of data elements, would be 10.6 percent. As previously stated, we do not believe that this would be a meaningful analysis because it is the justification letters that should be the unit of analysis and not the individual data elements. Further, the two items in the justification letter to which GSA referred are only required to be in the justification letter and, therefore, agencies might not include them in the other two reports. For example, the two items do not need to be included in the advisory committee annual reports if the committee is new—during fiscal year 1996, there were 52 new committees. Additionally, only the justification letters are submitted when the committees are established; the other two reports are submitted on an annual basis that does not necessarily coincide with the justification letter submissions by the agencies. It would also appear to be more efficient for GSA analysts reviewing the charters to be able to rely on the justification letters for all needed information, rather than having to retrieve other documents that may or may not include relevant information. 3. We deleted the section in the draft report that is referred to in these comments. GSA has corrected the underreporting of advisory committee members and costs, which we brought to their attention during the course of our work. 
The underreporting was also acknowledged by GSA at the November 5, 1997, hearing on FACA before the House Subcommittee on Government Management, Information, and Technology. Jill P. Sayre, Attorney
Pursuant to a congressional request, GAO reviewed whether the General Services Administration (GSA), through its Committee Management Secretariat, was carrying out its oversight responsibilities under the Federal Advisory Committee Act (FACA), focusing on whether GSA had: (1) ensured that federal advisory committees were established with complete charters and justification letters; (2) comprehensively reviewed each advisory committee annually; (3) submitted annual reports on advisory committees to the President in a timely manner; and (4) ensured that agencies prepared follow-up reports to Congress on recommendations by presidential advisory committees. GAO noted that: (1) compared to when GAO last reported in 1988, little had changed during the period it studied on how the Secretariat carried out its FACA responsibilities; (2) with 963 federal advisory committees, 57 sponsoring agencies, and submissions for each committee during fiscal year (FY) 1997, GSA's Committee Management Secretariat reviewed a large amount of paperwork for the purpose of ensuring that sponsoring agencies were: (a) following the requirements placed upon them by FACA; and (b) implementing GSA regulations; (3) the Secretariat conducted these reviews while performing other duties, such as providing formal training to federal employees who were directly involved with the operations of advisory committees and collaborating with an interagency committee on advisory committee management; (4) nevertheless, the Secretariat was responsible under FACA and GSA regulations for ensuring that those requirements were all fulfilled; (5) GSA, in consultation with the agencies, did not ensure that advisory committees were established with complete charters and justification letters as required by FACA or GSA regulations; (6) 36 percent of the charters and 38 percent of the letters GAO reviewed did not contain one or more items required by FACA or GSA regulations; (7) GSA did not independently assess, as it 
conducted the annual comprehensive reviews required by FACA, whether committees should be continued, merged, or terminated; (8) although GSA collected the FY 1996 annual reports, GSA officials said they accepted the data in them without further review; (9) GAO found this acceptance to be the norm even when information in a FY 1996 annual report should reasonably lead to further inquiries; (10) GSA did not submit most of its FACA annual reports to the President in time for him to meet the statutory reporting date to Congress, nor did it ensure that FACA-required follow-up reports on presidential advisory committee recommendations were prepared for Congress; (11) Secretariat officials told GAO that agencies must take greater responsibility for preparing complete charters, justification letters, and committee annual reports and for sending follow-up reports to Congress; and (12) FACA has given the Secretariat responsibilities for ensuring that agencies satisfy the requirements for forming and operating advisory committees, and the Secretariat is not carrying out these responsibilities.
FERSA created TSP to provide options for retirement planning and encourage personal retirement saving among the federal workforce. Most federal workers are allowed to participate in TSP, which is available to federal and postal employees, including members of Congress and congressional employees; members of the uniformed services; and members of the judicial branch. TSP is structured to allow eligible federal employees to contribute a fixed percentage of their annual base pay or a flat amount, subject to Internal Revenue Service limits, into an individual tax-deferred account. Additionally, Federal Employees' Retirement System (FERS) participants are eligible for automatic 1-percent contributions and limited matching contributions from the employing federal agency. TSP provides federal (and in most cases, state) income tax deferral on contributions and their related earnings, similar to that offered by many private sector 401(k)-type pension plans. As is typical in defined contribution (DC) plans, TSP allows participants to manage their accounts and conduct a variety of transactions similar to those available to 401(k) participants, including reallocating contributions or account balances, borrowing from the account, making withdrawals, or purchasing annuities. Administration of TSP falls under the purview of the Board, an agency established by Congress under FERSA. The Board is composed of five members appointed by the President, with the advice and consent of the Senate. They are authorized to appoint the executive director, who hires additional personnel, and ETAC—a 15-member council that provides advice to the Board and the executive director on the investment policies and administration of TSP. The Board establishes policies for the investment and management of TSP, as well as administration of the plan. 
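The agency contributions described above for FERS participants can be sketched as follows. The report says only that FERS participants receive automatic 1-percent and limited matching contributions; the specific match schedule used here (dollar-for-dollar on the first 3 percent of pay an employee contributes, 50 cents on the dollar on the next 2 percent) is drawn from the published FERS rules and is an assumption beyond what this report states.

```python
# Minimal sketch of FERS agency contributions to TSP, assuming the
# published FERS match schedule (not stated in this report): an automatic
# 1% of basic pay, plus a 100% match on the first 3% the employee
# contributes and a 50% match on the next 2%. Nothing above 5% is matched.

def agency_contribution_pct(employee_pct: float) -> float:
    """Total agency contribution, as a percent of basic pay."""
    automatic = 1.0
    match = min(employee_pct, 3.0) + 0.5 * min(max(employee_pct - 3.0, 0.0), 2.0)
    return automatic + match

for pct in (0, 3, 5, 10):
    print(f"employee contributes {pct}% -> agency adds {agency_contribution_pct(pct)}%")
```

Under this schedule, the agency contribution tops out at 5 percent of pay once the employee contributes 5 percent, which is why the report describes the matching as "limited."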
The executive director and Board staff are responsible for implementing the Board's policies and managing the day-to-day operations of TSP, prescribing regulations to administer FERSA, and other duties. The Board members and the executive director serve as plan fiduciaries. FERSA has other investment policy provisions, such as who can exercise voting rights associated with the ownership of stocks held by TSP. For example, the Board and the executive director may not exercise voting rights associated with the ownership of TSP securities. 5 U.S.C. § 8473. In contrast, private sector plan sponsors have greater discretion in choosing which investment options to offer participants. In addition, Congress must amend FERSA to approve a change in TSP investment options offered to participants. TSP's authorizing statute specifies the number and types of funds available to participants, and requires that some of these funds track indexes, which are broad, diversified market indicators. The Board may select the particular indexes for the funds to follow as well as review the investment options and suggest additional funds. The Board has developed investment policies for each TSP fund. These policies, which the Board reaffirms quarterly, provide the rationale for selecting the fund's investments. Table 1 shows FERSA requirements and Board policies regarding each fund and its underlying index. Members of Congress have introduced bills calling for new investment options to be added to TSP. In the past four sessions of Congress, a number of bills have been proposed to add investment options to TSP, including a corporate responsibility stock index fund, a precious metals investment fund, a real estate stock index fund, and a terror-free international investment option. 
In addition, Congress passed the Federal Retirement Reform Act of 2009, which, among other things, authorized TSP to offer a service that would enable participants to invest in mutual funds outside TSP, if the Board determined that such a mutual fund window was in the best interests of participants. The law stipulated that the Board had to ensure that any expenses charged for use of the mutual fund window would be borne solely by the participants who used it. The Board has not implemented the mutual fund window. According to TSP officials, both the Board and ETAC were similarly split on whether to include a mutual fund window, and the Board tabled the discussion to address more immediate issues, such as adding a Roth TSP option. They also noted that while TSP has not moved forward with adding a mutual fund window, it may do so at some future point. FERSA also requires the Board to defray reasonable expenses of administering TSP. TSP's administrative expenses include management fees for each investment fund; the costs of operating and maintaining TSP's recordkeeping system; the cost of providing participant services; and the printing and mailing of notices, statements, and publications. SRI—investment made on the basis of environmental, social, and corporate governance (ESG) criteria—is a global phenomenon and is growing in popularity in the United States. In 2006, the United Nations issued the Principles for Responsible Investment, which maintained that ESG issues can affect the performance of investment portfolios and therefore must be given appropriate consideration by investors if they are to fulfill their fiduciary duty. By supporting the principles, institutional investors commit to better align investors with broader societal goals while acting in the best long-term interests of their beneficiaries. Specifically, signatories agreed to (1) incorporate ESG issues into investment analysis and decision-making processes; (2) be active owners and incorporate ESG issues into their ownership policies and practices; (3) seek appropriate disclosure on ESG issues by the entities in which they invest; (4) promote acceptance and implementation of the principles within the investment industry; (5) work together to enhance the effectiveness of the principles; and (6) report on their activities and progress in implementing the principles. In 2012, there were more than 1,000 asset owners, investment managers, and professional service partners that had committed to these principles worldwide—136 of them in the United States, according to the United Nations' website. Officials at TSP and the other public retirement plans that had considered adding an SRI option associated a number of common challenges with the implementation of SRI. While none of the plan officials that we contacted had plans identical to TSP in terms of its federal scope or participant demographics, many of them shared similar challenges and concerns with TSP. As shown in figure 3, participant demand, SRI screening criteria, and costs were the most common challenges identified by public retirement plans. TSP and most other public plan officials we contacted identified low participant demand for SRI as a challenge to adopting SRI. TSP officials told us that, based on the results of their participant surveys and the experiences of ETAC, there was little demand for an SRI fund among TSP participants. Specifically, they noted that the results of periodic participant surveys have consistently indicated that there was no overwhelming demand for any new investment options, including an SRI option. In addition, ETAC members told us that they were unaware of demand for SRI among TSP participants. They said that they would respond if demand ever presented itself. 
While consultants and fund managers that we contacted reported a growing demand for SRI in the United States, public plan officials that we spoke with generally reported low participant interest in SRI adoption. Officials at several plans noted that continued pressure and repeated demands from small vocal groups of participants in support of SRI had been a principal driver in the plans’ decision to have an SRI option. However, officials at several of these plans said that, while the SRI option did attract a small percentage of participants, overall participation in the SRI funds ranged from less than 0.5 percent (in one plan with 20 investment options) to about 10 percent (in a plan that offered 9 investment options). TSP and most other public plan officials we contacted identified the difficulty of finding broadly acceptable SRI criteria as a distinct challenge to adopting SRI. According to TSP officials, different interpretations of what social criteria to apply to an SRI fund could lead to the need to develop multiple funds to satisfy participants. Officials also noted that it would be hard to reach agreement on what values an SRI fund should endorse. Moreover, officials at most of the other plans we contacted said that the lack of a common definition of SRI and the selection of SRI criteria was challenging. For example, officials at one plan noted that social issues were difficult to incorporate into an investment approach because, while some basic social issues, such as child labor, imprisonment, and forced slavery, were generally acceptable screens, reaching broad consensus on other issues, such as labor laws, workers’ rights, weapons, guns, and tobacco, was more difficult. An official at another plan noted that it was incumbent upon participants to tell them what social policy they wished to pursue. 
Officials at the plans we contacted that considered selection of screening criteria to be a challenge overcame it by using an off-the-shelf SRI fund, relying on the expertise and experience of the SRI fund managers, or educating participants on why fund managers selected the investments they did. TSP and other public plan officials we contacted had varied opinions on the degree to which the costs associated with the creation and administration of an SRI index fund presented a challenge to adopting SRI. According to TSP officials, the costs to create a new index fund would be considerable. In addition, they said an SRI index fund would cost more because it requires additional screening and monitoring. Under TSP's current cost structure, any costs associated with a new SRI index fund would be borne by all participants whether or not they chose to invest in the SRI index fund. Other public plan officials we contacted had varied opinions on the degree to which these costs presented a challenge. While there would be certain upfront costs associated with adding an SRI fund, which could include member communication and manager selection, officials at several plans said that adding a new fund to their existing portfolios would not adversely affect administrative costs. According to some investment managers we contacted, the key factors affecting the cost of any fund are (1) its asset size—the larger the asset base, the better the economies of scale and the lower the overall cost ratio—and (2) whether the fund's investment strategy requires active management or passive tracking of a market index. While TSP and some public plan officials we contacted asserted that their role as fiduciary was a challenge in that it precluded the adoption of SRI, officials at other public plans with an SRI option said there were no fiduciary concerns surrounding the inclusion of an SRI option in a DC plan. 
According to a 1990 memorandum sent from the TSP executive director to the Board, Congress considered and rejected the concept of social investment when creating TSP. The memorandum noted that the strict fiduciary provisions of the law, which require the Board to discharge its responsibilities solely in the interest of participants, excluded the possibility of social investing, and that any authorization outside the realm of interest of all participants would be inconsistent with the notion of employee ownership of TSP assets. Officials at all of the public plans that had not implemented an SRI option considered fiduciary issues a challenge to adopting SRI, while officials at other plans did not. For example, an official at one plan with an SRI option stated that fiduciary duty was not a challenge when adding an SRI fund to the investment options of a DC plan because participants have individual account ownership, are free to choose how they invest, and must assume responsibility for any risks associated with the underlying investments. One plan official noted that the fiduciary responsibility of a DC plan extends to exercising due diligence in the selection of a fund manager, providing appropriate participant communications about the fund, offering enough investment options, and acting in the best interest of the majority of participants. TSP and some public plan officials that we contacted asserted that investment decisions made on any basis other than the economic welfare of participants could present a challenge in that they would expose the plan to potential political interference. In its 2006 investment option review, a consulting firm hired by TSP eliminated SRI from consideration in part on the grounds that screening criteria that all could agree to would be difficult to find and would likely draw attention from opposing parties of interest. 
According to the 1990 memorandum from the TSP executive director, the laws that established the current TSP funds prevent the political manipulation of TSP funds, and officials told us that TSP has taken steps in the past to avoid political interference. Officials at the public plans we contacted had different views on the extent to which political interference was a challenge. For example, officials at some public plans that did not implement SRI identified political interference as one of the reasons they chose not to do so. On the other hand, officials at other public plans that had implemented SRI said that political interference was not a challenge. For example, officials at one public plan noted that the state’s legislative mandate to maximize returns and improve levels of risk prevented political interference. An official at another public plan that adopted SRI told us that although they had anticipated political interference by state officials following their decision to divest from alcohol or tobacco companies, it had not materialized. Officials at TSP and some other public plans identified SRI fund performance as a challenge to adopting an SRI fund. According to TSP officials, participants who allocated assets to an SRI fund instead of a standard fund that included all relevant companies would narrow the number of companies in which they were indirectly investing, thereby limiting their exposure to the performance of the broader, more diversified market. While officials at some public plans that we contacted believed that SRI funds had lower performance than other funds, other officials had mixed views on whether the performance of SRI funds was any more challenging than the performance of non-SRI funds. Officials at several plans, which had considered but not implemented SRI, cited SRI performance as a reason for not incorporating SRI in their plans. 
Officials at one plan said they would reconsider offering an SRI fund if it demonstrated better long-term performance than non-SRI funds. However, officials at several other plans that implemented SRI told us that the SRI fund produced comparable and sometimes better returns than other funds in their portfolio. Officials at one plan said the plan would terminate its working relationship with an external fund manager if its SRI investments did not perform as well as other funds. While TSP officials considered the lack of peer implementation of SRI as a challenge to adopting an SRI fund, officials at other public plans we contacted said that it was not a challenge. As part of its investment options review in 2006, a consulting firm advised TSP that SRI funds were not a common practice among TSP's peers and identified this criterion as a reason for eliminating SRI from further consideration. According to TSP officials, the fact that similar plans had not adopted SRI was a challenge in that TSP had no precedent to follow. We found a number of plans similar in asset size and membership to TSP that applied SRI principles through investment screening. Officials at several plans we contacted said that peer implementation of SRI did not factor into their decision to incorporate SRI into their investment strategy. Officials at most of the public plans we contacted had no restrictions regarding investment overlap between funds and thus did not view such overlap as a challenge to adopting SRI. According to TSP officials, the Board is permitted to suggest legislation to address any gaps in investment options as long as there is no evident overlap. In the past, for example, TSP proposed legislation authorizing the addition of the S Fund and the I Fund to provide participants with options for greater diversification of investments in the small capitalization and international markets. 
According to TSP officials, each of the current funds tracks different companies in different segments of the overall financial market without overlap, helping to reduce the risk of incurring large losses on a broader portfolio. Officials at other public plans, which did not face the same restrictions as TSP, said that overlap was not a consideration and that a certain amount of overlap with existing investments was both expected and accepted. Some officials noted that the purpose of SRI was to select companies that met certain criteria and provided an alternative investment choice. Officials at some of the nine public plans we contacted that offered an SRI option cited some short-term benefits associated with SRI but said that the long-term benefits were unknown. For example, officials at several plans noted that the greatest benefit of having an SRI option was giving participants broader investment choice and an opportunity to make a statement in the way they invest. Officials at other plans said that having an SRI option could serve as a recruiting tool for the plan in that it encouraged more eligible employees to join the plan. Regarding the long-term benefits of SRI, officials at two of the public plans stated that it was still too early to judge the benefit of SRI. As one plan official noted, responsible investment involves making investment decisions that are important to the long-term value and profitability of a company over time. When compared to the past performance of the TSP stock portfolio (the C, S, and I Funds), the addition of a hypothetical SRI index fund tracking the best-performing U.S.-based SRI stock index would not have both increased returns and lowered volatility in any allocation scenario we tested.
Specifically, over the last 20 years, if TSP had included such an SRI index fund (SRI Fund) in its existing stock portfolio, it could have resulted in (1) lower returns and lower volatility, (2) lower returns and higher volatility, or (3) higher returns and higher volatility, based on our analysis of evenly distributed portfolio allocations containing the SRI Fund against the TSP stock portfolio alone (a C, S, and I Funds combination). For example, as shown in figure 4, adding the SRI Fund to the existing TSP stock funds (an SRI, C, S, and I Funds combination) would have resulted in lower returns and lower volatility; substituting the SRI Fund for the C Fund (an SRI, S, and I Funds combination) would have resulted in lower returns and higher volatility; and substituting the SRI Fund for the I Fund (an SRI, C, and S Funds combination) would have resulted in higher returns and higher volatility. Because this analysis is strictly based on past performance, this result does not guarantee or imply that the addition of an SRI index would have the same effect on future TSP stock fund portfolio performance. Overall portfolio performance is directly tied to individual fund performance, which varied by time period. A comparison of the underlying indices of these four funds shows that, while the SRI Fund had higher cumulative returns than the I Fund over the last 20 years, it had lower cumulative returns than all three of the TSP funds over the last 10 years. Figure 5 shows the funds’ annual and cumulative returns and highlights their performance during market cycles. The managers of the SRI index explained that the difference in the index’s performance over the last 20 years in comparison with the Standard & Poor’s 500 Index (the C Fund index) was a result of having different sector weightings than the overall market to align with the fund’s SRI strategy.
For example, they told us that, in the late 1990s, the index was relatively overweighted in technology, consumer, and finance stocks and underweighted in energy and utilities, resulting in higher performance in the “dot com” boom of the late 1990s and lower performance in the 2001 recession. Moreover, the SRI index excludes companies involved in the production of military weapons, which may have contributed to lower returns over the past decade while the country has been at war. In addition to providing less return overall than the C Fund over the 20-year period, the inclusion of this SRI Fund would have resulted in overlap with the C Fund and would not have provided a substantial opportunity for additional portfolio diversification. By law, holdings in TSP stock funds may not overlap. Fifty-seven percent of the companies included in the SRI Fund index, which includes large, mid, and small capitalization stocks, overlap with companies included in the C Fund index. In part as a result of this overlap, the SRI Fund and the C Fund are highly correlated in their returns, and thus adding this SRI Fund would not provide a substantial opportunity for additional portfolio diversification. Portfolio diversification aims to reduce risk by investing in various financial instruments and markets so that market events will not affect all assets in the same way. Diversification opportunities exist if investments have independent price movement, and therefore, independent returns. The price movement between these two funds over the last 20 years was 1.94 percent independent, suggesting that the same external causes affected their returns to nearly the same degree. By contrast, over the same time period, the independence in price movement between the S Fund and C Fund was 17.27 percent, and between the I Fund and C Fund was 42.19 percent.
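The “independence of price movement” figures above can be read as one minus the common variance of two return series, where common variance is the squared correlation coefficient (R²). A minimal sketch of that calculation, using made-up monthly returns rather than the actual fund data:

```python
from statistics import mean

def correlation(xs, ys):
    """Pearson correlation coefficient between two return series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def independence(xs, ys):
    """Share of price movement not explained by the other series:
    1 - R^2, where R^2 (common variance) is the squared correlation."""
    r = correlation(xs, ys)
    return 1 - r ** 2

# Hypothetical monthly returns for two heavily overlapping stock funds
# (placeholder values, not the actual SRI Fund or C Fund data).
fund_a = [0.021, -0.013, 0.034, 0.008, -0.027, 0.015]
fund_b = [0.019, -0.011, 0.031, 0.010, -0.025, 0.013]

r = correlation(fund_a, fund_b)
print(f"correlation: {r:.4f}")
print(f"common variance (R^2): {r ** 2:.4f}")
print(f"independence (1 - R^2): {1 - r ** 2:.4f}")
```

With holdings overlapping this heavily, independence lands near zero, which mirrors the 1.94 percent figure reported for the SRI Fund and the C Fund.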
Looking more broadly at SRI mutual funds, the most common form of SRI in the United States, we found that the comparative performance of SRI and non-SRI mutual funds over the last 15 years varied by asset class. While TSP participants cannot currently invest in mutual funds through TSP, the Board is authorized to offer a mutual fund window if it determines that it is in the best interests of participants. Specifically, our analysis of institutional-grade mutual funds over the last 5, 10, and 15 years (dating back from December 2011) found that SRI bond mutual funds had better risk-adjusted performance than their non-SRI counterparts. In contrast, SRI stock funds and SRI balanced funds— which hold bonds and stocks—had worse risk-adjusted performance than their non-SRI counterparts over these time periods. Because this analysis is strictly based on past performance, these results do not guarantee or imply that these asset classes would perform similarly in the future. After controlling for fund size and investment strategies (other than SRI approaches), we found that the performance gap between the SRI and non-SRI mutual funds narrowed significantly for stock funds but not for balanced funds. Moreover, our regression estimates showed that SRI stock mutual funds performed better than their non-SRI stock counterparts in the 5- and 15-year timeframes, after controlling for differences in asset size, share class, and investment strategies. (See appendix I for additional information on the regression analyses.) In fiscal year 2010, the costs of SRI institutional grade mutual funds were similar to their non-SRI counterparts. It is important to note that our cost analysis included only the most recent year of data available (fiscal year 2010) for three share classes of institutional grade mutual funds, and it did not look at all SRI product types such as variable annuities or exchange traded funds, which may have had higher costs. 
In addition, fiscal year 2010 cost data are not indicative of past or future costs. As shown in figure 7, there was considerable overlap in the costs associated with these funds, as measured by their annual net expense ratio—the actual percentage of assets deducted each fiscal year for fund expenses. While non-SRI mutual funds had a broader range of costs than SRI mutual funds, the vast majority of SRI and non-SRI funds reported expense ratios from 0.12 to 1.81 percent. On average, the reported expense ratios for SRI mutual funds were 0.2 percentage points higher than non-SRI mutual funds. When asset size and investment strategy were taken into account, SRI mutual fund cost ratios were estimated to be only 0.06 percentage points higher than non-SRI mutual fund cost ratios. For additional details on our regression analysis on cost ratios, see appendix I. Adoption of an SRI index fund would present challenges for TSP. Currently, the law limits the types of funds that TSP can offer, prohibits overlap among existing funds, and charges TSP to keep its costs low. First, TSP would have difficulty finding an SRI index fund that did not overlap with the existing TSP funds, limiting opportunities for additional portfolio diversification. However, officials at other DC plans, which do not face the same restrictions as TSP, said that a certain amount of overlap with SRI and other investment options was acceptable and that the purpose of SRI was to provide an alternative investment choice. Second, TSP would have difficulty selecting SRI screening criteria that all participants and the Congress would find acceptable. While this would be challenging, a number of plans have a long history of incorporating SRI. Finally, under TSP’s current structure, the costs of adding a new fund would be distributed among all participants regardless of whether they participated in that fund.
We note that the Board has the authority to open a mutual fund window for participants to invest in mutual funds managed outside TSP. If the Board decides to act on this authority and allow the mutual fund window, participants seeking other forms of investment, including SRI, could invest in mutual funds and would bear the costs associated with this investment. We provided a copy of this draft report to the Federal Retirement Thrift Investment Board, the Department of Labor, and the Department of the Treasury for review and comment. None of the agencies provided formal comments. The Department of Labor provided technical comments, which we incorporated in the report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to relevant congressional committees and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine the challenges associated with socially responsible investment (SRI), we reviewed relevant federal laws, regulations, and literature. For example, we reviewed the Federal Employees’ Retirement System Act of 1986, the Federal Retirement Reform Act of 2009, and the 2006 United Nations’ Principles for Responsible Investment. We also interviewed officials from the Thrift Savings Plan (TSP), the Employee Thrift Advisory Council, investment management and consultant firms, and 15 selected public pension plans.
Our nonrepresentative sample of pension plans included 9 domestic defined contribution (DC) and defined benefit (DB) plans that incorporated SRI and 6 plans that considered but did not adopt an SRI component. To identify our sample, we contacted plans that were signatories of the United Nations’ Principles for Responsible Investment and employed a snowball sampling technique based on recommendations of interviewees. We analyzed interview responses of pension plan officials and other SRI experts on the challenges and benefits associated with SRI and how these experiences might affect TSP. To determine how the addition of an SRI index fund to a TSP stock portfolio would have affected past TSP stock portfolio performance, we identified the best performing U.S.-based SRI index and assessed its potential impact on the TSP stock portfolio based on historical performance data of the three TSP stock fund underlying indices. To identify the best performing U.S.-based SRI stock index (SRI Fund), we (1) identified all U.S.-based SRI stock indices with at least a 10-year history and (2) selected the index with the best 10-year Sharpe ratio (dating back from December 2011). To determine how the SRI Fund would have affected TSP stock portfolio performance from 1992 to 2011, we analyzed monthly total return data for the SRI Fund and the underlying indices of the three TSP stock funds provided by Morningstar, Inc.—a leading independent financial market research firm. We used these data to analyze changes in annual returns and volatility, in a manner similar to past analysis conducted by TSP when considering whether to add funds to the TSP portfolio. An important element of any performance statistic is the unit of time measurement. Our analysis measures returns on an annual basis and measures risk based on the variation in year-to-year returns.
Using a different unit of time, such as a month or even a multi-year period, could give a different picture of the risk/reward tradeoff. We calculated the compound rates of return and standard deviation based on annual rates of return from 1992 to 2011 for an annually rebalanced, evenly distributed portfolio of the three existing TSP stock fund indices (a distribution of 33 percent, 33 percent, and 33 percent). We then calculated the change in compound rates of return and standard deviation of annual returns for the following evenly distributed portfolios: a four-way combination (25 percent, 25 percent, 25 percent, and 25 percent) of the SRI Fund and the three TSP stock funds indices, all of the three-way combinations (33 percent, 33 percent, and 33 percent) of the SRI Fund with two of the TSP stock fund indices, all two-way combinations (50 percent and 50 percent) of the SRI Fund with one of the TSP stock fund indices, and the SRI Fund alone (100 percent). Another decision in any performance assessment is whether to do the analysis on a time-weighted or a dollar-weighted basis. A time-weighted basis gives equal weight to each unit of time; thus, the annual rate of return in 1992 gets just as much weight in the analysis as the annual rate of return in 2011. A dollar-weighted basis gives greater weight to the periods when more money is at stake. For example, for a TSP participant who made regular contributions to the plan during the 1992 to 2011 period, the overall rate of return would be more influenced by particular performance in the later years, when more contributions are at stake, than in the earlier years. We used a time-weighted basis for our analysis, in order to focus on investment performance itself, rather than on the particular economic consequences in the time period under study. 
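The portfolio calculations described above—compound rates of return and standard deviations of annual returns for annually rebalanced, evenly weighted fund combinations—can be sketched as follows. The annual returns below are hypothetical placeholders, not the actual TSP or SRI index data:

```python
from statistics import stdev

def portfolio_annual_returns(fund_returns, weights):
    """Annual returns of a portfolio rebalanced to fixed weights each year."""
    years = len(next(iter(fund_returns.values())))
    return [
        sum(weights[f] * fund_returns[f][y] for f in weights)
        for y in range(years)
    ]

def compound_return(returns):
    """Annualized (geometric) compound rate of return."""
    growth = 1.0
    for r in returns:
        growth *= 1 + r
    return growth ** (1 / len(returns)) - 1

# Hypothetical annual index returns (placeholders, not real data).
funds = {
    "C":   [0.10, -0.05, 0.18, 0.07, -0.12],
    "S":   [0.12, -0.08, 0.22, 0.05, -0.15],
    "I":   [0.06, -0.03, 0.14, 0.09, -0.10],
    "SRI": [0.09, -0.04, 0.16, 0.06, -0.11],
}

# Baseline: evenly weighted C/S/I; alternative: even four-way split with SRI.
baseline = portfolio_annual_returns(funds, {"C": 1/3, "S": 1/3, "I": 1/3})
with_sri = portfolio_annual_returns(funds, {f: 1/4 for f in funds})

for name, rets in [("C/S/I", baseline), ("SRI/C/S/I", with_sri)]:
    print(f"{name}: compound return {compound_return(rets):.2%}, "
          f"volatility {stdev(rets):.2%}")
```

Each allocation tested in the report (four-way, three-way, two-way, and SRI-only) would be a different `weights` dictionary; a time-weighted basis falls out naturally here because each annual return enters the calculation once, regardless of assets invested.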
To further assess the performance of the SRI Fund, we compared annual rates of return and compound cumulative rates of return for the SRI Fund and the three TSP stock fund indices over various time periods. Specifically, we reviewed performance over the 20-year period (1992 to 2011), the 10-year period (2002 to 2011), and periods of market weakness. Because this analysis is strictly based on past performance, this result does not guarantee or imply that the addition of an SRI index would have the same effect on future TSP stock fund portfolio performance. In addition, we analyzed the overlap of holdings of the SRI Fund and the C Fund as of April 2012. To analyze the diversification potential of the SRI Fund for the TSP stock portfolio, we analyzed the correlation coefficient, common variance, and independence of price movement between the SRI Fund and the C Fund over the last 20 years. To determine how the performance and cost of SRI mutual funds compare with those of non-SRI mutual funds, we compared performance over the past 15 years (1997 to 2011)—the longest time period for which data were available—and costs as of fiscal year 2010 provided by Morningstar. To identify the universe of SRI mutual funds, we included mutual funds from Morningstar considered to be SRI mutual funds based on the ethical screens employed and data on SRI mutual funds maintained by US SIF. To analyze performance and cost of SRI and non-SRI mutual funds active as of December 2011, we focused our analysis exclusively on three institutional grade share classes—institutional, front-load, and no-load—of U.S. domiciled open-end mutual funds, which experts identified as the most common form of SRI funds. We did not examine other forms of SRI, such as exchange traded funds, hedge funds, or variable annuities. Because this analysis is strictly based on past performance, these results do not guarantee or imply that these asset classes would perform similarly in the future.
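Risk-adjusted performance comparisons of this kind are commonly based on the Sharpe ratio (excess return per unit of total volatility) and the Sortino ratio (excess return per unit of downside deviation). A minimal sketch using one common set of conventions—exact formulas and risk-free or target returns vary by data provider, and the return series here is made up:

```python
from statistics import mean, stdev

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return per unit of total volatility (sample std. dev.)."""
    excess = [r - risk_free for r in returns]
    return mean(excess) / stdev(excess)

def sortino_ratio(returns, target=0.0):
    """Mean excess return per unit of downside (below-target) deviation.
    Assumes at least one return falls below the target."""
    excess = mean(r - target for r in returns)
    downside = [min(0.0, r - target) ** 2 for r in returns]
    semi_dev = (sum(downside) / len(returns)) ** 0.5
    return excess / semi_dev

# Hypothetical annual fund returns (placeholders, not real data).
fund = [0.08, -0.02, 0.12, 0.04, -0.06, 0.10]
print(f"Sharpe:  {sharpe_ratio(fund, risk_free=0.01):.3f}")
print(f"Sortino: {sortino_ratio(fund, target=0.01):.3f}")
```

Because the Sortino ratio penalizes only below-target returns, two funds with the same Sharpe ratio can rank differently under it when one fund's volatility comes mostly from upside moves.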
Performance statistics include measures of risk-adjusted returns over 5-, 10-, and 15-year time periods dating back from December 2011. (US SIF, formerly the Social Investment Forum, is a U.S. membership association for professionals, firms, institutions, and organizations engaged in sustainable and responsible investing.) Some mutual funds had more recent inception dates, thus limiting the number of funds in longer-term performance comparisons. Risk-adjusted return statistics include the Sharpe and Sortino ratios. Cost measures include fiscal year 2010 annual report net-expense ratios. To investigate why SRI and non-SRI mutual funds differed in performance, we ran regressions with and without controls for fund size, share class, and investment strategy not inherently related to SRI. We used the risk-adjusted performance measures, the Sharpe and Sortino ratios, as the dependent variables. Table 2 summarizes the results of 24 regressions for U.S. stock funds. The numbers in the columns labeled “Outcome Variable 1” and “Outcome Variable 2” are coefficient estimates on a flag indicating that a fund was non-SRI. As shown above, the Sharpe ratio served as the outcome variable for the first set of regressions (second column of the table). The Sortino ratio served as the outcome variable for the second set of regressions (third column of the table). The last column shows the average impact on the non-SRI flag coefficient of including the control variables for the row. For each outcome variable, inclusion of the control variables generally reduced the estimated performance premium of non-SRI funds versus SRI funds. The fund size and strategy variable sets both had substantial impacts on the estimated difference in SRI and non-SRI fund performance. The rows of the table show regression results for the 5-, 10-, and 15-year time frames. The rows indicating no control variables included only the non-SRI flag.
The rows indicating “with fund size” included fund size, along with the SRI variable, as explanatory variables. The rows indicating “with strategy categories” include controls for whether a fund is actively managed or passively tracks an index, the broad investment category of a fund based on portfolio statistics and composition (e.g., natural resources, real estate, or financial), the more narrowly defined institutional investment category of a fund based on portfolio statistics and composition (e.g., materials, domestic energy, technology, or utilities), and the market capitalization and type of stock (value, blend, and growth). The rows indicating “with all controls” provide results for regressions with share class as an explanatory variable in addition to the fund size and strategy variables. Table 3 shows results from 24 regressions for balanced funds. The methodology for these regressions was the same as that used for the regressions in table 2. Accounting for covariates did not have a consistent impact on the estimated difference in performance between SRI and non-SRI funds for these funds, with the addition of control variables to the regressions sometimes increasing the estimated difference in performance and sometimes decreasing it. Table 4 shows results for 24 regressions for bond funds. The methodology for these regressions was the same as that used for the regressions in table 2, except that one field (equity style box) was not included as an explanatory variable because it was not populated for 90 percent of these funds. Accounting for covariates generally decreased the estimated difference in performance between the SRI and non-SRI funds. To investigate disparities between SRI and non-SRI mutual fund costs, we ran regressions that controlled for fund size, share class, and investment strategy. The coefficient for a flag indicating SRI status is reported for four regressions in table 5.
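The regressions described above—a risk-adjusted performance measure regressed on a non-SRI indicator plus controls—can be sketched with a small ordinary-least-squares solver. The fund data below are fabricated so the true coefficients are known; the actual analysis used Morningstar fund-level data and many more control variables:

```python
def ols(y, X):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved with Gaussian elimination and partial pivoting."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    A = [XtX[i] + [Xty[i]] for i in range(k)]   # augmented matrix
    for col in range(k):
        pivot = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# Fabricated fund-level rows: [intercept, non-SRI flag, log fund size].
rows = [
    (1.0, 0, 2.0), (1.0, 0, 3.0), (1.0, 0, 4.0),
    (1.0, 1, 2.0), (1.0, 1, 3.5), (1.0, 1, 5.0),
]
# Simulated Sharpe ratios generated as 0.40 + 0.10*flag + 0.05*size,
# so OLS should recover those coefficients exactly.
sharpe = [0.40 + 0.10 * flag + 0.05 * size for _, flag, size in rows]

intercept, non_sri_premium, size_effect = ols(sharpe, [list(r) for r in rows])
print(f"estimated non-SRI performance premium: {non_sri_premium:.3f}")
```

The coefficient on the non-SRI flag plays the role of the "Outcome Variable" entries in tables 2 through 4: it is the estimated performance difference attributable to non-SRI status after the included controls are held fixed.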
The regression reported in the third column used fund size categories along with the SRI variable as explanatory variables. The strategy variables used in the regressions reported in the fourth and fifth columns were the same as those used for the performance regressions reported above. Once the fund size and investment strategy variables are taken into account, the estimated difference in cost between SRI and non-SRI funds falls to 0.06 percentage points. We assessed the reliability of the quantitative data used in this engagement provided by Morningstar by reviewing related documentation, interviewing knowledgeable officials, reviewing related internal controls, comparing to published data, and tracing a selection of data to source documentation. Based on this evaluation, we determined these data were reliable for the purposes of this report. We supplemented our quantitative analysis with qualitative data obtained from our interviews.

Appendix II: Variation of Annual Compound Rates of Return and Standard Deviation among Evenly Distributed Allocations of the Best Performing SRI Stock Index and Existing TSP Stock Index Funds, 1992 through 2011. [Table of allocations, with the existing TSP Funds (C, S, I) as the baseline, omitted.]

The formula for standard deviation:

σ = √( Σ (r_t − r̄)² / (n − 1) )

where
σ = the Greek letter commonly used to denote standard deviation
r_t = expected return on the series or portfolio
r̄ = the arithmetic mean of the return series
n = the number of periods

Sortino ratio. The Sortino ratio is a risk-adjusted return ratio that considers excess return over a designated target return and the risk of not achieving that target return. Excess return is defined as the series’ return less the target return; risk is considered to be the semi-standard deviation below the target return. The Sortino ratio therefore tells you how well you are being compensated by a series for each unit of shortfall risk you are incurring. The formula for the Sortino ratio:

Sortino ratio = (r̄ − T) / SD_T

where
SD_T = the target semi-standard deviation of the return series in question over the period in question; this is the square root of the target semi-variance, with T as the target return
r̄ = the arithmetic average return of the return series in question over the period in question

The following tables provide additional information on risk-adjusted performance of SRI and non-SRI mutual funds by asset class. In addition to the individual named above, Kimberley Granger, Assistant Director; Jonathan S. McMurray, Analyst-in-Charge; James Bennett; and Sarah Kaczmarek made significant contributions to this report. Kenneth Stockbridge, Roger Thomas, and Jack Wang also made key contributions.
Socially responsible investment—investment made on the basis of environmental, social, religious, or corporate governance criteria— in U.S.-based mutual funds exceeded $300 billion in value in 2010. TSP—a $308 billion retirement plan with more than 4.5 million participants—currently offers five distinct low-cost investment options, and is authorized to offer a service that enables direct participant investment in mutual funds outside TSP. GAO was asked to consider the value of adding an SRI option to TSP. GAO examined: (1) What challenges might TSP face in adopting an SRI option? (2) How would the addition of an index fund tracking an SRI index have affected past TSP stock portfolio performance? (3) How do the performance and costs of SRI mutual funds compare to those of non-SRI mutual funds? To analyze the challenges surrounding SRI, GAO interviewed federal officials, SRI experts, and representatives of public retirement plans that had considered SRI adoption. To examine the impact of adding an SRI fund to the existing TSP funds, GAO analyzed monthly benchmark return data. To examine mutual fund performance trends and costs, GAO analyzed historical summary data on U.S.-based mutual funds. GAO provided a copy of this draft report to the Board, the Department of Labor, and the Department of the Treasury for review and comment. None of the agencies provided formal comments on the report. Officials at the Thrift Savings Plan (TSP) and the other public retirement plans that had considered socially responsible investment (SRI) associated a number of common challenges with SRI adoption. While none of these plans were identical to TSP in scope or demographics, many plan officials shared similar challenges and concerns with TSP. For example, they identified participant demand, SRI screening criteria, and costs as the most common challenges.
Officials at public retirement plans that had adopted SRI cited some short-term benefits of SRI, such as providing participants an opportunity to invest in accordance with their values, but said that the long-term benefits were unknown. When compared to the past performance of the TSP stock portfolio, the addition of a hypothetical SRI index fund tracking the best-performing U.S.-based SRI stock index would not have both increased returns and lowered volatility in any allocation scenario that GAO tested. Specifically, over the last 20 years, if TSP had included such an SRI index fund in its existing stock portfolio, it could have resulted in (1) lower returns and lower volatility, (2) lower returns and higher volatility, or (3) higher returns and higher volatility, based on GAO’s analysis of evenly distributed portfolio allocations. The managers of the SRI index explained that the difference in the index’s performance over the last 20 years was a result of having different sector weightings than the overall market to align with the fund’s SRI strategy. Moreover, the addition of this SRI fund would have resulted in overlap with the TSP stock portfolio, and would not have provided a substantial opportunity for additional portfolio diversification. Looking more broadly at SRI mutual funds—the most common form of SRI in the United States—GAO found the comparative performance of SRI and non-SRI mutual funds to vary by asset class while costs were nearly the same. Regarding performance, SRI bond mutual funds had better risk-adjusted performance than their non-SRI counterparts over the last 15 years, while SRI stock and balanced funds did not. However, after controlling for various factors such as fund size, SRI stock mutual funds had better estimated performance as well. Regarding costs, in fiscal year 2010, the costs of SRI institutional grade mutual funds were similar to their non-SRI counterparts.
Although TSP participants cannot currently invest in mutual funds through TSP, the Federal Retirement Thrift Investment Board (Board) is authorized to offer a mutual fund window if it determines that it is in the best interests of participants. This report contains no recommendations.
In the 1990s, Congress and the executive branch laid out a statutory and management framework that provides the foundation for strengthening government performance and accountability, with GPRA as its centerpiece. GPRA is a continuation of more than 50 years of efforts to link resources with results. These management reforms of the past—the Budget and Accounting Procedures Act of 1950, the Planning-Programming-Budgeting System, Management by Objectives, and Zero-Base Budgeting—failed partly because they did not prove to be relevant to budget decision makers in the executive branch or Congress. GPRA melds the best features, and avoids the worst, of its predecessors. Unlike most of its predecessors, GPRA is grounded in statute, giving Congress an oversight stake in the success of this initiative. Moreover, unlike these other initiatives, GPRA explicitly sought to promote a connection between performance plans and budgets. The expectation was that agency goals and measures would be taken more seriously if they were perceived to be used and useful in the resource allocation process. GPRA has now entered its 10th year, has survived two successive administrations, and has periodically formed the basis for congressional oversight. The current administration has implemented several efforts to more completely integrate information about cost and performance during its annual budget review process. The President’s Management Agenda (PMA), by focusing on 14 targeted areas—5 mutually reinforcing governmentwide goals and 9 program initiatives—seeks to improve the management and performance of the federal government. Budget and performance integration is one of the administration's five priorities in the PMA, while PART is the central element in the performance budgeting piece of the PMA. To track agencies’ progress toward and current status in achieving each of the five PMA initiatives, OMB implemented an Executive Branch Management scorecard.
We have found that the value of the scorecard, with its red, yellow, and green “stoplight” grading system, is not, in fact, the scoring, but the degree to which scores lead to a sustained focus and demonstrable improvements. The Scorecard criteria for the budget and performance integration initiative include elements such as the integration of budget and planning staff, an integrated performance plan and budget grounded in outcome goals and aligned with the staff and resources necessary to achieve program targets, and whether the agency can document program effectiveness. While the scorecard focuses on the capacity of agency management to develop an infrastructure for performance budgeting, OMB’s PART is meant to more explicitly infuse performance information into the budget formulation process at a level at which funding decisions are made. PART was applied during the fiscal year 2004 budget cycle to 234 “programs.” OMB rated programs as “effective,” “moderately effective,” “adequate,” or “ineffective” based on program design, strategic planning, management, and results. If OMB deemed a program’s performance information and/or performance measures insufficient or inadequate, a fifth rating of “results not demonstrated” was given. According to OMB, the assessments were a factor in funding decisions for the President’s fiscal year 2004 budget request. In an unprecedented move, OMB has made the assessment tool, rating results, and supporting materials available on its Web site. OMB has said that it will apply PART to another 20 percent of programs and reassess the fiscal year 2004 programs in developing the President’s fiscal year 2005 budget request. Moreover, it has announced its intention to use agencies’ updated strategic plans, which were due in March 2003, as templates for future budget requests. 
During GPRA’s first 10 years, the federal government has managed, for the first time, to generate a systematic, governmentwide effort to develop strategic and performance plans covering the essential functions of government. While clearly a work in progress, the formulation of performance goals and indicators has laid the foundation for a more fundamental transformation in how the government does business. As we begin this next decade of performance management at the federal level, we may have reached a crossroad. Building on agencies’ hard-won achievements in developing plans and measures, the government now faces the challenge of promoting the use of that information in budget decision making, program improvement, and agency management. Promoting a more explicit use of performance information in decision making promises significant rewards, but it will not be easy, and in fact, is fraught with risks. Decision makers need a road map that defines what successful performance budgeting would look like, and that identifies the key elements and potential pitfalls on the critical path to success. In a sense, what is needed is a strategic plan for performance budgeting. In the remainder of this testimony I will discuss some of these key elements and risks, including a definition and expectations for performance budgeting itself; the underpinnings of credible performance information and measures; addressing the needs of various potential users; the alignment of performance planning with budget and financial management structures; elevating budget trade-offs; and the continuing role of congressional oversight. Performance-based budgeting can help enhance the government’s capacity to assess competing claims in the budget by arming budgetary decision makers with better information on the results of both individual programs as well as entire portfolios of tools and programs addressing common performance outcomes. 
Although not the answer to vexing resource trade-offs involving political choice, performance information could help policymakers address a number of questions, such as whether programs are contributing to their stated goals, well-coordinated with related initiatives at the federal level or elsewhere, and targeted to those most in need of services or benefits. It can also provide information on what outcomes are being achieved, whether resource investments have benefits that exceed their costs, and whether program managers have the requisite capacities to achieve promised results. Although performance budgeting can reasonably be expected to change the nature of resource debates, it is equally important to understand what it cannot do. Previous management reforms have been doomed by inflated and unrealistic expectations, so it is useful to be clear about current goals. Performance budgeting cannot replace the budget process as it currently exists, but it can help shift the focus of budgetary debates and oversight activities by changing the agenda of questions asked in these processes. Budgeting is essentially the allocation of resources; it inherently involves setting priorities. In its broadest sense, the budget debate is the place where competing claims and claimants come together to decide how much of the government’s scarce resources will be allocated across many compelling national purposes. Performance information can make a valuable contribution to this debate, but it is only one factor and it cannot substitute for difficult political choices. There will always be a debate about the appropriate role for the federal government and the need for various federal programs and policies—and performance information cannot settle that debate. It can, however, help move the debate to a more informed plane, one in which the focus is on competing claims and priorities. 
In fact, it raises the stakes by shifting the focus to what really matters—lives saved, children fed, successful transitions to self-sufficiency, and individuals lifted out of poverty. Under performance budgeting, people should not expect that good results will always be rewarded through the budget process while poor results will always have negative funding implications. Viewing performance budgeting as a mechanistic arrangement—a specific level of performance in exchange for a certain amount of funding—or in punitive terms—produce results or risk funding reductions—is not useful. Such mechanistic relationships cannot be sustained. Rather than increase accountability, these approaches might instead devalue the process by favoring managers who meet expectations by aiming low. The determination of priorities is a function of competing values and interests that may be informed by performance information but also reflects such factors as equity, unmet needs, and the appropriate role of the federal government in addressing these needs. OMB’s PART initiative illustrated that improving program design and management may be a necessary investment in some cases. For example, the Department of Energy’s Environmental Management (Cleanup) program was rated “ineffective” under PART. The administration recommended additional funds for the program compared to fiscal year 2002 funding and reported that the Department will continue to work with federal and state regulators to develop revised cleanup plans. The Department of State’s Refugee Admissions to the U.S. program was rated “adequate” under PART; in addition to recommending increased funding, the administration will review the relationship between this program and the Office of Refugee Resettlement at the Department of Health and Human Services. For its part, State will continue its ongoing efforts to improve strategic planning to ensure that goals are measurable and mission-related. 
Ultimately, performance budgeting seeks to increase decision makers’ understanding of the links between requested resources and expected performance outcomes. Such integration is critical to sustain and institutionalize performance management reforms. As the major annual process in the federal government where programs and activities come up for regular review and reexamination, the budget process itself benefits as well if the result of integration is better, more reliable performance information. For performance data to more fully inform resource allocations, decision makers must feel comfortable with the appropriateness and accuracy of the outcome information and measures presented—that is, they are comprehensive and valid indicators of a program’s outcomes. Decision makers likely will not use performance information that they do not perceive to be credible, reliable, and reflective of a consensus about performance goals among a community of interested parties. Moreover, decisions might be guided by misleading or incomplete information, which ultimately could further discourage the use of this information in resource allocation decisions. Accordingly, the quality and credibility of outcome-based performance information and the ability of federal agencies to produce such evaluations of their programs’ effectiveness are key to the success of performance-based budgeting. However, in the fiscal year 2004 President’s budget request, OMB rated 50 percent of PART programs as “results not demonstrated” because the programs did not have adequate performance goals or because data to gauge program performance were not available. Likewise, GAO’s work has noted limitations in the quality of agency performance and evaluation information and in agency capacity to produce rigorous evaluations of program effectiveness. 
We have previously reported that agencies have had difficulty assessing many program outcomes that are not quickly achieved or readily observed and contributions to outcomes that are only partly influenced by federal funds. Furthermore, our work has shown that few agencies deployed the rigorous research methods required to attribute changes in underlying outcomes to program activities. If budget decisions are to be based in part on performance data, the integrity, credibility, and quality of these data and related analyses become more important. Developing and reporting on credible information on outcomes achieved through federal programs remains a work in progress. For example, we previously reported that only five of the 24 Chief Financial Officers (CFO) Act agencies’ fiscal year 2000 performance reports included assessments of the completeness and reliability of their performance data in their transmittal letters. Further, although concerns about the quality of performance data were identified by the inspectors general as either major management challenges or included in the discussion of other challenges for 11 of the 24 agencies, none of the agencies identified any material inadequacies with their performance data in their performance reports. Moreover, reliable cost information is also important. Unfortunately, as we recently reported, most agencies’ financial management systems are not yet able to routinely produce information on the full cost of programs and projects as required by the Federal Financial Management Improvement Act of 1996 (FFMIA). The ultimate objective of FFMIA is to ensure that agency financial management systems routinely provide reliable, useful, and timely financial information, not just at year-end or for financial statements, so that government leaders will be better positioned to invest resources, reduce costs, oversee programs, and hold agency managers accountable for the way they run programs. 
To achieve the financial management improvements envisioned by the CFO Act, FFMIA, and more recently, the PMA, agencies need to modernize their financial management systems to generate reliable, useful, and timely financial information throughout the year and at year-end. Meeting the requirements of FFMIA presents long-standing, significant challenges that will be met only through time, investment, and sustained emphasis on correcting deficiencies in federal financial management systems. In the past, we have also noted limitations in agency capacity to produce high-quality evaluations of program effectiveness. Through GPRA reporting, agencies have increased the information available on program results. However, some program outcomes are not quickly achieved or readily observed, so agencies have drawn on systematic evaluation studies to supplement their performance data collection and better understand the reasons behind program performance. Yet in a survey based on 1995 data covering 23 departments and independent agencies, we found that agencies were devoting variable but relatively small amounts of resources to evaluating program results. Many program evaluation offices were small, had other responsibilities, and produced only a few effectiveness studies annually. Moreover, systematic program evaluations—and units responsible for producing them—had been concentrated in only a few agencies. Although many federal programs attempt to influence complex systems or events outside the immediate control of government, we have expressed continued concern that many agencies lack the capacity to undertake the program evaluations that are often needed to assess a federal program’s contributions to results where other influences may be at work. In addition to information on the outcomes, impact evaluations using scientific research methods are needed to isolate a particular program’s contribution to those outcomes. 
Yet in our survey, we found that the most commonly reported study design was judgmental assessment of program effects. These judgmental assessments, one-time surveys, and simple before-and-after studies accounted for 40 percent of the research methods used in agencies’ evaluation studies conducted during the period we studied. There are inherent challenges affecting agencies’ capacity to conduct evaluations of program effectiveness. For example, many agency programs are designed to be one part of a broader effort, working alongside other federal, state, local, nonprofit, and private initiatives to promote particular outcomes. Although information on the outcomes associated with a particular program may be collected, it is often difficult to isolate a particular program’s contribution to those outcomes. Additionally, where federal program responsibility has devolved to the states, federal agencies’ ability to influence program outcomes diminishes, while at the same time, their dependence on states and others for data with which to evaluate programs grows. In past reports, we have identified several promising ways agencies can potentially maximize their evaluation capacity. For example, careful targeting of federal evaluation resources on key policy or performance questions and leveraging federal and nonfederal resources show promise for addressing key questions about program results. Other ways agencies might leverage their current evaluation resources include adapting existing information systems to yield data on program results, drawing on the findings of a wide array of evaluations and audits, making multiple use of an evaluation’s findings, mining existing databases, and collaborating with state and local program partners to develop mutually useful performance data. 
Our work has also shown that advance coordination of evaluation activities conducted by program partners is necessary to help ensure that the results of diverse evaluation activities can be synthesized at the national level. Improvements in the quality of performance data and the capacity of federal agencies to perform program evaluations will require sustained commitment and investment of resources, but over the longer term, failing to discover and correct performance problems can be much more costly. More importantly, budgetary investments need to be viewed as part of a broader initiative to improve the accountability and management capacity of federal agencies and programs. Improving the supply of performance information is in and of itself insufficient to sustain performance management and achieve real improvements in management and program results. Rather, it needs to be accompanied by a demand for that information by decision makers and managers alike. The history of performance budgeting has shown that the supply of information will wither if it is perceived to have failed to affect decision making. Accordingly, PART may complement GPRA’s focus on increasing the supply of credible performance information by promoting the demand for this information in the budget decision making process. Successful use of performance information in budgeting should not be defined only by the impact on funding levels in presidential budget requests and the congressional budget process. Rather, resource allocation decisions are made at various other stages in the budget process, such as agency internal budget formulation and execution and in the congressional oversight and reauthorization process. 
If agency program managers perceive that program performance and evaluation data will be used to make resource decisions throughout the resource allocation process and can help them make better use of these resources, agencies may make greater investments in improving their capacity to produce and procure quality information. For example, in our work at the Administration on Children and Families, we describe three general ways in which resource allocation decisions at the programmatic level are influenced by performance: (1) training and technical assistance money is often allocated based on needs and grantee performance, (2) partnerships and collaboration help the agency work with grantees towards common goals and further the administration’s agenda, and (3) organizing and allocating staff around agency goals allow employees to link their day-to-day activities to longer-term results and outcomes. It is important to note that these and other examples from our work at the Veterans Health Administration and the Nuclear Regulatory Commission affect postappropriations resource decisions, that is, the stage where programs are being implemented during what is generally referred to as budget execution. Sustaining a focus on performance budgeting in the federal government is predicated upon aligning performance goals with all key management activities—budgeting, financial management, human capital management, capital acquisition, and information technology management. The closer the linkage between an agency’s performance goals, its budget presentation, and its net cost statement, the greater the reinforcement of performance management throughout the agency and the greater the reliability of budgetary and financial data associated with performance plans. 
Clearer and closer association between expected performance and budgetary requests can more explicitly inform budget discussions and focus them—both in Congress and in agencies—on expected results rather than solely on inputs or transactions. Throughout government, as figure 1 shows, there exists a general lack of integration among budget, performance, and financial reporting structures. Moreover, these structures can vary considerably across the departments and agencies of the federal government. For example, the current budget account structure was not created as a single integrated framework, but developed over time to reflect the many roles it has been asked to play and to address the diverse needs of its many users. It reflects a variety of different orientations which for the most part do not reflect agency performance goals or objectives. Agency budget accounts, for instance, can be organized by items of expense, organizational unit, program, or a combination of these categories. The general lack of integration between these structures can hamper the ability of agencies to establish and demonstrate the linkage between budget decisions and performance goals. While special analyses can help illustrate these linkages, such efforts are often burdensome and awkward. A systematic capacity to crosswalk among these disparate structures can help encourage a more seamless integration of resources with results. Better matching of full costs associated with performance goals helps increase decision makers’ understanding of the links between requested resources and expected performance outcomes. This will eventually require linkages between performance planning and budget structures (to highlight how requested resources would contribute to agency goals) as well as linkages between performance plans and financial reporting structures (to highlight the costs of achieving agency goals). 
Ultimately, over the longer term, this integration may require changing the structures themselves to harmonize their orientations. Our work indicates that progress has been made. Agencies are developing approaches to better link performance plans with budget presentations and financial reporting. They have made progress both in establishing linkages between performance plans and budget requests and in translating those linkages into budgetary terms by clearly allocating funding from the budget’s program activities to performance goals. For example, table 1 and figure 2 show the approaches used by the Department of Housing and Urban Development (HUD) in its last three performance plans. In table 1, for fiscal years 2000 and 2001, HUD used summary charts to array its requested resources by general goal but progressed from portraying this linkage with an “x” in fiscal year 2000 to using funding estimates derived from its budget request in fiscal year 2001. Figure 2 shows the fiscal year 2002 plan in which HUD removed the summary charts and instead directly portrayed the linkages in the body of the plan. We have also seen progress in agencies’ initial efforts to link annual performance reporting with annual audited financial statements. For example, for fiscal year 2000, 13 of the 24 agencies covered by the CFO Act, compared to 10 in fiscal year 1999, reported net costs in their audited annual financial statements using a structure that was based on their performance planning structure. Better understanding the full costs associated with program outcomes is another important but underdeveloped element of performance budgeting. This entails a broader effort to more fully measure the indirect and accrued costs of federal programs. The administration has proposed that agencies be charged for the government’s full share of the accruing costs of all pension and retiree health benefits for their employees as those benefits are earned. 
Such a proposal could help better reflect the full costs accrued in a given year by federal programs. Recognizing long-term costs is also important to understanding the future sustainability and flexibility of the government’s fiscal position. For activities such as environmental cleanup costs, the government’s commitment occurs years before the cash consequences are reflected in the budget. These costs should be considered at the time resource commitments are made. Building on past work, we are currently exploring these issues in greater detail. More broadly, timely, accurate, and useful financial information is essential for managing the government’s operations more efficiently, effectively, and economically; meeting the goals of financial reform legislation (such as the CFO Act); supporting results-oriented management approaches; and ensuring ongoing accountability. We have continued to point out that the federal government is a long way from successfully implementing the statutory reforms of the 1990s. Widespread financial management system weaknesses, poor recordkeeping and documentation, weak internal controls, and a lack of information have prevented the government from having the cost information needed to effectively and efficiently manage operations or accurately report a large portion of its assets, liabilities, and costs. Looking forward, it is appropriate to ask why all of this effort is worthwhile. Certainly making clear connections between resources, costs, and performance for programs is valuable. Improving evaluation capacity has the potential to create the demand to support further improvements. However, the real payoff will come in strengthening the budget process itself. The integration of budgeting and performance can strengthen budgeting in several ways. First, the focus on outcomes can broaden the debate and elevate budget trade-offs from individual programs to a discussion of how programs work together to achieve national goals. 
Although the evaluation of programs in isolation may be revealing, it is often critical to understand how each program fits with a broader portfolio of tools and strategies—such as regulations, direct loans, and tax expenditures—to accomplish federal goals. For example, in fiscal year 2000, the federal health care and Medicare budget functions included $319 billion in entitlement outlays, $91 billion in tax expenditures, $37 billion in discretionary budget authority, and $5 million in loan guarantees. (See fig. 3.) Achieving federal/national policy goals often depends on the federal government’s partners—including other levels of government, private employers, nonprofits, and other nongovernmental actors. The choice and design of these tools are critical in determining whether and how these actors will address federal objectives. GPRA required the President to prepare and submit to Congress a governmentwide performance plan to highlight broader, crosscutting missions, such as those discussed above. Unfortunately, this was not done in fiscal years 2003 and 2004; we hope that the President’s fiscal year 2005 budget does include such a plan. Second, a focus on performance can help us shift our view from incremental changes to an evaluation of the base itself. Making government adapt to meet the challenges of the future is broader than strengthening performance-informed resource decisions. Fiscal pressures created by the retirement of the baby boom generation and rising health care costs threaten to overwhelm the nation’s fiscal future. Difficult as it may seem to deal with the long-term challenges presented by known demographic trends, policymakers must not only address the major entitlement programs but also reexamine other budgetary priorities in light of the changing needs of this nation in the 21st century. Reclaiming our fiscal flexibility will require the reexamination of existing programs, policies, and activities. 
It is all too easy to accept “the base” as given and to subject only new proposals to scrutiny and analysis. As we have discussed previously, many federal programs, policies, and activities—their goals, their structures, and their processes—were designed decades ago to respond to earlier challenges. In previous testimony, we noted that the norm should be to reconsider the relevance or “fit” of any federal program, policy, or activity in today’s world and for the future. Such a review might ferret out programs that have proven to be outdated or persistently ineffective, or alternatively could prompt appropriate updating and modernizing activities through such actions as improving program targeting and efficiency, consolidation, or reengineering of processes and operations. This includes looking at a program’s relationship to other programs. Finally, and most critically, Congress must be involved in this debate and the resulting decisions and follow-up oversight activities. Congressional buy-in is critical to sustain any major management initiative, but 50 years of past efforts to link resources with results have shown that any successful effort must involve Congress as a partner given Congress’ central role in setting national priorities and allocating the resources to achieve them. In fact, the administration acknowledged that performance and accountability are shared responsibilities that must involve Congress. It will only be through the continued attention of Congress, the administration, and federal agencies that progress can be sustained and, more important, accelerated. Congress has, in effect, served as the institutional champion for many previous performance management initiatives, such as GPRA and the CFO Act, by providing a consistent focus for oversight and reinforcement of important policies. 
More generally, effective congressional oversight can help improve federal performance by examining the program structures agencies use to deliver products and services to ensure that the best, most cost-effective mix of strategies is in place to meet agency and national goals. As part of this oversight, Congress should consider the associated management and policy implications of crosscutting programs. Given this environment, Congress should also consider the need for processes that allow it to more systematically focus its oversight on programs with the most serious and systemic weaknesses and risks. At present, Congress has no direct vehicle to provide its perspective on governmentwide performance issues. Congress has no established mechanism to articulate performance goals for the broad missions of government, to assess alternative strategies that offer the most promise for achieving these goals, or to define an oversight agenda targeted at the most pressing crosscutting performance and management issues. Congress might consider whether a more structured oversight approach is needed to permit a coordinated congressional perspective on governmentwide performance matters. Such a process might also facilitate congressional input into the OMB PART initiative. For example, although the selection of programs and areas for review is ultimately the President’s decision, such choices might be informed and shaped by congressional views and perspectives on performance issues. How would “success” in performance budgeting be defined? Simply increasing the supply of performance information is not enough. If the information is not used—that is, if there is insufficient demand—the quality of the information will deteriorate and the process either will become rote or will wither away. 
However, for the reasons noted, the success of performance budgeting cannot be measured merely by the number of programs “killed” or a measurement of funding changes against performance “grades.” Rather, success must be measured in terms of the quality of the discussion, the transparency of the information, the meaningfulness of that information to key stakeholders, and how it is used in the decision-making process. If members of Congress and the executive branch have better information about the link between resources and results, they can make the trade-offs and choices cognizant of the many and often competing claims at the federal level. A comprehensive understanding of the needs of all participants in the budget process, including what measures and performance information are required at different stages of the budget cycle, is critical. Making performance budgeting a reality throughout the federal government will be facilitated by efforts to improve the structural alignment of performance planning goals with budget and cost accounting structures and presentations. However, developing credible performance measures and data on program results will be absolutely critical in determining whether the performance perspective becomes a compelling framework that decision makers will use in allocating resources. Performance budgeting is difficult work. It requires taking a hard look at existing programs and carefully reconsidering the goals those programs were intended to address—and whether those goals are still valid. It involves analyzing the effectiveness of programs and seeking out the reasons for success or failure. It involves navigating through the maze of federal programs and activities, in which multiple agencies may operate many different programs, to address often common or complementary objectives. 
However, the task of revising and reforming current programs and activities that may no longer be needed or that do not perform well is fraught with difficulties and leads to real “winners” and “losers.” Notwithstanding demonstrated weaknesses in program design and shortfalls in program results, there often seems to be little “low hanging fruit” in the federal budget. In fact, some argue that because some programs are already “in the base” in budgetary terms, they have a significant advantage over new initiatives and new demands.
Since the Government Performance and Results Act (GPRA) was enacted in 1993, federal agencies increasingly have been expected to link strategic plans and budget structures with program results. The current administration has taken several steps to strengthen and further the performance-resource linkage by making budget and performance integration one of its five management initiatives included in the President's Management Agenda. GAO has reported and testified numerous times on agencies' progress in making clearer connections between resources and results and how this information can inform budget deliberations. The administration's use of the Program Assessment Rating Tool (PART) for the fiscal year 2004 President's budget, and further efforts in fiscal year 2005 to make these connections more explicit, have prompted our examination of what can and cannot be expected from performance budgeting. Performance management is critical to delivering program results and ensuring accountability, but it is not without risks. Building on agencies' hard-won achievements in developing plans and measures, the government faces the challenge of promoting the use of that information in budget decision making, program improvement, and agency management. More explicit use of performance information in decision making promises significant rewards, but it will not be easy. Decision makers need a road map that defines what successful performance budgeting would look like, and identifies key elements and potential pitfalls. Credible performance information and measures are critical for building support for performance budgeting. For performance data to more fully inform resource allocation decisions, decision makers must feel comfortable with the appropriateness and accuracy of the outcome information and measures presented--that is, that they are comprehensive and valid indicators of a program's outcomes. 
Decision makers likely will not use performance information that they do not perceive to be credible, reliable, and reflective of a consensus about performance goals among a community of interested parties. The quality and credibility of outcome-based performance information and the ability of federal agencies to evaluate and demonstrate their programs' effectiveness are key to the success of performance budgeting. Successful performance budgeting is predicated on aligning performance goals with key management activities. The closer the linkage between an agency's performance goals, its budget presentation, and its net cost statement, the greater the reinforcement of performance management throughout the agency and the greater the reliability of budgetary and financial data associated with performance plans. Clearer and closer association between expected performance and budgetary requests can more explicitly inform budget discussions and shift the focus from inputs to expected results. The test of performance budgeting will be its potential to reshape the kinds of questions and trade-offs that are considered throughout the budget process. The real payoff will come in strengthening the budget process itself. The focus on outcomes potentially can broaden the debate and elevate budget trade-offs from individual programs to a discussion of how programs work together to achieve national goals. It is critical to understand how programs fit within a broader portfolio of tools and strategies for program delivery. Shifting perspectives from incremental budgeting to consideration of all resources available to a program, that is, base funding as well as new funds, potentially can lead to a reexamination of existing programs, policies, and activities. Prudent stewardship of our nation's resources is essential not only to meeting today's priorities, but also for delivering on future commitments and needs.
The TANF and CCDF programs are two of the nation’s key federal programs for assisting needy families with children and are an important component of states’ social services networks. These two programs each consist of more than 50 distinct state-level programs—one for each state, the District of Columbia, four territories, and numerous tribal entities. Annually, the federal government makes available to each state a portion of the (1) $16.5 billion TANF block grant that was established by PRWORA and (2) $4.8 billion from CCDF for child care subsidies and other related activities. Within HHS, the Administration for Children and Families (ACF) oversees states’ TANF and CCDF programs. Congress created TANF in 1996 to replace the decades-old Aid to Families With Dependent Children (AFDC) program that entitled eligible needy families to monthly cash assistance payments. PRWORA made sweeping changes to federal welfare policy, including ending individuals’ entitlement to aid, imposing time limits on the receipt of aid, and imposing work requirements on most adults receiving aid. This federal framework gives states the flexibility to design their own programs; define who will be eligible; establish what benefits and services will be available; and develop their own strategies for achieving program goals, including how to help recipients move into the workforce. PRWORA provides states substantial authority to use TANF funds in any way that is reasonably calculated to meet the goals of the program. As specified by PRWORA, TANF’s goals include ending the dependence of needy families on government benefits by promoting job preparation, work, and marriage; preventing and reducing the incidence of nonmarital pregnancies; and encouraging two-parent families. These goals are significantly broader in scope than those of AFDC. 
PRWORA also expanded the scope of services that could potentially be contracted out, such as determining eligibility for TANF, which had traditionally been done by government employees. In addition to these programmatic changes, PRWORA dramatically changed the fiscal structure of the program and shifted significant fiscal responsibility for the program to states. Each year, the federal government makes a fixed amount of TANF funds available to each state, and a state may reserve some of these funds for use in the future. This represents a significant departure from past policy, under which the amount of federal funds received was linked to the size of each state’s welfare caseload. To receive their federal TANF funds, states must spend a specified amount of their own funds each year, referred to as state maintenance of effort. Along with granting states significant flexibility, PRWORA redefined HHS’s role in administration of the nation’s welfare system, limiting its regulatory and enforcement authority and reducing its staff level for administering TANF. Specifically, the law states: “No officer or employee of the Federal Government may regulate the conduct of States under this part or enforce any provision of this part, except to the extent expressly provided in this part.” The law also eliminated the quality control system that HHS used to measure the accuracy of monthly welfare payments under AFDC. Under that system, states were required to statistically select a sample of cash assistance cases and determine the level of erroneous (improper) payments; if a state’s improper payment rate exceeded the targeted error rate, it faced a financial penalty. HHS states in the preamble to TANF regulations that PRWORA reflects the principle that the federal government should focus less attention on eligibility determinations and place more emphasis on program results. 
To that end, PRWORA gave HHS new responsibilities for tracking state performance, including a set of financial penalties for states that fail to comply with program requirements and a bonus program for states that perform well in meeting certain program goals. Several of these penalties reflect new expectations for states to assist recipients in making the transition to employment. For example, states face financial penalties if they do not place a minimum specified percentage of adult TANF recipients in work or work-related activities each year and if they provide federal TANF funds to families who have reached the TANF time limits on receipt of aid—60 months over a lifetime. The bonus program was to reward states for high performance toward achieving program goals, such as moving welfare recipients into jobs and reducing out-of-wedlock births. At the same time, Congress, through PRWORA, emphasized the importance of sound fiscal management for state TANF programs. One part of the new penalty system focused on penalties for states that use funds in violation of PRWORA, as identified through audits conducted under the Single Audit Act. In addition, the law stated that states are to include in the TANF plans that they file with HHS a certification that procedures are in place to combat fraud and abuse, although the law does not require the states to describe these procedures. Moreover, states are required to continue participating in the Income and Eligibility Verification System (IEVS) that provides information from various sources to help verify eligibility information. As state TANF programs have evolved since implementation, the nation’s welfare system now looks quite different than it did under AFDC, posing some challenges for defining and measuring improper payments. As our previous work has shown, welfare agencies now operate more like job centers, taking steps to move recipients into work and providing aid to help families avoid welfare. 
States now spend most TANF funds on a broad array of services for families rather than on monthly cash assistance, as shown in figure 1. These services include employment services, case management services, support services such as child care and transportation, and pregnancy prevention, among others. In addition, states offer various services to other low-income families not receiving welfare, including child care and employment and training services. In addition to the broad range of services provided by TANF programs, more entities receive and administer TANF program funds than before, posing additional challenges for states in managing improper payments. In many states, county or local governments receive TANF funds and are the key TANF administrative agencies, sometimes establishing their own policies and programs. States may also distribute TANF funds to several different state agencies to provide services. States and localities also may contract with a multitude of nonprofit and for-profit organizations. In our 2002 report on TANF contracting, our survey of states identified more than 5,000 TANF contracts with nongovernmental organizations at the state level and at least 1,500 contracts at the local level. We also found that in 2001, about a quarter of states had contracted out 20 percent or more of the TANF funds expended for services in fiscal year 2000, with shares ranging as high as 74 percent. Figure 2 shows the broad range of services for which TANF payments are made and the entities involved in the TANF payment processes. PRWORA also combined several existing child care programs into one program designed to provide states with more flexible funding for subsidizing the child care needs of low-income families who are working or preparing for work. CCDF provides states funds to subsidize child care assistance for families with incomes up to 85 percent of state median income who are working or in education or training. 
Under CCDF rules, eligible participants are to be allowed parental choice of child care providers, including center-based, home-based, or relative care. In addition, families are required to contribute to the cost of care, in the form of a copayment, unless states exempt families below the poverty level from this requirement. CCDF rules also provide some guidance on establishing reimbursement rates for child care providers and require that a specified portion of funds be set aside for activities designed to enhance child care quality. Within this framework, states establish their own income eligibility criteria and determine how the program will be administered. Like TANF, CCDF is administered through multiple agencies, including county and local governments and nonprofit and for-profit organizations. This decentralized system can create challenges for determining what constitutes an improper payment. Figure 3 illustrates the steps often involved in making child care payments. In recent years, federal and state CCDF expenditures have increased more than 100 percent—from $4.0 billion in 1997 to $8.6 billion in 2002, the most recent year for which data are available. At the federal level, ACF’s Office of Family Assistance (OFA) is responsible for overseeing TANF, and the Child Care Bureau is responsible for overseeing CCDF. Staff in the 10 ACF regional offices and the Office of Financial Services also assist in overseeing aspects of state TANF and CCDF programs. Figure 4 shows ACF’s organizational structure. OFA is responsible for overseeing TANF and coordinating HHS efforts to assist states in managing improper payments in the TANF program. 
Specifically, the office is responsible for (1) developing and implementing strategies to assist grantees in implementing and designing programs to meet TANF purposes; (2) ensuring compliance with federal laws and regulations; (3) implementing national policy and developing regulations to implement new laws; (4) developing regulations to implement data collection requirements; (5) implementing and maintaining systems for the collection and analysis of data, including participation rate information, recipient characteristics, financial and administrative data, state expenditures on families, work activities of noncustodial parents, transitional services, and data used in the assessment of state performance; and (6) identifying best practices and sharing information through conferences, publications, and other means. The Child Care Bureau is responsible for overseeing CCDF programs and coordinating HHS efforts to assist states in managing improper payments in the CCDF program. The Bureau’s responsibilities include (1) tracking grantee program implementation by collecting and analyzing information that states are required to report through CCDF plans, financial expenditure reports, and administrative data reports; (2) providing technical assistance to grantees concerning CCDF through the Child Care Technical Assistance Network, through which the Bureau sponsors national and regional conferences and meetings and supports the development of technical assistance materials and websites; (3) developing program policy guidance to grantees on the administration of CCDF, including questions related to what expenditures are allowable under the program; and (4) supporting research to disseminate findings that document emerging trends in the child care field. 
OFA and the Child Care Bureau share fiscal oversight responsibility with the 10 regional offices that are responsible for reviewing financial expenditure reports that states are required to submit as well as assisting in other program responsibilities. The Office of Financial Services is the HHS-designated lead unit for coordinating reporting on the agency’s efforts to manage improper payments in the TANF and CCDF programs. In November 2002, Congress passed the Improper Payments Act. The act requires the head of each agency to annually review all programs and activities that the agency administers and identify all such programs and activities that may be susceptible to significant improper payments. For each program and activity identified, the agency is required to estimate the annual amount of improper payments and submit those estimates to Congress before March 31 of the following applicable year. The act further requires that for any agency program or activity with estimated improper payments exceeding $10 million and 2.5 percent of program payments, the head of the agency shall provide a report on the actions the agency is taking to reduce those payments. The Improper Payments Act also required the Director of OMB to prescribe guidance to implement its requirements. OMB issued guidance on May 21, 2003, that provides instructions for estimating improper payment rates and requires agencies to set target rates for future reductions in improper payments, identify the types and causes of improper payments, and highlight variances from targets or goals established. Significantly, the May 2003 guidance also required 15 agencies to publicly report improper payment information for 46 programs identified in OMB Circular No. A-11 in the agencies’ fiscal year 2003 Performance and Accountability Reports. According to OMB, the programs were selected primarily because of their large dollar volumes ($2 billion or more in outlays). 
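The act's reporting trigger described above can be read as a simple two-part test: both a dollar floor and a rate floor must be exceeded. A minimal sketch of that decision logic (the function name and the example figures are hypothetical, for illustration only):

```python
def must_report(improper_payments: float, total_payments: float) -> bool:
    """Per the Improper Payments Act as summarized above, an agency must
    report corrective actions when estimated improper payments exceed
    BOTH $10 million and 2.5 percent of total program payments."""
    rate = improper_payments / total_payments
    return improper_payments > 10_000_000 and rate > 0.025

# Hypothetical illustration: $12 million improper out of $2 billion in outlays
# clears the dollar floor but is only 0.6 percent, so no report is triggered.
print(must_report(12_000_000, 2_000_000_000))   # False
print(must_report(60_000_000, 2_000_000_000))   # True (3 percent of outlays)
```

Because both conditions must hold, a large program can have sizable improper payments in dollar terms without crossing the 2.5 percent rate floor.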
The TANF and CCDF programs are included in the 46 programs. 

Internal Control Framework 

In most cases, the cause of improper payments can be traced to a lack of or breakdown in internal control. Our Standards for Internal Control in the Federal Government provides a road map for entities to establish control for all aspects of their operations and a basis against which entities’ control structures can be evaluated. Also, our Executive Guide on Strategies to Manage Improper Payments: Learning from Public and Private Sector Organizations focuses on the internal control standards as they relate to reducing improper payments. The five components of internal control—control environment, risk assessment, control activities, information and communication, and monitoring—are defined in the Executive Guide in relation to improper payments as follows: 

Control environment—creating a culture of accountability by establishing a positive and supportive attitude toward the achievement of established program outcomes. 

Risk assessment—analyzing program operations to determine where risks of improper payments exist, what those risks are, and the potential or actual impact of those risks on program operations. 

Control activities—taking actions to address identified risk areas and help ensure that management’s decisions and plans are carried out and program objectives are met. 

Information and communication—using and sharing relevant, reliable, and timely financial and non-financial information in managing activities related to improper payments. 

Monitoring—tracking improvement initiatives over time and identifying additional actions needed to further improve program efficiency and effectiveness. 

Improper payments in the TANF program can occur in all of the TANF payment types: ongoing monthly cash assistance payments to individuals or families; one-time payments to individuals or families; and payments made to a range of for-profits, non-profits, state agencies, and contractors. 
HHS has instructed states that they should recover any overpayments by recouping them from the recipients as a reduction in future TANF cash payments or by collecting cash repayments. HHS also has stated that the full amount of recovered overpayments made after October 1, 1996—PRWORA was signed into law in August 1996—is to be retained by the state and used for TANF program costs. Improper payments in the CCDF program can occur in all payment types: payments to child care providers or families. Almost all states we surveyed and visited reported taking some steps to assess whether their TANF and CCDF programs were at risk for improper payments or to measure the extent of improper payments. However, these efforts were uneven: not all states had assessed risks, risk assessments often did not cover all program payment types, and states’ measures of the amounts of improper payments did not always rely on rigorous methodologies. While these assessments provide some valuable information, they do not provide a comprehensive picture of the nature and extent of improper payments in TANF and CCDF programs among the 16 states. In addition, while the states reported they have various strategies and tools in place to help prevent and detect improper payments, these efforts were also uneven. While states understand the importance of addressing improper payments, they cited several factors that make it difficult for them to adequately manage improper payments. The unevenness of internal controls among states may result in missed opportunities to further address improper payments. Almost all the states we surveyed and visited reported performing some activities to assess whether their TANF and CCDF programs were at risk of improper payments. We defined a risk assessment as a formal or informal review and analysis of program operations. 
The purpose of a risk assessment is to determine where risks of improper payments exist, what those risks are, and the potential or actual impact of those risks on program operations. Conducting risk assessments helps to ensure that public funds are used appropriately and clients receive the proper benefits. Improper payments, including fraud, may occur in several different ways in the TANF and CCDF programs, involving clients, providers, and agency personnel. For example, an inadvertent error may result in an overpayment or underpayment when a client mistakenly fails to report some income, a provider accidentally receives payment due to a billing error, or a caseworker incorrectly records some information or makes an error in calculating a benefit amount. Improper payments due to fraudulent activity may occur, for example, when a client files for and receives benefits in two jurisdictions concurrently, a provider claims payment for services not rendered, or an agency employee creates a fictitious case and collects the benefit. In addition, a broad range of state entities may be involved in identifying improper payments and measuring the extent to which they occur. For overpayments and underpayments, these state entities may include frontline workers, quality control staff, or management staff. State entities involved in preventing and detecting fraud may include the state inspectors general offices, state fraud units, and state auditors. The 16 states we surveyed and visited reported a mix of risk assessment activities. These activities include state studies conducted under the Single Audit Act and other studies by state auditors, fraud units, and inspectors general. States also identified other activities, including reviews of program policies, one-time studies or pilots, and regular reviews of client cases. States generally reported more activities for TANF than CCDF programs. 
More specifically, TANF-related activities were more likely to include regular quality control reviews than CCDF activities, as might be expected given the requirements for the previous AFDC program. Table 1 provides some examples of states’ risk assessment activities. While states reported performing some risk assessment activities, these activities did not appear to be uniformly comprehensive in their coverage of all types of program payments. As shown in table 2, many of the states we surveyed said they had performed some type of an assessment or analysis of risk for three primary types of TANF payments, while others did not cover all of these payment types. Three states said they had assessed risks for monthly cash payments only. Data from HHS for fiscal year 2002 showed that in these three states, the percentage of TANF expenditures for cash assistance ranged from about 25 percent to more than 50 percent. (See app. I for each state’s percentage of TANF expenditures for cash assistance.) While fewer states reported assessing risk in payments to service providers, states typically have procedures in place to monitor these contracting activities, as we reported in our previous work. Most of the states we surveyed and visited reported taking steps to measure the extent of improper payments in their TANF and CCDF programs as part of their risk assessment activities, although the extent of these efforts was mixed. As shown in table 3, the surveyed states reported relying on a variety of methods to calculate their measures of improper payments. For the TANF program, four of the surveyed states (California, Maryland, Michigan, and Pennsylvania) as well as one site visit state (Texas) reported that they relied on a statistically representative sample to estimate an amount of improper payments, although these generally covered TANF monthly cash assistance payments only. 
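The sampling approach these states described, drawing a statistically representative sample of cash assistance payments and projecting a program-wide improper payment amount, can be illustrated with a simplified sketch. The function, sample layout, and figures below are invented for illustration; actual state quality control reviews typically use more elaborate stratified, dollar-weighted designs:

```python
import math

def estimate_improper(sample_payments, total_outlays, z=1.96):
    """Project a program-wide improper payment estimate from a simple
    random sample of audited payments. Each sample item is a tuple of
    (payment_amount, improper_amount_found). Returns the point estimate
    in dollars and a normal-approximation 95 percent confidence interval."""
    rates = [improper / amount for amount, improper in sample_payments]
    mean = sum(rates) / len(rates)
    var = sum((r - mean) ** 2 for r in rates) / (len(rates) - 1)
    se = math.sqrt(var / len(rates))                 # standard error of the mean rate
    point = mean * total_outlays                     # point estimate in dollars
    interval = ((mean - z * se) * total_outlays,     # lower confidence bound
                (mean + z * se) * total_outlays)     # upper confidence bound
    return point, interval

# Hypothetical audit sample: 2 of 5 reviewed cases contained overpayments.
sample = [(400, 0), (350, 50), (500, 0), (450, 0), (300, 30)]
point, (low, high) = estimate_improper(sample, total_outlays=100_000_000)
```

With a sample this small the interval is very wide, which is one reason one-time studies or pilots covering a single jurisdiction yield only rough indications rather than reliable statewide measures.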
Among the surveyed states, fewer reported estimating an amount of improper payments for the CCDF program than for the TANF program. Compared with TANF, CCDF measures of improper payments generally occurred on a more ad hoc basis, such as a one-time study or pilot effort that covered one jurisdiction of a state, and were less likely to result from regular reviews of cases. In one state we visited, child care officials said they estimated the amount of improper payments for the largest subsidized child care program but not the other three programs also supported with CCDF funds. Many of the states we visited and surveyed provided us data on the amount of improper payments in their TANF and CCDF programs, but these data do not provide a complete picture of the amount of improper payments in these states’ programs and cannot be used for comparisons among states. Too often, states’ assessment activities did not measure the amount of improper payments among all types of TANF payments and therefore did not present a complete picture of improper payments. In addition, some state data included amounts based on overpayments to clients only while others also included underpayments to clients based on agency errors. In other cases, the amount included only those payments identified as fraudulent but not other types of improper payments based on inadvertent mistakes. As a result, data were not comparable across states. However, data on the amount of improper payments can play an important role in states’ program management, helping them to identify program areas at risk so they can be addressed and to recover funds when possible. The following are some examples of these types of activities from the states we visited. In Texas, TANF program officials stated that the quality control unit and the fraud unit estimate the amount of improper payments, which include client error, agency error, and fraud. 
The quality control unit uses a statistically representative sample of cash payments to calculate improper payments and the fraud unit uses all claims established in the investigation system to estimate improper payments. Based on these methods, Texas officials estimated the amount of improper payments to be $6.3 million for the TANF program during fiscal year 2002. Furthermore, officials estimated that $5.7 million in improper payments were recovered that same year. In Illinois, child care program officials stated that suspected fraud cases are sent to the state Bureau of Investigations to be examined. In 2002, the Illinois Office of Inspector General completed 114 CCDF investigations, which identified $1,172,293 in overpayments. The office cited several examples of fraudulently received child care benefits, including the following: A client falsified her payroll information to qualify for child care assistance. The alleged overpayment was $27,203. A client falsified payroll information to qualify for child care and failed to report her true earnings. The child care overpayment totaled $45,174. In Virginia, child care program officials told us that they conducted a pilot study to assess the extent of fraud in the child care subsidy program. The pilot focused on 3 of the state’s 121 local social service offices. During the year-long pilot, a total of 28 fraudulent claims were identified, and based on these findings, officials determined that the savings that would accrue to the state would justify the costs of fraud monitoring. Child care officials identified several examples of fraudulent activity, including the following: A client failed to report income from a second job, that she was living with the child’s father, and the father’s earnings; the total household income made them ineligible for assistance. The total overpayment was $8,944. 
A provider submitted invoices for five siblings for child care provided during periods when the provider was not providing care and was not living near the children. The total overpayment was $14,931. States generally rely on information from risk assessment activities to identify the extent of program risks and to highlight problem areas. Officials in the states we surveyed responded that on the basis of their risk assessments, they did not perceive improper payments to be a great problem in either the TANF or CCDF programs. However, some CCDF officials reported improper payments as a moderate problem while none of the TANF officials did so, as shown in figure 5. As discussed previously, the nature and extent of states’ reported risk assessments varied greatly and often did not cover all payment types. This suggests their overall program risk assessments were based on a limited perspective. While state officials did not see improper payments as a great problem, they had identified factors that contributed to improper payments in their programs, as shown in table 4. TANF respondents most often identified inaccurate information on income, earnings, and assets and clients not meeting participation requirements as factors contributing to improper payments. Inaccurate information on income, earnings, and assets can occur, for example, when clients do not report income from employment or changes in earnings that they are required to report and that may affect the amount of their payments or basic eligibility for aid. For states’ child care programs, the surveyed officials identified factors associated with both clients and child care providers as contributing most frequently to improper payments, as shown in table 5. Officials in the states we visited identified examples of client- and provider-related problems. 
For example, Virginia CCDF officials identified several cases in which clients were no longer working or looking for work and therefore no longer eligible for a child care subsidy. Illinois officials cited several cases in which the provider gave inaccurate information on the amount of child care received. In one case, the provider billed the state for children she had stopped caring for, and in another case the provider billed the state for watching children during hours when the provider was actually working at another job. In addition to assessing a program’s risk of improper payments, states reported using other key aspects of an internal control system, including emphasizing accountability and using tools to prevent and detect fraud, although the extent of use varied among the states and was less widespread among CCDF programs. For example, states we surveyed sometimes used performance goals to instill a culture of accountability by working toward improvement and achievement of established program outcomes. Although improper payment estimates were incomplete (as noted in the previous section), table 6 shows that a majority of TANF programs and two CCDF programs surveyed had established goals for reducing improper payments. In addition, some states were required to generate reports on improper payments to senior government officials. This was also the case in one of the states we visited. Texas officials told us they have established statewide performance goals for reducing the TANF rate of improper payments and hold regional offices accountable for performance objectives. If regions fail to meet their objectives, they must draft and implement performance improvement plans, which are then monitored by state officials. 
Greater emphasis on reducing improper payments in state TANF programs likely stems from states’ experience under the former AFDC program, in which the federal government had more guidance and requirements specifically related to improper payment levels. In contrast, state CCDF assistance programs do not share that history and generally do not have the same formal internal control elements in place as in TANF. For example, officials in Virginia told us TANF fraud is more under control than child care fraud because there are more institutional processes in place to manage improper payments. They noted these processes are holdovers from the old AFDC program and pointed out that eligibility workers are more aware of improper payment activities in TANF because of the training they received under AFDC. Along these lines, CCDF officials in Virginia told us they do not have any performance goals or measures for reducing improper payments and pointed out that internal controls aimed at reducing fraud for the CCDF program are relatively new. In addition to performance goals and reporting requirements, each TANF and CCDF program reviewed reported performing a variety of activities to verify the accuracy of information used to determine client eligibility and the proper payment amount, as shown in table 7. For example, Illinois officials told us they verify, among other things, income, assets, residency, relationship of household members, age, school attendance, and child support payments for all appropriate household members to determine TANF eligibility. In addition, any caseworker or member of the public who is suspicious of welfare fraud is encouraged to complete a one-page on-line form that is submitted to Illinois’ Office of Inspector General. Fraud investigations are then initiated, if warranted. 
As the list of activities in table 7 demonstrates, many CCDF programs report that they verify the accuracy of payments to providers as well as clients, although this occurs in a variety of ways given the flexibility provided to states under CCDF. All CCDF programs surveyed reported that they confirm the licensing status of regulated child care providers before payments are made, and most conduct background checks for providers. For example, in Texas, CCDF funds are monitored in a two-tier system. CCDF funds are distributed by the state to 28 local boards that contract out the CCDF program. Contract monitors at the state level identify questionable costs from the boards, while contractors monitor the individual providers’ contracts at the local level. Some child care providers are not required to be licensed, and some CCDF officials reported having a more difficult time monitoring payments to these types of providers. These legal provider arrangements (referred to as unregulated or unlicensed providers) are generally established by parents and frequently involve care by a family member. Under CCDF, states are to allow parents to make their own decisions on the type of child care used, as long as they choose a legally operating provider. CCDF officials in Virginia told us there might be more potential for fraud among unregulated providers because officials have little knowledge about these providers and do not feel they have enough tools in place to monitor the legitimacy of all of them. In addition to activities taken by states to help ensure initial eligibility, all states surveyed reported requiring additional check-ins with clients to ensure that their eligibility status has not changed (often referred to as a redetermination). Most states surveyed said that they require a redetermination at least once every 12 months for both programs, although the method of check-in is generally more flexible for the CCDF program. 
For example, the majority of TANF programs require clients to visit the TANF office in order to continue receiving benefits. Conversely, most CCDF programs allow clients to check in by phone, fax, e-mail, or mail. This difference may be explained by state welfare programs’ long history of requiring periodic office visits for families to continue receiving monthly checks. In contrast, the newer CCDF program can be characterized as an important support for working families not associated with traditional welfare and the welfare office. Virginia CCDF officials told us redetermination methods stem from the philosophy that clients should not have excessive requirements to meet agency representatives face-to-face. A CCDF official in Washington echoed this sentiment when she told us benefit interviews are never meant to interfere with a client’s work or training schedule. These views are consistent with CCDF’s objective to assist parents with child care so that they can enter or remain in the workforce. One specific activity all the states reported relying on to help identify accurate eligibility information was data sharing, although the extent of use varied. Data sharing, a key control activity, allows comparison of information from different sources to confirm initial and continuing client or provider eligibility. All states reported performing at least one data sharing activity; however, the amount of data sharing varies greatly between the TANF and CCDF programs. Among the states we surveyed, while the majority of TANF programs reported data matching with at least 10 sources, the CCDF programs reported data matching with significantly fewer sources. For example, while all of the TANF programs we surveyed reported sharing data with the state department of labor or employment security to ensure that clients are correctly reporting their income levels, only 3 of 11 CCDF programs reported doing the same. 
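Data matching of the kind described above amounts to joining case records with an external source on a shared identifier and flagging discrepancies for follow-up. A minimal sketch, with invented identifiers and field names (real systems match against sources such as quarterly wage files, under strict privacy safeguards):

```python
# Hypothetical case records; identifiers and field names are invented.
tanf_cases = [
    {"ssn": "111-11-1111", "reported_income": 0},
    {"ssn": "222-22-2222", "reported_income": 600},
]
# Hypothetical quarterly wage data, e.g., from a state department of labor.
wage_records = {"111-11-1111": 1500, "222-22-2222": 600}

def flag_discrepancies(cases, wages, tolerance=50):
    """Return cases whose self-reported income differs from the matched
    wage record by more than the tolerance. A flag is a lead for
    caseworker follow-up, not itself a finding of an improper payment."""
    flags = []
    for case in cases:
        wage = wages.get(case["ssn"])
        if wage is not None and abs(wage - case["reported_income"]) > tolerance:
            flags.append((case["ssn"], case["reported_income"], wage))
    return flags

print(flag_discrepancies(tanf_cases, wage_records))
# [('111-11-1111', 0, 1500)]
```

A mismatch may reflect a reporting lag or a data error rather than an overpayment, which is why matched hits are typically verified before any benefit adjustment or recovery action.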
Appendix II summarizes data matching results from all surveyed states. The extent to which states reported using data sharing capabilities in the TANF and CCDF programs varied by program, in part because state TANF programs are more likely to have automated information systems that can help them analyze large amounts of data from other sources. Other possible explanations for this difference are the greater maturity of the TANF program and the existence of data sharing requirements for TANF that do not exist for CCDF. Additionally, under TANF’s predecessor (AFDC), the federal government funded a large portion of state-run automated computer system costs in earlier years. Recognizing the importance of automated systems in efficiently and accurately determining eligibility, Congress acted to encourage states to develop automated systems for the AFDC program by authorizing ACF to reimburse states for a significant proportion of their total costs to develop and operate automated eligibility determination systems that met federal requirements. Under PRWORA, states may use their TANF or CCDF funds for their automated system needs, although no specific federal requirements exist for these systems.

The level of sophistication of data sharing practices varied in the states we visited. For example, CCDF officials in Washington have implemented a complex automated system that allows them to find duplicate payments. Another automated data sharing resource frequently used with TANF programs is the Public Assistance Reporting Information System (PARIS). PARIS helps states voluntarily share information on public assistance programs to identify individuals or families who may be receiving benefit payments in more than one state simultaneously. Almost half of the TANF programs surveyed participate in PARIS. No CCDF programs surveyed participated in PARIS because the project was designed specifically for Medicaid, food stamps, and TANF. 
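Conceptually, the duplicate detection performed by automated matches such as Washington's system or PARIS reduces to comparing client identifiers across two benefit rolls. The sketch below is a simplified illustration, not a description of PARIS itself; the identifiers and rolls are hypothetical, and real systems must handle name variations, privacy restrictions, and far larger volumes.

```python
# Simplified illustration of interstate duplicate-payment detection, in the
# spirit of matches like PARIS. All identifiers and records are hypothetical.

def find_duplicate_clients(state_a_clients, state_b_clients):
    """Return identifiers that appear on both states' active benefit rolls."""
    return sorted(set(state_a_clients) & set(state_b_clients))


# Each roll is a set of client identifiers (e.g., SSN-based for TANF).
state_a_roll = {"111-22-3333", "222-33-4444", "333-44-5555"}
state_b_roll = {"222-33-4444", "444-55-6666"}

overlaps = find_duplicate_clients(state_a_roll, state_b_roll)
# An overlapping identifier is a lead for caseworker investigation, not
# proof of fraud: interstate moves can legitimately produce brief overlaps.
```

This also illustrates why CCDF's limited ability to require SSNs, discussed later in this report, constrains such matching: without a common identifier, cross-roll comparison becomes far less reliable.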
ACF officials said they are considering the possibilities of PARIS for the CCDF program. Not all data matching is done with automated systems, however. Georgia CCDF officials told us they had conducted a match with Head Start to ensure that families are not being paid twice for child care. To conduct this match, Head Start program officials provided CCDF administrators with a printed list of enrolled children, and officials cross-referenced the list to look for duplication. Officials noted that the process would have been more efficient if it were automated, but speculated that a lack of funding or of an ongoing partnership may be the reason the process was not automated.

While states reported having implemented many prevention and detection tools to manage improper payments, it is difficult to determine the relative effectiveness of these efforts. If states routinely performed comprehensive risk assessments or rigorously measured improper payments, it would be easier to understand the effect of these efforts. Without such strategies, the success of these initiatives cannot be quantitatively determined, and the return on investment is unknown.

While the states visited and surveyed understand the importance of addressing improper payments, many cited factors that make it difficult for them to address improper payments. Table 8 highlights the most frequently cited factors and demonstrates that many concerns were similar for the TANF and CCDF programs. Factors frequently cited in both programs include competing demands for staff attention and the lack of staff working specifically on improper payments. Based on their survey responses, one reason states often face competing demands is that they place their greatest focus on key mission goals, such as moving TANF clients into employment and meeting clients’ child care needs. This is consistent with the transformation in the federal welfare program from a cash welfare entitlement program to an employment program. 
Officials in some of our site visit states noted that the shift from AFDC to TANF changed the focus of the program. For example, Washington state officials said the TANF program emphasizes assisting the recipient with the tools needed to obtain and maintain employment. Illinois state officials also identified activities other than payment accuracy as their primary focus in meeting TANF program goals, such as providing income supports including child care assistance and transportation.

Related to these factors are states’ concerns about insufficient funding, with about half of the states citing this as a factor for TANF and CCDF. We also heard this concern from some of the state auditors we spoke with in site visit states; the auditor general in one state said that his office has not conducted any reviews of the TANF and CCDF programs outside of the single audit within the past few years, in part due to resource limitations and the loss of staff within the department. Among CCDF officials, survey respondents were also less likely to have focused on managing improper payments and more likely to have focused on other aspects of their program, such as matching clients with providers. For example, Kansas CCDF officials were concerned that policies and monitoring activities developed to prevent improper payments and fraud could become overly burdensome, thereby possibly limiting the quality of services they provide to the children and families they serve.

Officials also cited a lack of staff dedicated solely to addressing improper payments as problematic for both the TANF and CCDF programs. For example, Illinois officials said they have fraud cases that go uninvestigated because of low staff-to-caseload ratios or the loss of staff. Likewise, Virginia officials stated that there is a lack of investigator staff to pursue fraud cases. 
States’ concerns about how best to use limited resources highlight the importance of risk assessment as a key element of sound internal control systems. Risk assessment activities allow an organization to focus often limited resources on the most significant problem areas and determine where risks exist, what those risks are, and what needs to be done to address the identified risks. This helps to ensure that public funds are used appropriately and clients receive the proper benefits, thereby helping meet the program’s mission and goals. Officials also cited problems that were more prevalent in one program than the other. In the TANF program, officials expressed more concern about the reluctance of law enforcement to prosecute low dollar value cases. For example, TANF officials in Virginia told us about law enforcement officials’ reluctance to prosecute improper payment cases unless they reach a certain dollar amount. The commonwealth attorney in each county determines the threshold for prosecuting these cases. On the other hand, CCDF officials frequently cited their limited ability to use SSNs for data sharing as a problem. While the Social Security Act and implementing regulations require SSNs as a condition of eligibility for the TANF program, no such law exists for the CCDF program. States may not require SSNs for the CCDF program without violating the Privacy Act of 1974. States may request that applicants provide their SSNs but must make clear that supplying the numbers is not required as a condition of receiving services. HHS has told states they may use alternatives (such as a unique case identifier) to the SSN to verify non-applicant income and resources when determining eligibility and benefit levels of applicants. Regardless of HHS’s position on this issue, CCDF officials in Illinois reported that the inability to require SSNs presents the potential for fraudulent payments. 
Similarly, CCDF officials in Florida reported that they would like SSNs to be required at the federal level, because they believe the effectiveness of data sharing is limited when parents are allowed to report them voluntarily. On the other hand, at least one state we reviewed addressed this issue in its CCDF program by asking for SSNs, but noting that the provision of them is voluntary. This state said that clients provided SSNs in all but 2 percent of cases. In addition to the SSN issue, CCDF officials often cited insufficient funding as a factor that hinders their efforts to address improper payments. For example, Washington state CCDF officials said they do not have enough money to improve improper payment identifications and recoveries because CCDF rules cap administrative costs at 5 percent of the grant, and improper payment identification is a very labor-intensive process. Similarly, Virginia CCDF officials told us the reason they do not have enough staff dedicated to addressing improper payments is a result of the funding restrictions imposed by the CCDF’s administrative cap. While some states saw the administrative cap as a limitation, others did not. Nationwide, the average portion of total funds spent on administrative costs in the CCDF program is about 3 percent. In addition, states may structure their programs to use state maintenance of effort funds (required to receive a portion of their CCDF funds) for these costs because no administrative cap exists on these state funds. ACF officials explained that some activities related to identifying and addressing improper payments may not be considered administrative activities to be included under the cap. 
For example, eligibility determination and redetermination, training of child care staff, and the establishment and maintenance of computerized child care information systems are not to be considered administrative activities, and these activities can play an important role in states’ efforts to combat improper payments. At the same time, CCDF regulations state that activities such as program monitoring; audit services, including coordinating the resolution of audit and monitoring findings; and program evaluation are considered administrative. States’ choices about how they design and structure their internal control activities affect the extent to which the administrative cap may limit their efforts. HHS relies on the single audit process and financial expenditure reporting to monitor state compliance with federal guidelines and oversee whether states expend federal funds properly. These mechanisms, however, do not capture information on the various strategies and tools that states have in place for managing improper payments. In the absence of such information, HHS cannot adequately determine if the TANF and CCDF programs are susceptible to significant improper payments, as required by the Improper Payments Act. HHS officials acknowledge that they will need information on state activities to manage improper payments if they are to comply with the Improper Payments Act. As a result, HHS recently started several projects to collect information from selected states. HHS also initiated several projects to encourage state use of certain tools in managing improper payments, such as data matching capabilities. Several states in our review reported that they would like additional assistance from HHS in identifying effective practices for managing improper payments. While HHS’s projects are a good start, they do not provide mechanisms to gather information on state control activities on a recurring basis. 
The absence of such mechanisms could hinder HHS’s ability to assess the extent to which program payments may be at risk and comply with the Improper Payments Act. HHS is required to annually review the TANF and CCDF programs to determine if they are susceptible to significant improper payments. The Improper Payments Act also requires agencies to estimate the amount of improper payments if a program is determined to be susceptible to significant improper payments. HHS needs information on the various controls that states have in place to minimize improper payments in order to adequately assess risk.

In preparing its 2004 review of TANF and CCDF, HHS used findings from single audit reports, the key activity that HHS relies on to monitor state fiscal activities. Single audits assess whether states have complied with requirements in up to 14 managerial or financial areas, including allowable activities, allowable costs, cash management, eligibility, reporting, period of availability of funds, procurement, and subrecipient monitoring. Audit findings in many of these areas often identify control weaknesses that can lead to improper payments. Based on an analysis of single audit findings, particularly findings related to eligibility and allowable costs, HHS concluded in its January 2004 review that there were no systemic problems or improper payment trends in the TANF and CCDF programs. HHS also concluded that only a very small percentage of program costs have been classified as misspent funds based on the rate of questioned costs included in the single audit reports, which, according to HHS, has been less than 0.1 percent of program costs in recent years. 
While single audit findings, as well as the amounts of unallowable or questioned costs that the audits identify, are useful in determining the potential for improper payments in the TANF and CCDF programs, the audits are not designed to provide a complete description of the methods and activities that the states use to minimize improper payments. Questioned costs identified in single audits are also not intended to provide an estimate of the total amount of improper payments, and the methods used to derive questioned costs are not consistent among state auditors. For example, we observed variation in the methods that auditors used to identify questioned costs when testing whether TANF payments are accurate according to states’ eligibility and payment criteria. In reviewing the fiscal year 2002 and 2001 single audit reports for the five states we visited, we noted that some samples were selected statistically, so that any questioned costs could be projected to all TANF payments, and others were not. Also, some auditors determined that payments were improper if case files were missing or incomplete, while others identified improper payments based on the specific eligibility criteria that clients failed to meet.

HHS also reported that it considered information from its reviews of state expenditure reports in determining if TANF and CCDF payments were susceptible to significant improper payments. Federal guidelines require states to report on the expenditure of TANF and CCDF funds on a quarterly basis. HHS reported that its review of these reports helps to ensure that states are properly expending TANF and CCDF funds. However, regional office staff said that few resources are devoted to financial expenditure reviews and that the reviews are limited in identifying improper payments because expenditures are reported on a summary level and states are not required to submit the detailed financial reports that reviewers would need to identify improper payments. 
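The statistical projection described above, in which questioned costs found in a sample are extrapolated to all TANF payments, is at heart simple ratio arithmetic. The sketch below illustrates the calculation with hypothetical figures; an actual audit projection would also report confidence intervals around the estimate.

```python
# Illustrative projection of questioned costs from a statistically selected
# sample to total program payments. All dollar figures are hypothetical.

def project_questioned_costs(sample_payments, sample_questioned, total_payments):
    """Project the sample's questioned-cost rate onto total payments.

    Returns (error_rate, projected_questioned_costs). Valid only when the
    sample was drawn statistically, so the rate generalizes to the whole.
    """
    error_rate = sample_questioned / sample_payments
    return error_rate, error_rate * total_payments


# Hypothetical audit: $2,500 questioned in a $500,000 sample; $1 billion total.
rate, projected = project_questioned_costs(500_000, 2_500, 1_000_000_000)
# rate = 0.005 (0.5 percent of sampled payments); projected = $5,000,000
```

The inconsistency noted above matters here: when a sample is not statistical, or when "improper" means a missing case file to one auditor and a failed eligibility criterion to another, this projection is not valid, which is why questioned costs cannot serve as an improper payment estimate.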
As a result, these reviews provide little useful information in assessing the risk of improper payments. Also, HHS reported that it gains access to information about state practices and activities from the TANF and CCDF plans that PRWORA requires states to submit to HHS, although this information is not used directly to monitor state fiscal activities. The state plans describe the practices that states use to meet the key objectives and federal requirements of the TANF and CCDF programs. Further, for TANF plans, states are required to certify that they have procedures in place to combat fraud and abuse. However, states are not required to describe these procedures in their TANF plans. Similarly, CCDF plans do not require states to describe the procedures that they have in place to combat fraud and abuse, but HHS officials report that they often gain an understanding of state procedures in reviewing and approving these plans. HHS officials acknowledged that HHS’s monitoring activities do not provide enough information to determine if the TANF and CCDF programs are susceptible to significant improper payments.

In our most recent report on governmentwide improper payments initiatives, we reported that HHS did not include information on TANF and CCDF improper payments in its Performance and Accountability Report for fiscal year 2003, as required by OMB guidance for implementing the Improper Payments Act. The TANF and CCDF programs are among the 46 programs for which OMB required agencies to report the results of their improper payment efforts in the Management’s Discussion and Analysis section of their accountability reports for fiscal year 2003. Specifically, we reported that HHS did not report improper payment amounts, initiatives to prevent and reduce improper payments, or impediments to preventing or reducing them. HHS has started several initiatives intended to collect more information on state efforts to control TANF and CCDF improper payments. 
HHS has also started several initiatives to assist states in managing improper payments and to encourage state use of certain tools to minimize improper payments, such as data matching capabilities. These initiatives should help HHS begin to assess the risk of improper payments and send a strong signal to states that managing improper payments is an important issue. They should also help states understand that the information they provide HHS on the strategies and tools that they have in place to manage improper payments is critical to determining whether these programs are susceptible to significant improper payments.

HHS’s initiatives to collect more information on state CCDF programs are under way, and HHS is already starting to compile the results. HHS officials developed the CCDF initiative in September 2003. The overall goals of the initiative are to improve monitoring and administration regarding improper payments and fraud, provide better definitions of child care errors and child care fraud, and gather documented “best practices.” HHS officials also expect to identify other technical assistance materials and any new information reporting needs for the states. As part of the CCDF initiative, HHS recruited a state agency official with experience in program integrity to help the Child Care Bureau oversee the initiative. According to HHS officials, key actions for completing the initiative include:

- Working with selected states to determine whether there is an effective and cost-efficient approach or methodology for estimating improper payment amounts in the CCDF program.
- Conducting visits to some of the selected states to observe the internal control and other activities they have in place to manage improper payments.
- Coordinating with the HHS Office of Inspector General to provide training and technical assistance on improper payments and fraud to state CCDF officials.
- Coordinating with the United Council on Welfare Fraud and the American Public Human Services Association to discuss child care fraud and other issues.

HHS is working with 11 states (Arkansas, Connecticut, Georgia, Indiana, Maryland, Ohio, Oklahoma, Oregon, South Carolina, Virginia, and Wisconsin) on the project. According to HHS officials, these 11 states provide experience in dealing with erroneous payments, knowledge of the capacity of their automated systems, and strong working relationships among key state agencies. In addition, both centralized and county-based organizational structures are represented in the 11 states. HHS held initial meetings with the 11 states in November 2003 in Washington, D.C. State officials such as child care administrators, fraud directors, quality assurance directors, auditors, and investigators participated in the meetings, along with HHS Child Care Bureau and regional office staff. During the meetings, states discussed various approaches to controlling errors and fraud. In addition, the Child Care Bureau has conducted a number of conference calls with states, including one on PARIS. Since the November meeting, HHS has completed site visits to two states, Connecticut and Arkansas, and plans to complete visits to three other states—Indiana, Ohio, and Oklahoma—by the end of June 2004. HHS officials told us that they would compile all of the information from their visits into a report to analyze and identify possible options for estimating payment errors in the CCDF program and for improving program integrity. HHS expects to issue its report by September 2004. HHS has developed plans to implement three projects aimed at improving its monitoring activities for TANF and assistance to states. 
HHS is actively working with OMB on its implementation plans for the TANF projects to ensure that they strike the right balance between the authority that HHS has to oversee TANF, as set forth by PRWORA, and the requirements of the Improper Payments Act. The first project involves asking two states to volunteer for an expanded single audit review of their TANF programs by state auditors. Auditors are expected to conduct more detailed examinations of certain state controls, such as those used to determine that payments are in accordance with eligibility criteria and those controls used to oversee payments to entities that states contract with to provide TANF services. While this project only includes two states, HHS hopes to gain detailed knowledge of the adequacy of controls that states have in place to identify improper payments in all payment types. HHS said it plans to evaluate the first-year results of the project, report the information to OMB, and then decide upon second-year initiatives based on the initial results. According to HHS, it must still secure funding for these audits and obtain agreement from state auditors to perform the additional work. HHS is working with its Office of Inspector General to identify states to participate in the pilot project. The second TANF project involves collecting and sharing information on state activities to address improper payments. HHS is drafting a letter to states asking them for information on their “best practices” for addressing improper payments. HHS says the letter will request that states describe how they define improper payments in the state, the process used to identify such payments, and what actions are taken to reduce improper payments. HHS noted that the letter will make clear that a state's submission is voluntary. 
HHS also said it is working with OMB to ensure that the letter is in accordance with the oversight authority that HHS has under PRWORA and the requirements of the Paperwork Reduction Act of 1995. According to HHS, it plans to establish a repository for the state submissions, which would be available to all states for viewing on an HHS Web site.

The third project involves encouraging more states to use PARIS. PARIS is the interstate match program that was initiated to help state public assistance agencies share information to identify individuals or families who may be receiving duplicate payments, or may have duplicate payments improperly made on their behalf, in more than one state. In 2001, we reported on the usefulness of PARIS in identifying improper payments in the TANF program along with other programs for low-income individuals, such as food stamps and Medicaid. Currently, only 22 states participate in PARIS. Other states reported that they do not participate in PARIS for various reasons, including the lack of data showing that participating would produce savings for their state. ACF officials say they have promoted state awareness of PARIS at conferences, and ACF staff currently participate as members of the PARIS board of directors. In addition, HHS’s proposed fiscal year 2005 budget includes $2 million for PARIS activities. HHS plans to use $500,000 of the $2 million for contractor support to conduct an evaluation of participating states’ PARIS activities to (1) establish a valid and reliable method for calculating the costs and benefits of participating in PARIS and (2) disseminate data on costs and benefits to other states. HHS also plans to devote a full-time equivalent position to manage the PARIS project.

In carrying out these projects for TANF and CCDF, HHS expects to also provide more assistance to states in managing improper payments. Several states that we surveyed said they would like additional assistance from HHS in this area. 
We specifically asked states the following: To what extent, if any, have you received assistance from HHS (regional or headquarters) regarding identifying and managing improper payments in your state’s TANF and CCDF programs--assistance such as responses to state queries, any written guidance, any Web-based HHS information, conference, presentation, etc.? Many of the states we surveyed reported that they did not receive assistance from HHS regarding managing improper payments. As figure 7 shows, states reported that HHS generally provided little to no assistance for the CCDF program and moderate to some assistance for the TANF program on this topic. Several states said they would like additional assistance from HHS in managing improper payments. We also asked states if they would like assistance from a variety of national organizations, recognizing that other organizations play an important role in advising states on how to operate their TANF and CCDF programs. TANF officials most frequently indicated they would like assistance from the National Council of State Human Services Administrators (NCSHS) and the United Council on Welfare Fraud (UCOWF), while the CCDF officials primarily wanted assistance from the National Child Care Information Center (NCCIC). Regarding assistance from HHS, most states indicated that they would like additional assistance identifying and disseminating promising practices for managing improper payments, as figure 8 illustrates. Additionally, most CCDF programs reported that they would like HHS to provide guidance on what the federal law requires and allows with respect to improper payments. The projects for TANF and CCDF should help improve HHS monitoring activities as well as assistance to states. 
If successfully implemented, the projects will begin to provide HHS with a baseline of information on the various controls that states have in place for managing improper payments and thus improve HHS’s ability to determine if the TANF and CCDF programs are susceptible to significant improper payments. However, HHS’s projects do not provide mechanisms to gather information on state control activities on a recurring basis. The absence of such mechanisms hinders HHS’s ability to adequately assess the risk of improper payments and assist states in managing improper payments in these multibillion-dollar programs on an ongoing basis.

The extent to which the TANF and CCDF programs are vulnerable to improper payments cannot be determined given the information currently available nationwide and in the 16 states we reviewed. Given the dollar magnitude of these programs, about $34 billion in federal and state funds, and the nature of their activities, we know that potential risks exist. We also know, based on our review of the 16 states, that states have some prevention and detection tools in place and at least some understanding of the extent of program risks, although some unevenness exists among states and between the TANF and CCDF programs in these areas. What is not known, however, is the extent to which states’ internal control systems are sufficient to protect these programs against an unnecessarily high level of improper payments. While we acknowledge that states have a great deal of discretion in TANF and CCDF, HHS continues to have a fiduciary responsibility to ensure that states properly account for their use of federal funds and maintain adequate internal controls over the use of funds. In addition, it has requirements under the Improper Payments Act to assess the significance of risks for improper payments, which it cannot do with the information currently available. 
As a result, HHS needs mechanisms to gather information on state control activities on a recurring basis. HHS may determine that it needs legislative action to obtain this information from states. HHS may also require a shift in resources or additional resources to implement its efforts. It is essential that HHS move ahead with and expand its actions to better understand the internal control systems that states have in place and the extent to which program payments may be at risk. It can also play an important role in exploring the usefulness of expanding data sharing systems like PARIS to state CCDF programs. In the short term, program funds lost to fraud and abuse or used to support ineligible families mean other needy families cannot be helped. In the longer term, it means that federal resources may not be used as effectively and efficiently as possible to meet important federal goals. Insufficient attention to addressing improper payments can erode public confidence in and support for these programs. As HHS moves forward, attention must be paid to carefully balancing the flexibility allowed states under law and the need for accountability for federal funds.

To better assist states in managing improper payments in the TANF and CCDF programs and comply with the Improper Payments Act, we recommend that the Secretary of Health and Human Services direct the Assistant Secretary of ACF to take the following four actions:

- Develop mechanisms to gather information on a recurring basis from all states on their internal control systems for measuring and minimizing improper payments.
- Follow through on efforts to identify practices that states think are effective in minimizing improper payments and facilitate sharing of these practices with other states.
- Where appropriate, partner with states to assess the cost-effectiveness of selected practices.
- Explore the feasibility of expanding PARIS to include CCDF, in addition to TANF, including a study of the cost-effectiveness of such a plan. 
In recommending these approaches, we recognize that HHS may determine that it needs legislative action to direct states to provide the information. We also recognize that these approaches may require a shift in resources or additional resources.

ACF provided written comments on a draft of this report; these comments appear in appendix III. It also provided technical comments that we incorporated as appropriate. We also provided a draft of the report to the American Public Human Services Association, the professional organization of state welfare officials, which provided technical comments that we also incorporated as appropriate. In its comments, ACF said that the report provides HHS with new and useful information. It also expressed concerns about our recommendation for collecting information on state internal controls as it relates to the TANF program and said we did not address its ongoing initiatives.

Regarding CCDF, ACF said it welcomed our examination of improper payments in CCDF and added that our work complements its ongoing initiative to examine state efforts to address improper payments. While it did not specifically state that it agreed with our recommendations as they pertain to CCDF, it noted that its new efforts to examine child care improper payments are still in the early stages and that it is committed to considering a wide range of options for possible next steps. ACF also noted that our findings on states' views about the level and usefulness of ACF technical assistance related to improper payments may not reflect the recent and growing level of effort it provides to states in this area. We generally spoke with and surveyed states between December 2003 and February 2004; as a result, the time period of our review would not cover ACF's most recent efforts.

Regarding TANF, ACF agreed that new and improved information from states would enable HHS to better help states address improper payments. 
It also stated, however, that it believed that the assessment of risk called for under the Improper Payments Act must be made within the statutory framework of the TANF program, which places constraints on ACF to regulate state TANF programs. Within this statutory framework, ACF thinks its plan for acquiring additional information and assessing risk is adequate. It also expressed concern that the draft report did not adequately portray the regulatory constraints, particularly in its summary sections. In the draft report, we clearly stated the regulatory restrictions and noted that HHS may need to pursue additional legislative authority to collect the information needed on state internal control systems to assess program risk levels. We have added more of this information to our summary sections. We also recognize, and discuss in the draft report, that ACF has plans to ask states to provide voluntarily more information on their efforts to address improper payments in order to share that information with all states. We agree that this is an important effort; we found that states in our review often reported wanting more assistance from HHS on identifying promising practices in this area. However, ACF will need to expand upon this effort or pursue additional strategies to ensure it has information of sufficient detail to gain an understanding of states' internal control systems. Its current data collection strategy is not likely to lead to information of sufficient detail to adequately assess the risk of improper payments on a recurring basis. In addition, ACF said the draft did not address the relevant initiatives it has undertaken or will undertake during fiscal years 2004 and 2005 and it provided information on these initiatives. We disagree. Our draft discussed all of the initiatives for the CCDF and TANF programs that ACF noted in its comments. We did, however, enhance portions of the discussion based on information provided by ACF in its comment letter. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. At that time, we will send copies of this report to the Secretary of Health and Human Services and others who are interested. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staff have any questions about this report, please contact Linda M. Calbom at (202) 512-9508 or Calboml@gao.gov, or Cynthia M. Fagnoni at (202) 512-7215 or Fagnonic@gao.gov. Additional GAO contacts and acknowledgments are provided in appendix IV.

We designed our study to provide information on (1) what selected states have done to manage improper payments in the Temporary Assistance for Needy Families (TANF) and Child Care and Development Fund (CCDF) programs, and (2) what the Department of Health and Human Services (HHS) has done to assess risk and assist states in managing improper payments in these programs. To obtain information about these objectives, we developed a data collection instrument for state TANF directors and a separate one for state CCDF administrators, conducted in-person interviews with state TANF and CCDF program officials and state fraud officials, conducted telephone interviews with state auditors, reviewed information from our prior work, and conducted work at the federal level. In addition, we interviewed or consulted officials with professional associations, including the American Public Human Services Association (APHSA) and the United Council on Welfare Fraud. We provided a draft of this report to APHSA and HHS. HHS’s comments are included in appendix III, and technical comments from HHS and APHSA were incorporated as appropriate. We conducted our work from April 2003 through May 2004 in accordance with generally accepted government auditing standards.
To obtain information for this report, we judgmentally selected 16 states that reflect variations in the following characteristics: geographic location, level of TANF and CCDF program expenditures, and size of population. As part of our analysis, we sent data collection instruments to 11 states: California, Colorado, Florida, Idaho, Kansas, Maryland, Michigan, New Mexico, New York, Ohio, and Pennsylvania. We also visited 5 other states: Georgia, Illinois, Texas, Virginia, and Washington. Table 9 provides information on the amount of TANF expenditures for the 16 states in our review and each state’s TANF expenditure as a percentage of the U.S. total. The table also shows that together these states represent about 70 percent of total U.S. TANF expenditures. Table 10 provides information on the number of families and children served by the TANF program and the percentage of TANF expenditures attributed to cash assistance payments for the 16 states in our review. Table 11 provides information on the amount of CCDF expenditures, average number of children served, and the state CCDF expenditure as a percentage of the U.S. total for the 16 states in our review. The table also shows that together these 16 states represent almost 60 percent of total U.S. CCDF expenditures. Table 12 provides information on the number of providers operating in the selected states we reviewed and the percentage of those providers operating without regulation. Some limitations exist in any methodology that gathers information about programs undergoing change, such as those included in this review. Although we did not collect information on the entire population of states and therefore cannot generalize our findings beyond the 16 states in our review, we have used the information for descriptive/illustrative purposes. 
To obtain information on what selected states have done to manage improper payments in the TANF and CCDF programs, we surveyed states using a data collection instrument (DCI) for each program in 11 states. These DCIs were identical in many respects to allow comparisons between the two programs; the instruments differed to the extent necessary to capture different conditions and factors in each program. We pretested the instruments in two states with the key TANF and CCDF officials responsible for program administration and program integrity. In addition, we showed the instruments to and received input from Administration for Children and Families (ACF) officials at HHS. Separate data collection instruments were mailed to TANF directors and Child Care administrators in December 2003, and follow-up phone calls were made to state TANF and CCDF officials whose DCIs were not received by January 9, 2004. We addressed DCIs to each state TANF director and child care administrator and requested he or she to consult with other state officials who were most familiar with efforts taken to manage and identify improper payments to complete the DCI. We received responses from all 11 of the state TANF directors and 11 child care administrators, although each state did not respond to all questions. We did not independently verify the information obtained through the DCI, other than for specific dollar amounts for which we asked states to provide documentation. Data from the DCIs were double-keyed to ensure data entry accuracy and were independently verified. In addition, the information was analyzed using approved GAO statistical software (SAS). The DCIs included questions on an assessment of risk to decide the nature and extent of improper payments in the TANF and CCDF programs; other actions taken to prevent, identify, and reduce improper payments, including fraudulent payments in the TANF and CCDF programs; and assistance and guidance from HHS and other sources. 
The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the survey instrument, the data collection, and the data analysis to minimize these nonsampling errors. For example, a survey specialist designed the survey instrument in collaboration with GAO staff with subject matter expertise. Then, as stated earlier, it was pretested to ensure that the questions were relevant, clearly stated, and easy to comprehend. When the data were analyzed, a second, independent analyst checked all computer programs.

To obtain information about each of our objectives and, in particular, an understanding of the steps states have taken to identify and address improper payments, we interviewed state officials in Georgia, Virginia, Illinois, Texas, and Washington. We met with state TANF, CCDF, and fraud officials in these states. The interviews were administered using an interview guide that included questions similar to those on the DCIs. To obtain additional perspectives on TANF and CCDF mechanisms to manage improper payments, we conducted observations at local offices in the following locations: Springfield, Illinois; Austin, Texas; and Tumwater, Washington. In addition, we interviewed state auditors in the 5 states we visited, and we analyzed state single audit reports conducted under Office of Management and Budget’s (OMB) Circular A-133 for 15 of the 16 states in our review. We also reviewed documents provided by states that described their programs and internal control systems and that corroborated any data officials provided on the amounts of improper payments.
Review of Federal Role

To identify steps HHS has taken to assess risk and assist states in managing improper payments in the TANF and CCDF programs, we identified and reviewed policies and procedures that described HHS’s oversight activities; observed key oversight activities at an HHS regional office; reviewed documents, plans, and strategies for identifying improper payments; and interviewed ACF finance and program officials. We also reviewed results of audits done under OMB’s Circular No. A-133 and the Single Audit Act.

Data sources used in state data-matching efforts include the following:
- other human services programs in the agency or state
- state department of labor or employment security
- state directory of new hires
- state department of motor vehicles
- Public Assistance Reporting Information System
- prisons and criminal justice agencies at the state level
- other providers of services, education, and training
- Social Security Administration (SSA)
- form W-2 (wage statements)

Elspeth Grindstaff, Amanda Mackison, Kathryn Peterson, Cynthia Teddleton, and Kris Trueblood made major contributions to this report. Jerry Sandau provided technical assistance in analyzing data.

Related GAO Products

Financial Management: Fiscal Year 2003 Performance and Accountability Reports Provide Limited Information on Governmentwide Improper Payments. GAO-04-631T. Washington, D.C.: April 15, 2004.

Financial Management: Effective Implementation of the Improper Payments Information Act of 2002 Is Key to Reducing the Government's Improper Payments. GAO-03-991T. Washington, D.C.: July 14, 2003.

Single Audit: Single Audit Act Effectiveness Issues. GAO-02-877T. Washington, D.C.: June 26, 2002.

Welfare Reform: Federal Oversight of State and Local Contracting Can Be Strengthened. GAO-02-661. Washington, D.C.: June 11, 2002.

Welfare Reform: States Provide TANF-Funded Work Support Services to Many Low-Income Families Who Do Not Receive Cash Assistance. GAO-02-615T. Washington, D.C.: April 10, 2002.

Single Audit: Survey of CFO Act Agencies. GAO-02-376. Washington, D.C.: March 15, 2002.
Human Services Integration: Results of a GAO Cosponsored Conference on Modernizing Information Systems. GAO-02-121. Washington, D.C.: January 31, 2002.

Means-Tested Programs: Determining Financial Eligibility Is Cumbersome and Can Be Simplified. GAO-02-58. Washington, D.C.: November 2, 2001.

Strategies to Manage Improper Payments: Learning From Public and Private Sector Organizations. GAO-02-69G. Washington, D.C.: October 2001.

Public Assistance: PARIS Project Can Help States Reduce Improper Benefit Payments. GAO-01-935. Washington, D.C.: September 6, 2001.

Welfare Reform: Challenges in Maintaining a Federal-State Fiscal Partnership. GAO-01-828. Washington, D.C.: August 10, 2001.

Medicaid: State Efforts to Control Improper Payments Vary. GAO-01-662. Washington, D.C.: June 7, 2001.

The Challenge of Data Sharing: Results of a GAO-Sponsored Symposium on Benefit and Loan Programs. GAO-01-67. Washington, D.C.: October 20, 2000.

Benefit and Loan Programs: Improved Data Sharing Could Enhance Program Integrity. GAO/HEHS-00-119. Washington, D.C.: September 13, 2000.

Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999.

The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet.
GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
Minimizing improper payments is important given the dollar magnitude of the Temporary Assistance for Needy Families (TANF) and Child Care and Development Fund (CCDF) programs--about $34 billion in federal and state funds expended annually. These block grants support millions of low-income families with cash assistance, child care, and other services aimed at reducing their dependence on the government. At the federal level, the Department of Health and Human Services (HHS) oversees TANF and CCDF. Within states, many public and private entities administer these programs and share responsibility for financial integrity. GAO looked at (1) what selected states have done to manage improper payments in TANF and CCDF and (2) what HHS has done to assess risk and assist states in managing improper payments in these programs. To address these questions, GAO judgmentally selected states that varied in geographic location and program size. GAO used a survey to collect consistent information from 11 states and visited 5 states. The 16 states in GAO's review reported using various strategies and tools to manage improper payments, but their efforts were uneven. Almost all the states in the review reported that they performed some activities to assess whether their programs were at risk of improper payments. These activities, however, did not always cover all payments that could be at risk, focusing, for instance, on cash welfare payments but not on payments for services, which were more than half of all TANF payments in certain states. As a result, the assessments do not provide a comprehensive picture of the level of risk in these state programs, which would be useful to HHS as it takes steps to address requirements under the Improper Payments Act. 
States also reported using a variety of prevention and detection tools to protect against improper payments, but states reported fewer tools in place for CCDF than for TANF, particularly in the area of data sharing to verify eligibility. Although the states in GAO's review recognized the importance of addressing improper payments, they cited competing demands for staff attention and resource limitations that constrained their efforts. While addressing improper payments does involve costs, comprehensively assessing risks can help focus prevention and detection efforts on areas at greatest risk. HHS reported using information from its monitoring activities, including single audits and state financial expenditure reporting, to determine if the TANF and CCDF programs are at risk of improper payments. GAO found, however, that these activities do not capture information about the various strategies and tools that states have in place for managing improper payments, such as those observed in this review. In the absence of such information, HHS cannot determine if the TANF and CCDF programs are susceptible to significant improper payments, as required under the Improper Payments Act. HHS officials acknowledged that they needed more information to be in a position to carry out their responsibilities under the act and therefore recently initiated several projects to gain a better understanding of state control activities. However, HHS's projects do not provide mechanisms to gather information on a recurring basis. The absence of such mechanisms hinders HHS's ability to adequately assess the risk of improper payments and assist states in managing improper payments in these multibillion dollar programs on an ongoing basis. Given the statutory framework of the TANF program, GAO recognizes that HHS may determine that it needs legislative action to direct states to provide the information it needs to take this approach.
Faced with a goal of increasing the Department’s investments in modernization without increasing overall defense budgets, DOD has recently focused on the cost of support operations and their associated infrastructure, with the objective of finding ways to provide required support resources and capability at reduced costs. DOD recognizes that portions of its support structure are inefficient and continue to absorb a large share of the defense budget. To the extent support costs can be reduced, available future defense dollars could be used for modernization or other defense priorities. The Office of the Secretary of Defense (OSD) requested that the Defense Science Board (DSB) identify DOD activities that the private sector could perform more efficiently and determine the expected savings from outsourcing. DSB, a civilian advisory board to DOD, issued two reports in 1996 addressing outsourcing and other opportunities for substantially reducing DOD support services. The first focused solely on outsourcing and privatization issues. The second, incorporating findings from the earlier report, had a broader scope that included other methods for reducing infrastructure costs. In preparation for the Quadrennial Defense Review (QDR), OSD’s Program Analysis and Evaluation (PA&E) directorate assessed the DSB’s savings estimates from the second report. Our analysis also focused on the second report’s findings and recommendations. The first DSB task force concluded that DOD could realize savings of 30 to 40 percent of logistics costs and achieve broad improvements in service delivery and responsiveness by outsourcing support services traditionally done by government personnel. The report cited evidence from the Center for Naval Analyses (CNA) public-private competition studies of commercial and depot maintenance activities.
The Board also noted that an Outsourcing Institute study found that the private sector saved about 10 to 15 percent by outsourcing but that the public sector savings from outsourcing would be higher because of the inefficiency of government service organizations. The DSB task force stated that an aggressive DOD outsourcing initiative could generate savings ranging from $7 billion to $12 billion annually by fiscal year 2002. Building on the earlier study, DSB’s second task force report provided a new vision wherein DOD would only provide warfighting, direct battlefield support, policy- and decision-making, and oversight activities. All other activities would be done by the private sector. DSB said that DOD would need to make an investment of about $6 billion but would ultimately save about $30 billion annually by the year 2002, primarily through outsourcing support functions. Of these $30 billion in annual savings, $6 billion was to come from continental United States (CONUS) logistics infrastructure activities, which DSB defined as including inventory control points, distribution depots, maintenance depots, and installation supply and repair. About $4.2 billion of the savings would be achieved by outsourcing these activities; the remaining $1.8 billion would come from improvements in inventory management practices and equipment reliability. Table 1 shows a breakout of the estimated logistics infrastructure savings. According to the DSB estimates, the $6-billion savings represents an approximate 40-percent reduction in the $14 billion the Board estimated DOD spends annually for CONUS logistics activities. According to a DSB task force member, estimates for the cost of installation supply and repair activities were unavailable. Therefore, the group used $14 billion as a rough estimate to approximate total CONUS logistics cost, not including activities already contracted out.
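The components of DSB's logistics estimate can be checked with simple arithmetic. The sketch below merely restates the dollar figures cited above (in billions); it is illustrative only and is not part of DSB's or PA&E's analysis.

```python
# Arithmetic restatement of DSB's CONUS logistics savings estimate.
# All figures (in billions of dollars) come from the report text;
# this sketch is illustrative only.

outsourcing_savings = 4.2    # savings from outsourcing CONUS logistics activities
management_savings = 1.8     # savings from inventory management and reliability improvements
total_savings = outsourcing_savings + management_savings  # DSB's $6 billion estimate

conus_logistics_cost = 14.0  # DSB's rough estimate of annual CONUS logistics spending

print(f"Total estimated savings: ${total_savings:.0f} billion")
print(f"Reduction in CONUS logistics cost: {total_savings / conus_logistics_cost:.0%}")
```

The computed share, about 43 percent, is consistent with the "approximate 40-percent reduction" DSB cited.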
Although we were unable to substantiate those numbers, the available data indicate that DSB’s estimate of $14 billion for CONUS logistics costs is conservative. For example, the Navy has reported that more than $8.5 billion of Navy resources was applied in fiscal year 1996 to maintenance programs in support of fleet ships and aircraft. The report also stated that to gain economies and achieve significant savings, DOD needs to consider dramatic changes in the way it does business. DSB said the Department must get out of the material management/distribution and repair business by expanding contractor logistics support to all fielded weapon systems and by expanding the use of “prime vendors” for all commodities. Contractor logistics support, which relies on a contractor to provide long-term, total life-cycle logistics support, combines depot-level maintenance with wholesale and selected retail material management functions. Under the “prime vendor” concept, DOD would rely on a single vendor to buy, warehouse, and distribute inventory to the customer as needed, thus removing the Defense Logistics Agency and the services from their present middleman role. Our reviews of best practices within the private sector and ongoing work at DOD indicate that DOD has significant opportunities for reducing logistics costs and improving performance by changing its business processes. This work also indicates that determining the most cost-effective processes to use requires an evaluation of costs and benefits of each situation. These findings are consistent with the general theme of the DSB’s reports that opportunities exist for savings in the operation of DOD’s logistics support activities. However, DSB focused on outsourcing, while our work has focused first on reengineering and streamlining, and outsourcing where appropriate and more cost-effective.
Over the past several years, DOD has considered a number of actions to improve the efficiency and effectiveness of its logistics system. As with the private sector, such actions should include using highly accurate information systems, consolidating certain activities, employing various process streamlining methods, and outsourcing. For example, defense maintenance depots have about 40-percent excess capacity, and we have advocated consolidating workloads to take advantage of economies of scale and eliminate unnecessary duplication. Consolidating workloads from two closing depots would allow the Air Force, for instance, to achieve annual savings of over $200 million and reduce its excess capacity from 45 percent to about 8 percent. In addition, our work has pointed out the benefits of outsourcing when careful economic analysis indicates the private sector can provide required support at less cost than a DOD activity can. For example, the Defense Logistics Agency has successfully taken steps to use prime vendors to supply personnel items directly to military facilities. The consumable items under these vendor programs account for 2 percent of the consumable items DOD manages. DOD’s prime vendor program for medical supplies, along with other inventory reduction efforts, has resulted in savings that we estimate exceed $700 million. More importantly, this program has moved DOD out of the inventory storage and distribution function for these supplies, thus emptying warehouses, eliminating unnecessary layers of inventory, and reducing the overall size of the DOD supply system. Also, service is improved because DOD buys only the items that are currently needed and consumers can order and receive inventory within hours of the time the items are used. While DOD has achieved benefits from outsourcing, experience has shown that adequate competition is key to achieving significant cost reductions. Public-private competition studies by CNA have stressed this point.
In its 1993 review of the Navy’s Commercial Activities Program, CNA noted that about half the competitions were won by the in-house team and that, when competitions with no savings were excluded, the savings from contracts awarded to the public sector were 50 percent and those to the private sector were 40 percent. CNA officials concluded that, because of competition, both sectors were spurred to increase efficiency and reduce costs, and DOD achieved greater savings. CNA also concluded that savings would have been less had the public sector been excluded from competition. Likewise, our review of DOD’s public-private competition program for depot maintenance determined that such competitions resulted in reduced costs. Facing increasing pressures to maintain market competitiveness, private companies have been reevaluating their organization and processes to cut costs and improve customer service. The most successful improvements include (1) using highly accurate information systems that provide cost, tracking, and control data; (2) consolidating and/or centralizing certain activities; (3) employing various methods to streamline work processes; and (4) shifting certain activities to third-party providers. Each company’s overall business strategy and assessment of “core competencies” guide which tools to use and how to use them. Private companies use a variety of approaches to meet their logistics support needs. For example, Southwest Airlines contracts out almost all maintenance, thus avoiding costly investments in facilities, personnel, and inventory. In contrast, having already made a significant investment in building infrastructure and training personnel, British Airways reached a different decision about its support operations. While it has sold off and/or outsourced some activities (namely engine repair and parts supply) and improved remaining in-house repair operations, the airline has now become a third-party supplier of aircraft overhaul.
Whether the organization decides to consolidate, reengineer, or outsource activities, or to do some combination thereof, the private firms and consultants with whom we met stressed that identifying and understanding the organization’s core activities and obtaining accurate cost data for all in-house operations are critical to making informed business decisions and assessing overall performance. Core activities are those that are essential for meeting an organization’s mission. Before making decisions on what cost-saving options should be used, an organization should develop a performance-based, risk-adjusted analysis of benefits and costs for each option to provide (1) the foundation for comparing the baseline benefits and costs with proposed options and (2) a basis for decisionmakers to use in selecting a feasible option that meets performance goals. The organization should also factor into the analysis the barriers and risks in implementing the options. Thus, the best practice would be to make an outsourcing decision only after a core assessment and comprehensive cost-benefit analysis have been performed rather than to take a blanket approach and outsource everything in a certain area. PA&E’s analysis of the DSB’s estimated $6 billion in annual logistics savings found that the estimate was overstated by about $1 billion and that another $3 billion in projected savings would be difficult to achieve or unlikely to be achieved. According to PA&E officials, DSB’s $6-billion savings estimate was overstated by about $1 billion because contract administration and oversight costs were understated and one-time inventory savings (spread over 6 years) were claimed as steady-state savings. Further, in assessing the degree of difficulty in achieving the savings, PA&E concluded that about $1 billion would be difficult to achieve, but was possible if Congress changed the required 60/40 public-private split to 50-50, which has since occurred.
PA&E also believed that another $2 billion was unlikely to be saved, primarily because of timing and DOD’s culture. It did not believe that DOD could carry out the proposals within the DSB’s 6-year schedule, if at all. PA&E’s assessment concluded that the remaining $2 billion of the DSB’s $6-billion savings estimate was achievable or already identified in DOD’s future year defense program. PA&E officials defined as achievable those savings that they believed could be realized given DSB’s 25-percent savings assumption and the then-current legal restrictions on outsourcing depot maintenance activities. About $0.2 billion in savings would involve maximizing the use of outsourcing under legislative constraints as they existed at that time, such as the 60/40 rule. The remainder of the achievable savings has already been identified in DOD’s future year defense program. Table 2 shows PA&E’s revised estimate of the DSB’s logistics savings. Our analysis confirms PA&E’s conclusion that the DSB’s logistics savings estimates are not well supported and are unlikely to be as large as estimated. Specifically, we found that (1) the Board’s projected annual savings from reliability improvements are overstated by over $1 billion; (2) the DSB’s 25-percent savings rate from outsourcing appears to be overly optimistic; and (3) DSB, while recognizing it would be difficult to do so, assumed that DOD would overcome impediments that prevent the outsourcing of all logistics functions. We do not know whether, or by how much, these issues would change the $2 billion in savings that PA&E concluded was achievable. In addition to overstating inventory management savings noted by PA&E, the DSB task force overstated its estimate of annual savings from equipment reliability improvements. The Board’s estimate of $1.5 billion in annual savings by year 2002 (6 years from the year of DSB’s study) is overstated by at least $1.2 billion.
DSB based its estimate on a Logistics Management Institute (LMI) study that assessed the reductions of operation and support costs that result from improved reliability and maintainability due to technological advancements. Such advancements may include using improved materials and fewer component parts, thus reducing the number of spare parts purchases and the need for scheduled and unscheduled maintenance. Accomplishing these advancements requires an investment that must be evaluated in light of the expected return on investment. For its study, LMI assumed an aggressive technology improvement program. For example, it assumed a 9 to 1 return on investment that would accrue over 20 years, with savings starting in the second year. Further, it assumed that any given investment would generate a savings stream for at least 10 years. Based on these assumptions and its analysis, LMI concluded that with an annual investment starting at $100 million and leveling at $500 million within 5 years, DOD could achieve $300 million in savings in the sixth year. DOD would not achieve the $1.5-billion savings that DSB included in its savings estimate until the fourteenth year. Thus, even without questioning LMI’s aggressive assumptions, the DSB’s savings estimate is overstated by at least $1.2 billion.

DSB assumed that outsourcing all logistics activities would reduce DOD’s logistics costs by 25 percent. The Board based this projection on public-private competition studies, industry studies by such companies as Caterpillar and Boeing, and anecdotal evidence. While we believe that savings can be achieved through appropriate outsourcing, these savings result from competition rather than from outsourcing itself. The studies DSB cited were primarily for commercial activities—such as base operations, real property maintenance, and food service. As we have reported, these activities generally have highly competitive markets.
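The reliability-savings overstatement discussed above follows directly from the LMI figures. The sketch below (amounts in billions of dollars) is a simple restatement of that arithmetic, not a reconstruction of LMI's underlying return-on-investment model.

```python
# Comparison of DSB's claimed reliability savings with LMI's projection.
# Figures (in billions of dollars) come from the report text; this is
# illustrative arithmetic only, not LMI's investment model.

dsb_claimed_savings = 1.5    # annual savings DSB claimed by year 2002 (the sixth year)
lmi_year6_savings = 0.3      # savings LMI's own analysis projected for the sixth year

overstatement = dsb_claimed_savings - lmi_year6_savings
print(f"DSB's estimate is overstated by at least ${overstatement:.1f} billion per year")
```

Per LMI's analysis, the full $1.5 billion in annual savings would not arrive until the fourteenth year, well beyond DSB's 6-year horizon.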
For some logistics activities, such as nonship depot maintenance, our recent work has shown that competitive markets do not currently exist. To the extent that competitive markets do not exist, the amount of savings that can be generated through outsourcing may be reduced. As we reported in 1996, 76 percent of the 240 open depot maintenance contracts we examined were awarded noncompetitively (i.e., sole source). More recently, we reported that the percentage of noncompetitive depot maintenance contracts had increased for activities other than shipyards. For the three services, about 91 percent of the 15,346 new depot maintenance contracts awarded from the beginning of fiscal year 1996 to date were sole source. Moreover, the DSB recommended contractor logistics support arrangements for new and modified weapon systems. Our past work demonstrates that most contractor logistics support depot work is sole sourced to the original equipment manufacturer, raising cost and future competition concerns. Furthermore, eliminating the public sector from competition, as advocated by DSB, could further decrease savings. In developing its savings estimates for CONUS logistics, DSB assumed that DOD would outsource all logistics activity. However, certain barriers, including legal and cultural impediments, must be overcome to fully implement DSB’s recommendations. While it may be possible to implement DSB’s recommendations, in some cases implementation may require congressional action, and in others it may take substantially longer than DSB’s 6-year estimate. We did not quantify the extent to which these impediments would reduce DSB’s estimated savings, but, consistent with PA&E’s analysis, these factors will offset portions of the projected savings.
Although it recommended that essentially all logistics—including material management and depot maintenance, distribution, and other activities—be outsourced, DSB recognized that outsourcing is limited or precluded by various laws and regulations. For example, fundamental to determining whether or not to outsource is the identification of core functions and activities. Section 2464 of title 10, U.S. Code, states that DOD activities should maintain the government-owned and government-operated core logistics capability necessary to maintain and repair weapon systems and other military equipment needed to fulfill national strategic and contingency plans. The delineation of core activities has historically proven to be extremely difficult. For example, proponents of increased privatization have questioned the justification for retaining many support activities as core and have recommended revising the core logistics requirement. Section 311 of the 1996 DOD Authorization Act directed the Secretary of Defense to develop a comprehensive depot maintenance policy, including a definition of DOD’s required core depot maintenance capability. While DOD has identified a process for determining core depot maintenance capability requirements, it has not completed its evaluation. Moreover, DOD has not developed a process for identifying core requirements for other logistics functions and activities. Thus, core requirements in these areas are also unknown. The 1998 DOD Authorization Act again requires that the Department identify its core depot maintenance requirements, this time under the new provisions described above.

Additionally, 10 U.S.C. 2466 states that no more than 50 percent of the depot maintenance funds made available in a given fiscal year may be spent for depot maintenance conducted by nonfederal personnel. 
This provision, along with other relevant provisions, significantly affects DSB’s savings estimate because about 50 percent of depot maintenance would not be subject to outsourcing. Section 2469 of title 10 states that DOD-performed depot maintenance and repair workloads valued at not less than $3 million cannot be changed to contractor-performed work without using competitive procedures that include both public and private entities. This requirement for public-private competition affects the DSB savings estimate because DSB assumed the requirement would be eliminated. The 1998 DOD Authorization Act also added a new section 2469a to title 10 that affects public-private competitions for certain workloads from closed or realigned installations.

Further, during the congressional deliberation on the 1997 DOD Authorization Act, DOD provided Congress a list of statutory encumbrances to outsourcing, including
10 U.S.C. 2461, which requires studies and reports before converting public workloads to a contractor;
10 U.S.C. 2465, which prohibits contracts for performance of fire-fighting and security guard functions;
section 317 of the National Defense Authorization Act for Fiscal Year 1987 (P.L. 99-661), which prohibits the Secretary of Defense from contracting for the functions performed at Crane Army Ammunition Activity or McAlester Army Ammunition Plant;
10 U.S.C. 4532, which requires the Army to have supplies made by factories and arsenals if they can do so economically; and
10 U.S.C. 2305(a)(1), which specifies that in preparing for the procurement of property or services, the Secretary of Defense shall specify the agency’s needs and solicit bids or proposals in a manner designed to achieve full and open competition.

DOD officials have repeatedly recognized the importance of using resources for the highest priority operational and investment needs rather than maintaining unneeded property, facilities, and overhead. 
However, DOD has found that infrastructure reductions, whether through outsourcing or some other means, are difficult and painful because achieving significant cost savings may require up-front investments, the closure of installations, and the elimination of military and civilian jobs. In addition, according to DOD officials, the military services fear that savings achieved from outsourcing would be diverted to support other DOD requirements and may not be available to the outsourcing organization to fund service needs. DSB recognized DOD’s cultural resistance to outsourcing logistics activities and said that overcoming this resistance may take some time. DOD has a tradition of remarkable military achievement, but it also has an entrenched culture that resists dramatic changes from well-established patterns of behavior. In 1992, we reported that academic experts and business executives generally agreed that a culture change is a long-term effort that takes at least 5 to 10 years to complete. Although a change in DOD’s management culture is underway, the continued support of its top managers is critical to the successful completion of that change.

We agree with DSB that there are many opportunities for significant reductions in logistics infrastructure costs. However, the Board’s projected savings are overly optimistic. Further, savings opportunities from consolidating and reengineering must be considered in addition to outsourcing. Even though the Board recognized that there are impediments to outsourcing, PA&E’s and our analyses show that because of such impediments, not all logistics activities can be outsourced. This is particularly true for the legislative barriers—principally, the legislated workload mix between the public and private sectors. Moreover, PA&E’s and our analyses show estimating errors of about $1 billion for contract administration and inventory reductions and another $1 billion for reliability improvements. 
These combined adjustments will further reduce the Board’s projected savings by another 30 percent. Notwithstanding the problems with DSB’s estimates, DOD’s effort to reduce costs and achieve savings is extremely important, and we encourage DOD to move forward as quickly as possible to develop a realistic and achievable cost-reduction program. As discussed in our high-risk infrastructure report, breaking down cultural resistance to change, overcoming service parochialism, and setting forth a clear framework for a reduced defense infrastructure are key to effectively implementing savings.

To aid in achieving the most savings possible, we recommend that the Secretary of Defense require the development of a detailed implementation plan for improving the efficiency and effectiveness of DOD’s logistics infrastructure, including reengineering, consolidating, outsourcing logistics activities where appropriate, and reducing excess infrastructure. We recommend that the plan establish time frames for identifying and evaluating alternative support options and implementing the most cost-effective solutions and identify required resources, including personnel and funding, for accomplishing the cost-reduction initiatives. We also recommend that DOD present the plan to Congress in much the same way it presented its force structure reductions in the Base Force Plan and the bottom-up review. This would provide Congress a basis to oversee DOD’s plan and would allow the affected parties to see what is going to happen and when.

In commenting on a draft of this report (see app. II), DOD said that DSB had considered legal barriers to outsourcing and had expressly sought to identify the savings that could result if they were lifted. As noted in the report, we believe it is unlikely that the legal barriers cited would be lifted within the time frame DSB envisioned. DOD said that actions consistent with our recommendation were underway and there was no need for the recommended plan. 
Specifically, DOD said that the Secretary of Defense was preparing a more detailed plan for implementing the strategy formulated by QDR. Subsequently, on November 12, 1997, the Secretary of Defense announced the publication of the Defense Reform Initiative Report. This report contained the results of the task force on defense reform established as a result of QDR. The task force, which was charged with identifying ways to improve DOD’s organization and procedures, defined a series of initiatives in four major areas:
reengineering, by adopting modern business practices to achieve world-class standards of performance;
consolidating, by streamlining organizations to remove redundancy;
competing, by applying market mechanisms to improve quality, reduce costs, and respond to customer needs; and
eliminating infrastructure, by reducing excess support structure to free resources and focus on competencies.
This report is a step in the right direction and sets forth certain strategic goals and direction. However, the intent of our recommendation was that a detailed implementation plan be developed, and we have modified our final recommendations accordingly.

Our scope and methodology are provided in appendix I. We are sending copies of this report to interested congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Director of the Office of Management and Budget. Copies will be made available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report were James Wiggins, Julia Denman, Hilary Sullivan, and Jeffrey Knott. John Brosnan from our Office of General Counsel provided the legal review.

The scope of our review was limited to reviewing the Defense Science Board’s (DSB) projected $6 billion annual savings for continental United States (CONUS) logistics. 
To determine the basis of DSB’s savings estimate and recommendations, we reviewed the two DSB reports that made savings estimates based on outsourcing: Report of the Defense Science Board Task Force on Outsourcing and Privatization, August 28, 1996, and Report of the Defense Science Board 1996 Summer Study on Achieving an Innovative Support Structure for 21st Century Military Superiority: Higher Performance at Lower Costs, November 1996. We discussed the assumptions with task force members and reviewed supporting data that was available to us. We requested DSB task force minutes pertaining to these studies; however, we did not receive them in time to include them in our review. We reviewed the Center for Naval Analyses (CNA) studies of public-private competitions cited by DSB as well as CNA’s more recent studies and discussed those studies with CNA officials. A CNA official said that CNA analysts performed limited testing of the computer-generated data they had used in analyzing the results from the commercial activity competitions. He said that the data was reasonably accurate for the purposes of their studies. We did not independently verify the data used in CNA’s studies because we did not rely solely on CNA’s studies for our conclusions.

To further evaluate DSB’s savings estimates and recommendations, we (1) reviewed Program Analysis and Evaluation’s (PA&E) analysis and discussed that analysis and conclusions with PA&E officials and (2) reviewed the Logistics Management Institute’s (LMI) study, Using Technology to Reduce Cost of Ownership, Volume 1: Annotated Briefing (LG404RD4, April 1996), and discussed the study’s assumptions and conclusions with LMI officials. In addition, we reviewed our past reports and testimony on depot maintenance, public-private competitions, and infrastructure reductions. 
To determine other infrastructure savings opportunities for the Department of Defense (DOD), we relied on our past reports and testimony on commercial “best practices,” public-private competitions, and depot maintenance. In addition, we drew on ongoing work on outsourcing practices within the private sector. We performed our review at the following locations: Logistics Management Institute, Arlington, Va.; DOD’s Office of Maintenance Policy, Office of Program Analysis and Evaluation; and the Defense Science Board, Washington, D.C. We also had discussions with officials from the Center for Naval Analyses, Alexandria, Va. We conducted our review in July and August 1997, and, except where noted, in accordance with generally accepted government auditing standards.

Air Force Depot Maintenance: Information on the Cost Effectiveness of B-1B and B-52 Support Options (GAO/NSIAD-97-210BR, Sept. 12, 1997).
Navy Depot Maintenance: Privatizing the Louisville Operations in Place Is Not Cost Effective (GAO/NSIAD-97-52, July 31, 1997).
Defense Depot Maintenance: Challenges Facing DOD in Managing Working Capital Funds (GAO/T-NSIAD/AIMD-97-152, May 7, 1997).
Depot Maintenance: Uncertainties and Challenges DOD Faces in Restructuring Its Depot Maintenance Program (GAO/T-NSIAD-97-111, Mar. 18, 1997) and (GAO/T-NSIAD-97-112, Apr. 10, 1997).
Defense Outsourcing: Challenges Facing DOD as It Attempts to Save Billions in Infrastructure Costs (GAO/T-NSIAD-97-110, Mar. 12, 1997).
Navy Ordnance: Analysis of Business Area Price Increases and Financial Losses (GAO/AIMD/NSIAD-97-74, Mar. 14, 1997).
High-Risk Series: Defense Infrastructure (GAO/HR-97-7, Feb. 1997).
Air Force Depot Maintenance: Privatization-in-Place Plans Are Costly While Excess Capacity Exists (GAO/NSIAD-97-13, Dec. 31, 1996).
Army Depot Maintenance: Privatization Without Further Downsizing Increases Costly Excess Capacity (GAO/NSIAD-96-201, Sept. 18, 1996). 
Navy Depot Maintenance: Cost and Savings Issues Related to Privatizing-in-Place the Louisville, Kentucky, Depot (GAO/NSIAD-96-202, Sept. 18, 1996).
Defense Depot Maintenance: Commission on Roles and Mission’s Privatization Assumptions Are Questionable (GAO/NSIAD-96-161, July 15, 1996).
Defense Depot Maintenance: DOD’s Policy Report Leaves Future Role of Depot System Uncertain (GAO/NSIAD-96-165, May 21, 1996).
Defense Depot Maintenance: More Comprehensive and Consistent Workload Data Needed for Decisionmakers (GAO/NSIAD-96-166, May 21, 1996).
Defense Depot Maintenance: Privatization and the Debate Over the Public-Private Mix (GAO/T-NSIAD-96-146, Apr. 16, 1996) and (GAO/T-NSIAD-96-148, Apr. 17, 1996).
Military Bases: Closure and Realignment Savings Are Significant, but Not Easily Quantified (GAO/NSIAD-96-67, Apr. 8, 1996).
Depot Maintenance: Opportunities to Privatize Repair of Military Engines (GAO/NSIAD-96-33, Mar. 5, 1996).
Closing Maintenance Depots: Savings, Personnel, and Workload Redistribution Issues (GAO/NSIAD-96-29, Mar. 4, 1996).
Navy Maintenance: Assessment of the Public-Private Competition Program for Aviation Maintenance (GAO/NSIAD-96-30, Jan. 22, 1996).
Depot Maintenance: The Navy’s Decision to Stop F/A-18 Repairs at Ogden Air Logistics Center (GAO/NSIAD-96-31, Dec. 15, 1995).
Military Bases: Case Studies on Selected Bases Closed in 1988 and 1991 (GAO/NSIAD-95-139, Aug. 15, 1995).
Military Base Closure: Analysis of DOD’s Process and Recommendations for 1995 (GAO/T-NSIAD-95-132, Apr. 17, 1995).
Military Bases: Analysis of DOD’s 1995 Process and Recommendations for Closure and Realignment (GAO/NSIAD-95-133, Apr. 14, 1995).
Aerospace Guidance and Metrology Center: Cost Growth and Other Factors Affect Closure and Privatization (GAO/NSIAD-95-60, Dec. 9, 1994).
Navy Maintenance: Assessment of the Public and Private Shipyard Competition Program (GAO/NSIAD-94-184, May 25, 1994). 
Depot Maintenance: Issues in Allocating Workload Between the Public and Private Sectors (GAO/T-NSIAD-94-161, Apr. 12, 1994).
Depot Maintenance (GAO/NSIAD-93-292R, Sept. 30, 1993).
Depot Maintenance: Issues in Management and Restructuring to Support a Downsized Military (GAO/T-NSIAD-93-13, May 6, 1993).
Air Logistics Center Indicators (GAO/NSIAD-93-146R, Feb. 25, 1993).
Defense Force Management: Challenges Facing DOD as It Continues to Downsize Its Civilian Workforce (GAO/NSIAD-93-123, Feb. 12, 1993).
Navy Maintenance: Public-Private Competition for F-14 Aircraft Maintenance (GAO/NSIAD-92-143, May 20, 1992).
Pursuant to a congressional request, GAO reviewed the basis for the Defense Science Board's (DSB) estimate that the Department of Defense (DOD) could potentially save $6 billion annually by reducing its logistics infrastructure costs within the continental United States, focusing on: (1) the opportunities for logistics infrastructure savings; and (2) DOD's and GAO's analyses of the DSB's projected logistics infrastructure savings. GAO noted that: (1) GAO agrees with the DSB that DOD can reduce the costs of its logistics activities through outsourcing and other initiatives; (2) DOD has already achieved over $700 million in savings from the use of a prime vendor program and other inventory-related reduction efforts for defense medical supplies; (3) according to studies by the Center for Naval Analyses, competition for work, including competition between the public sector and the private sector--regardless of which one wins--can result in cost savings; (4) many private-sector firms have successfully used outsourcing to reduce their costs of operations; (5) the DOD Program Analysis and Evaluation (PA&E) directorate's analysis shows, however, that the DSB's estimated annual savings of $6 billion is overstated by about $4 billion because of errors in estimates, overly optimistic savings assumptions, and legal and cultural impediments; (6) according to PA&E's analysis, this $4 billion includes: (a) $1 billion in overstated contract administration and oversight savings and one-time inventory savings; and (b) $3 billion in savings that would be unlikely or would be difficult to achieve within the Board's 6-year time frame, given certain legislative requirements and DOD's resistance to outsourcing all logistics functions; (7) GAO's analysis confirmed PA&E's conclusion that the Board's estimated savings were overstated; (8) GAO's analysis also raised questions about the Board's projected savings, but GAO does not know by how much or whether these questions would change the $2 
billion in savings that PA&E concluded were achievable; (9) GAO questioned whether DOD would achieve a 25-percent savings from outsourcing, as the Board assumed, because the savings were based primarily on studies of public-private competitions in highly competitive private-sector markets; (10) however, competitive markets may not exist in some areas; (11) notwithstanding GAO's concerns about the magnitude of savings, DOD can make significant reductions in logistics costs; (12) the Secretary of Defense recently issued a strategic plan for achieving such reductions; (13) this report is a step in the right direction; and (14) DOD now needs an implementation plan based on a realistic assessment of the savings potential of various cost-reduction alternatives and the time frames for accomplishing various activities required to identify and implement the most cost-effective solutions.
As shown in figure 1, resilience is a concept that has gained increasing attention for its potential to decrease disaster losses. FEMA, a component of the Department of Homeland Security (DHS), leads the federal effort to mitigate, respond to, and recover from disasters, both natural and man-made. Major disaster declarations can trigger a variety of federal response and recovery programs for government and nongovernmental entities, households, and individuals, including hazard mitigation programs intended to increase the nation’s disaster resilience. However, multiple federal agencies can play a role in rebuilding after a major disaster. For example, 19 agencies were appropriated funds for more than 60 programs for Hurricane Sandy recovery in the Sandy Supplemental, some of which provide opportunities to incorporate hazard mitigation and other disaster resilience-building activities into disaster recovery efforts. These programs include (1) the FEMA Hazard Mitigation Grant Program (HMGP), (2) FEMA Public Assistance (PA), (3) the HUD Community Development Block Grant-Disaster Recovery (CDBG-DR) program, (4) the Department of Transportation’s Federal Transit Administration (FTA) Emergency Relief Program (ERP), and (5) USACE’s Sandy Program. See table 1 for a description of these key programs and how they help to support disaster resilience-building efforts. Because FEMA is the lead federal agency for emergency management, FEMA’s national-level strategies for recovery and hazard mitigation also highlight the importance of incorporating hazard mitigation and other disaster resilience activities into the recovery process. 
FEMA’s September 2011 NDRF recognizes resilient rebuilding as one of the keys to recovery success, stating that recovery is an opportunity for communities to rebuild in a manner that reduces or eliminates risk from future disasters. Similarly, FEMA’s NMF states that linking recovery and hazard mitigation breaks the cycle of damage-repair-damage that results from rebuilding without hazard mitigation measures following disasters. The NMF, issued in May 2013, addresses, in part, how the nation will develop, employ, and coordinate core hazard mitigation capabilities to reduce loss of life and property by lessening the impact of disasters. The NMF explains that building widespread disaster resilience throughout communities is a national priority and is a responsibility that is shared by individuals; businesses; non-profit organizations; and federal, state, local, tribal, and territorial governments. The NMF also established MitFLG to help coordinate the hazard mitigation efforts of relevant local, state, tribal, and federal organizations. MitFLG is an intergovernmental coordinating body that was created to integrate federal efforts and promote a national culture shift that incorporates risk management and hazard mitigation in all planning, decision making, and development to the extent practicable.

Although federal agencies play a critical role in promoting disaster resilience through the use of federal resources, a large part of disaster resilience-building efforts and decision-making also occurs at the state and local level. State and local laws and regulations can heavily influence disaster resilience efforts, for example, by strengthening building codes. In addition, state emergency management officials, such as State Hazard Mitigation Officers, play an important role by coordinating with local communities to enhance disaster resilience. 
States and localities have used funds appropriated to federal agencies by the Sandy Supplemental to plan and implement a variety of hazard mitigation activities, including but not limited to the following types of projects:
acquiring and demolishing properties at risk for repeated flooding,
elevating flood-prone structures,
erecting physical flood barriers such as seawalls and berms to protect against coastal flooding,
restoring or enhancing storm water management measures,
restoring wetlands and coastal areas to control erosion, and
protecting critical facilities against power loss.

Four federal agencies—DHS’s FEMA, HUD, DOT’s FTA, and USACE—administer five programs that funded the majority of these disaster resilience-building measures during the Hurricane Sandy recovery effort. These five programs are FEMA’s Hazard Mitigation Grant Program, FEMA’s Public Assistance, HUD’s Community Development Block Grant-Disaster Recovery, FTA’s Public Transportation Emergency Relief Program, and USACE’s Sandy Program.

Designed specifically to ensure opportunities to reduce the risk of loss of life and property from future disasters are not lost during the reconstruction process, HMGP can fund a variety of long-term solutions, including but not limited to acquisition and demolition; elevation; and retrofitting to minimize damages from high winds, earthquake, flood, wildfire, or other natural hazards. FEMA requires that HMGP projects (1) advance the state's Hazard Mitigation Plan; (2) meet environmental and historical preservation requirements; and (3) be cost-effective, as determined by FEMA’s Benefit-Cost Analysis Tool or other FEMA-approved methodologies. In addition, HMGP projects must contribute to a long-term solution, meaning that temporary measures—such as sandbagging to protect against flooding—are not eligible. 
FEMA awards HMGP funds after a major disaster has been declared, and the total available amount for any given disaster depends on the sum of other FEMA disaster grants—generally it is 15 percent of the first $2 billion but may be higher under specific circumstances. FEMA notifies the states of how much funding they are eligible to receive, and the states, working with FEMA, then decide how to award the funds to localities and other applicants. Recipients of HMGP funds are usually responsible for 25 percent of the total project cost. HMGP funds may be used statewide—that is, they are not required to be used only in parts of the state that sustained disaster damage—as long as the state and local recipient of funds has a FEMA-approved hazard mitigation plan in place. As of May 2015, FEMA had awarded over $1.7 billion in HMGP funds from the Sandy Supplemental for damage from Hurricane Sandy.

State officials we interviewed in all 13 Sandy-affected states reported using the HMGP funds they received as a result of Hurricane Sandy to enhance disaster resilience. These funds are being used for acquisition and demolition, home elevations, and the purchase of generators to protect critical facilities from future power loss, among other purposes. Figure 2 provides an example of how one state in the Sandy-affected area used HMGP funds to elevate homes.

The Public Assistance program provides grants to states, local governments, federally recognized Indian tribes, and certain private non-profit entities to assist them with the response to and recovery from disasters. Specifically, the program provides assistance for debris removal, emergency protective measures, and permanent restoration of infrastructure, including funding hazard mitigation measures to reduce future risks in conjunction with the repair of disaster-damaged facilities (under Stafford Act section 406) if cost-effectiveness can be demonstrated. 
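The HMGP funding mechanics described above can be sketched in a few lines of illustrative arithmetic. This is a sketch based only on the percentages stated in the report; the "specific circumstances" that raise the percentage, and any treatment of grant totals above $2 billion, are deliberately not modeled, and the function names are our own.

```python
def hmgp_ceiling(total_fema_grants):
    """Generally 15 percent of the first $2 billion of other FEMA
    disaster grants for the declaration; higher amounts available
    under specific circumstances are not modeled here."""
    return 0.15 * min(total_fema_grants, 2_000_000_000)

def hmgp_federal_share(project_cost, recipient_rate=0.25):
    """Recipients are usually responsible for 25 percent of the
    total project cost; the remainder is the federal share."""
    return project_cost * (1 - recipient_rate)
```

Under these assumptions, a disaster generating $1 billion in other FEMA grants would yield an HMGP ceiling of roughly $150 million, and a $1 million home-elevation project would carry a $750,000 federal share.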
The federal share of assistance is not less than 75 percent of the eligible cost for debris removal, emergency protective measures, and permanent restoration. The state grantee determines how the non-federal share (up to 25 percent) is split between the state and eligible applicants. There is no pre-set limit to the amount of Public Assistance funds a community may receive; however, Public Assistance hazard mitigation measures must be determined to be cost-effective. In addition, Public Assistance may fund measures that are not classified as “hazard mitigation measures” but nevertheless serve to prevent or reduce future damage. For example, in one state, Public Assistance was used to replace boat docks that had been damaged by Sandy with floating docks instead of the stationary docks that had previously been in place. Although this activity was not classified as hazard mitigation under Public Assistance guidelines, the state official expected the floating docks to be more resilient than stationary docks during future disasters. Sometimes, a combination of Public Assistance and HMGP funding may be appropriate. That is, Public Assistance hazard mitigation funding may be used to enhance the resilience of the parts of a facility that were damaged, and HMGP funding may be used to provide future protection to the undamaged parts of the facility.

States can also receive funds through the Public Assistance Alternative Procedures program, under the authority of Stafford Act section 428, which provides flexibility and financial incentives, some of which can be used to enhance disaster resilience. For example, applicants using the Alternative Procedures program may choose to combine multiple critical facilities of a state, tribal, or local government that were damaged by a disaster and rebuild them in a manner that makes them less likely to incur future disaster damages. 
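The Public Assistance cost-sharing arithmetic above can be illustrated with a small sketch. The 75/25 split is from the report; the helper name and the assumption that the non-federal share is exactly 25 percent (it can be less) are ours.

```python
def pa_shares(eligible_cost, federal_rate=0.75):
    """Federal share is not less than 75 percent of eligible cost;
    the state grantee decides how the non-federal share (up to
    25 percent) is divided between the state and applicants.
    A flat 75/25 split is assumed here for illustration."""
    federal = federal_rate * eligible_cost
    nonfederal = eligible_cost - federal
    return federal, nonfederal

fed, nonfed = pa_shares(2_000_000)  # a hypothetical $2M restoration project
```

For the hypothetical $2 million project, this yields a $1.5 million federal share and a $500,000 non-federal share to be split between the state and the applicant.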
For example, a community that had a fire and police station destroyed could combine the facilities, rebuild them in an area less prone to be affected by a future disaster, and enhance the construction of the facility to meet up-to-date building codes. As of March 2015, FEMA had awarded over $1.8 billion in total Public Assistance permanent work funds to the 13 Sandy-affected states, of which $400.6 million (22 percent) was approved to fund hazard mitigation activities. FEMA data show that 11 of the 13 Sandy-affected states used PA funding for mitigation. Figure 3 provides an example of how Public Assistance funded hazard mitigation measures for a critical facility that was damaged by Sandy-related flooding.

Congress appropriated $16.0 billion in the Sandy Supplemental to HUD’s Community Development Fund for disaster relief, long-term recovery, restoration of infrastructure and housing, and economic revitalization. This program is designed to address needs not met by other disaster recovery programs—including but not limited to disaster resilience initiatives—particularly for low- and moderate-income persons. The Sandy Supplemental directed these funds to be available for areas most impacted and distressed as a result of Presidentially declared major disasters from 2011 to 2013. HUD allocated $930 million of the Sandy Supplemental appropriation to fund resilient recovery projects that resulted from Rebuild by Design—a competition sponsored under the authority of the America COMPETES Reauthorization Act of 2010—to promote innovative disaster resilience solutions in the Sandy-affected area that are compatible with local circumstances and then to fund selected solutions. HUD awarded funds ranging from $10 million to $335 million to four jurisdictions in the Sandy-affected area that also received other CDBG-DR funding. 
Building on lessons learned from Rebuild by Design, HUD later announced that it would use $1 billion in CDBG-DR to fund a nationwide competition—the National Disaster Resilience Competition—with the aim of helping communities inside and outside the Sandy-affected area explore how they can recover from a past disaster and avoid future disaster losses. Applicants were required to link their proposals to the disaster from which they are recovering while demonstrating how they will reduce future risks and advance broader community development goals. (The competition authority derives from Pub. L. No. 111-358, § 105, 124 Stat. 3989 (2011), codified at 15 U.S.C. § 3719.) HUD also required structures rebuilt with CDBG-DR funds to be at least 1 foot above the base flood elevation level.

According to HUD officials, such efforts are classified according to the type of activity performed, and the disaster resilience measures within each activity are not isolated and therefore cannot be tracked separately. As a result, HUD cannot break down exactly how much of the remaining Sandy Supplemental appropriation has been or will be used to help enhance disaster resilience, although some of the appropriation is being used for those purposes.

New York City and the states of New York, New Jersey, Connecticut, Rhode Island, and Maryland, among others, received a CDBG-DR allocation that was not part of one of the resilience competitions. State officials we interviewed who reported receiving CDBG-DR funds said the funds served as a complement to FEMA-funded mitigation activities in two ways. First, CDBG-DR was used, in some cases, to cover all or part of the applicant’s share of HMGP and Public Assistance project costs if the project was determined to be CDBG eligible. Second, CDBG-DR funded some of the same types of mitigation activities that HMGP typically funds, such as acquisition and elevation of properties in high-risk areas, thereby increasing the number or scope of these projects states were able to offer. 
Figure 4 provides an example of an acquisition and demolition project undertaken by one Sandy-affected locality using 100 percent CDBG-DR funds to cover project costs.

FTA received $10.9 billion under the Sandy Supplemental appropriation for the new Public Transportation ERP, which funds transit authority recovery, relief, and resilience projects and activities in areas affected by Hurricane Sandy. ERP is intended to provide operating assistance and capital funding to aid recipients and subrecipients in restoring public transportation service, and in repairing and reconstructing public transportation assets as expeditiously as possible following an emergency or major disaster that affects a wide area. Eligible projects include emergency operations; emergency repairs; permanent repairs; actual engineering and construction costs on eligible projects; and resilience projects designed to protect equipment, facilities, and infrastructure from future damage. An initial FTA damage assessment in February 2013 estimated the costs of repairing facilities damaged by Hurricane Sandy in New York and New Jersey to be about $5.8 billion. As of May 2015, FTA had allocated $9.3 billion for recovery and resilience projects to public transportation agencies affected by Hurricane Sandy. According to FTA officials, the agency has obligated about $4.2 billion and disbursed about $938 million to reimburse transit agencies for emergency response, recovery, repair, and resilience costs. Generally, the federal cost share for FTA ERP projects is not to be more than 80 percent of the total project cost, and the federal cost share for competitive resilience projects is 75 percent of the total project cost. The allocation included funds for response and recovery expenses and approximately $4.9 billion for resilience projects, most of which were selected through a competitive grant process. Figure 5 provides an example of how ERP's competitive resilience awards are expected to protect New York City transit from future damage.
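The ERP cost-share rules above amount to simple arithmetic. The sketch below is illustrative only: the function name is ours, and it assumes the 80 percent statutory cap is applied in full for ordinary projects.

```python
# Illustrative sketch of the ERP federal cost-share split described above.
# Assumption: the 80 percent cap is applied as the rate for ordinary
# projects; competitive resilience projects use the 75 percent share.

def erp_federal_share(total_cost: int, competitive_resilience: bool) -> int:
    """Return the federal share of an ERP project's cost, in dollars."""
    rate_pct = 75 if competitive_resilience else 80
    return total_cost * rate_pct // 100

# A $100 million permanent-repair project: up to $80 million federal,
# leaving a $20 million local share.
print(erp_federal_share(100_000_000, competitive_resilience=False))  # 80000000
```

Integer arithmetic is used so the split is exact at any project size.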
Funds are awarded to eligible agencies based on the demonstrated costs of responding to and recovering from an emergency or major disaster. Funds are also awarded to affected agencies for projects that improve the resilience of public transportation assets and infrastructure to future emergencies or disasters.

Language in the Sandy Supplemental charged USACE with reducing future flood risk in ways that will support the long-term sustainability of the coastal ecosystem and communities and reduce the economic costs associated with large-scale flooding. It also mandated the North Atlantic Coast Comprehensive Study, which has the following goals: (1) reduce flood risk to vulnerable coastal populations, and (2) promote coastal resilient communities to ensure a sustainable and robust coastal landscape system. USACE released the North Atlantic Coast Comprehensive Study, covering more than 31,000 miles of coastline, in January 2015. More than 8 million people live in areas at risk of coastal flooding, and along the U.S. Atlantic coast alone, almost 60 percent of the land that is within a meter of sea level is planned for further development. According to USACE officials, the comprehensive nature of the study represents a significant improvement in planning to manage coastal flood risk. In addition to the study, USACE has dedicated funding and undertaken a number of coastal risk reduction projects and studies in five Sandy-affected states: Delaware, New Jersey, New York, Rhode Island, and Virginia. Figure 6 provides an example of one such study and associated projects designed to increase disaster resilience of communities at risk for flooding along one coastal system.

The 16 groups of state and city officials from the Sandy-affected area that we interviewed reported successes in leveraging federal Sandy recovery efforts to enhance disaster resilience.
However, in the interviews and in 13 follow-up survey responses, officials also reported a series of challenges that hindered their ability to maximize federal funds in the wake of recent disasters. Challenges generally fell into three categories: (1) specific challenges with postdisaster programs whose implementation was not always consistent with agency disaster resilience priorities, (2) challenges from the broader structure of disaster resilience funding that limited a comprehensive approach to reducing overall risk, and (3) local challenges that are not directly in the federal purview but may be exacerbated by other challenges that are. Thirteen of the 16 groups of state and city officials we interviewed said that they were able to effectively or very effectively use the post-Sandy effort as an opportunity to make communities more resilient against future disasters. Officials cited residential acquisitions or elevations, the purchase of generators to ensure the continuity of operations of critical facilities, increased local hazard mitigation planning, and other projects such as those discussed previously as key efforts supported by federal funding in the Sandy recovery. In addition, officials from 12 of 16 states and cities said that their leaders value efforts to enhance community resilience to a great extent, as demonstrated by actions such as making state funding available for mitigation, legislative efforts to strengthen building codes, or the establishment of state offices to focus on disaster resilience efforts. For example, New York City strengthened building codes to account for long-term sea level rise, and the Governor of Maryland issued an executive order that new construction and improvements of state structures must consider potential impacts of climate change.
In addition, states that received a presidential major disaster declaration as a result of Hurricane Sandy collectively used more than 20 percent of the FEMA Public Assistance funds they received for disaster recovery and repair to implement hazard mitigation measures. One of the goals of the NDRF is to integrate hazard mitigation and risk reduction opportunities into all major decisions and reinvestments during the recovery process. Similarly, the National Mitigation Framework calls for governments at all levels to capitalize on opportunities during the recovery building process to further reduce vulnerability. However, state and city officials we interviewed and surveyed reported experiencing or perceiving several conditions that limited achievement of that goal with FEMA's PA and HMGP programs, including (1) the complexities of the hazard mitigation planning process, (2) FEMA PA and HMGP staff turnover, (3) limitations on eligibility, and (4) lack of FEMA officials' support for PA-funded hazard mitigation during project formulation. FEMA has a stated goal of integrating hazard mitigation into the recovery process to capitalize on opportunities to reduce future risk, but state officials' experiences with recovery efforts from Hurricane Sandy and other disasters in the 2011-2013 timeframe suggest that implementation of these programs has not always been consistent with that goal. As shown in figure 7, 8 of 13 states and cities responding to our follow-up survey reported that the complexity of FEMA's review process for hazard mitigation plans limited their ability to maximize disaster resilience as part of the recovery. In interviews, officials said FEMA's focus on detailed, nationally standardized requirements during the review process for hazard mitigation plans often overshadowed the substance of the plans or the plans' capacity to meet local needs.
FEMA requires that, to be eligible for HMGP funding, both the state and the local jurisdiction have a hazard mitigation plan that has been reviewed and approved by the agency. To be approved, a plan must document the planning process—including how it was prepared, who was involved, and how public comments were integrated—and must also include a comprehensive range of mitigation actions to address each hazard identified by the plan's risk assessment, among other requirements. However, in 1 state, officials said that two of their localities had decided not to pursue FEMA-approved hazard mitigation plans—forgoing eligibility for HMGP as a result—because of the agency's requirements and review comments for the plans. For example, the officials stated that among the reasons FEMA returned the two plans were that they did not contain a definition of "hurricane," did not follow the agency's formatting guidelines, or included hazard mitigation activities that were not eligible for FEMA funding but may have been of benefit to the local jurisdictions. A senior FEMA official told us that the agency requires the plans to be detailed so that they result in preidentified hazard mitigation projects that can be implemented quickly after a disaster.

Ten of the 13 states and cities responding to our follow-up survey reported that turnover among either PA or HMGP staff at joint field offices or recovery offices was a challenge that limited their ability to maximize disaster resilience as part of the recovery, as illustrated by figure 8. One state official said that the high rate of turnover at FEMA causes discontinuity in grant staff expertise, which hinders the state's ability to efficiently submit applications. Another said that the rate of turnover among project specialists for FEMA's Public Assistance program was a frequent complaint of applicants and led to inconsistent guidance and repetition of project formulation processes.
For example, 1 state reported that what had been acceptable to one FEMA reviewer may not be to the standards of the reviewer's successor, requiring the state to go back to square one and revisit everything that was previously agreed upon. The state reported that changes in FEMA personnel resulted in the need to "retread ground long-since covered" and resulted in inconsistent guidance from FEMA personnel. FEMA officials acknowledged that staff turnover in joint field offices and recovery offices has been a long-standing challenge.

State and city officials in the Sandy-affected region that we interviewed and surveyed reported that they were not always able to capitalize on federal recovery assistance to strengthen resilience because the kinds of projects they thought would be most useful were not eligible. Figure 9 shows that 7 of 13 states and cities responding to our follow-up survey reported that the type of projects eligible for HMGP limited their ability to maximize resilience during the recovery effort, and 8 of 13 said the same for PA. Concerns about eligibility generally centered on unique attributes of a locality or region that did not align with HMGP regulations and guidance designed for broad national characteristics. For example, one hazard mitigation official suggested that more needed to be done to address the needs of dense urban areas, including hazard mitigation projects suited to those environments. In particular, elevation is not feasible for historic row homes in these urban areas, and the property values in some areas may make it difficult to demonstrate the cost-effectiveness of acquiring homes in flood-prone areas. Although law, regulation, and grant guidance prescribe the types of projects eligible for these programs, there is sometimes flexibility in interpreting those criteria to be responsive to state and local needs.
State Hazard Mitigation Officers we interviewed reported that they frequently communicate with their counterparts in other states to share information and ideas, and discovered inconsistent application of flexibility in making eligibility determinations across states and regions. As shown in figure 10, 7 of 13 states and cities responding to our survey said that FEMA officials in their region had not applied discretion, under the current regulations and guidance, in a way that maximizes hazard mitigation opportunities under the HMGP program. For example, in interviews officials from 1 state told us that their regional FEMA officials determined HMGP could be used only to elevate utilities, such as water heaters, to the first floor: because the required elevation was base flood elevation plus 1 foot, elevating utilities to any higher floor would exceed that height and therefore was not considered a "reasonable cost." According to state officials, for practical and aesthetic reasons, homeowners declined to participate unless they could elevate utilities to the attic level. When we described this scenario to a senior official from the FEMA Mitigation Directorate, the official said that elevation of utilities to an attic generally could be determined eligible, and the small additional cost in that situation should not be a barrier to homeowners mitigating their risk from future disasters. The official later followed up and found that FEMA employees in that region had misunderstood FEMA's authority to allow the additional cost for utilities.

Another factor states reported affecting HMGP and PA eligibility is the benefit-cost calculation. For both HMGP and PA, FEMA typically requires a benefit-cost analysis to compare the cost of a hazard mitigation project with its future benefits.
Eight of 13 states and cities responding to our follow-up survey reported that they experienced challenges with the consideration of appropriate benefits (e.g., environmental) in at least one of these programs, as shown in figure 11. For example, acquisition projects, which result in open space, can enhance environmental quality in a community. According to a senior official in FEMA's Mitigation Directorate, the goal of HMGP is to protect against damages from future severe weather, and it would not be appropriate to consider benefits that do not relate directly to that purpose. However, FEMA has already taken some action that may help to address this problem. FEMA issued a policy in 2013 describing additional environmental benefits that could be considered for property acquisitions. In addition, on the basis of an analysis by the agency's Risk Reduction Division, FEMA issued guidance in 2013 that the acquisition or elevation of structures located within the 100-year floodplain will be considered cost-effective as long as the total project costs are under $276,000 for acquisitions and $175,000 for elevations. For projects meeting these guidelines, applicants are not required to submit a benefit-cost analysis. During our interviews, officials from multiple states praised this practice, in part because it reduced the burden for them and their local partners and in part because it recognized the overall benefit of elevation, even when some elevation projects may not have been determined to be cost-effective under the previous guidelines because of variations in construction costs across regions. Multiple state officials told us during interviews that they appreciated FEMA's decision to establish a standard benefit-cost analysis threshold for these hazard mitigation activities.
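The 2013 pre-calculated guidance described above is effectively a threshold test. The sketch below illustrates only that logic; the names and structure are ours, not FEMA's actual benefit-cost tooling.

```python
# Simplified pre-screen based on the 2013 guidance described above:
# acquisitions under $276,000 and elevations under $175,000 for structures
# in the 100-year floodplain are presumed cost-effective without a full
# benefit-cost analysis; anything else still requires one.

COST_EFFECTIVE_THRESHOLDS = {"acquisition": 276_000, "elevation": 175_000}

def needs_full_bca(activity: str, total_cost: int, in_100yr_floodplain: bool) -> bool:
    """Return True if a full benefit-cost analysis is still required."""
    threshold = COST_EFFECTIVE_THRESHOLDS.get(activity)
    if threshold is None or not in_100yr_floodplain:
        return True  # not covered by the pre-calculated guidance
    return total_cost >= threshold

# A $150,000 elevation in the floodplain is presumed cost-effective.
print(needs_full_bca("elevation", 150_000, in_100yr_floodplain=True))  # False
```

A $300,000 acquisition, by contrast, exceeds its threshold and would still need a project-specific analysis.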
On the other hand, some state officials cautioned that by making more projects eligible without a benefit-cost analysis requirement, predetermined benefits can put more strain on limited resources and make the prioritization process more difficult for state decision-makers.

Identifying PA Hazard Mitigation Projects

Generally, when applicants wish to be considered for Public Assistance funding, they work directly with FEMA Project Specialists. Project Specialists are primarily responsible for collecting information about potential projects and for assessing and determining project eligibility. The formal documentation of this process is called a project worksheet. Project worksheet guidance states that project specialists should complete a hazard mitigation proposal if the applicant requests hazard mitigation measures, and in such cases, a justification for the proposed mitigation measures must be provided. Six of the 13 states and cities responding to our follow-up survey reported that Project Specialists had avoided writing or had inadequately prepared hazard mitigation proposals prior to Hurricane Sandy. In addition, a seventh respondent reported that the state did not experience Project Specialists actively discouraging hazard mitigation, but that their failure to encourage it when working with localities to identify projects had been a challenge. FEMA's Project Worksheet Development Guide, the internal guidance for completing project worksheets, directs project specialists to ask applicants if they would like to pursue hazard mitigation activities but does not direct the employees to actively identify opportunities for hazard mitigation during the process of scoping a project. Rather, the guidance categorizes hazard mitigation within a class of "special considerations," which can result in additional processes or layers of review.
Although the guidance notes that hazard mitigation is a priority for FEMA and suggests that project specialists should sometimes consider providing an explanation when it is not implemented, the agency's position that hazard mitigation is a priority is not fully emphasized throughout the guidance.

Federal Emergency Management Agency's (FEMA) Hazard Mitigation Assistance

FEMA has multiple programs to help states and localities enhance disaster resilience. The primary programs for mitigation against all hazards are the Hazard Mitigation Grant Program (HMGP) and the Pre-Disaster Mitigation Program (PDM). In addition, the Flood Mitigation Assistance Program is available through the National Flood Insurance Fund for flood hazard mitigation projects. Generally, PDM receives annual appropriations that are small compared with the amount of funding typically provided following a disaster. In fiscal year 2014, for example, the total appropriation for the entire nation was $25 million. Each of the 50 states, the District of Columbia, and the U.S. territories automatically receives a minimum PDM allocation of 1 percent of appropriated funds or $575,000, whichever is less. Each of these entities is eligible for additional PDM awards on a competitive project basis, regardless of disaster history. HMGP, by contrast, is funded through FEMA's Disaster Relief Fund and is available only in the wake of a presidentially-declared disaster—generally 15 percent of the first $2 billion in disaster assistance awarded in association with a declared disaster. As a result, nationwide annual HMGP spending tends to be significantly higher than annual PDM allocations, but the size of any individual award will vary. In addition, in any given year, some states and other nonfederal governments may not be eligible for HMGP.

This priority, however, is not documented in FEMA's current internal guidance for completing project worksheets.
In addition, FEMA officials told us that making adjustments to project specialists' roles in identifying hazard mitigation opportunities during the project worksheet formulation process could help to further integrate disaster resilience into the recovery process. Given the challenges state and local officials experienced during the Hurricane Sandy recovery, evaluating the extent to which corrective actions are needed to help ensure FEMA consistently reinforces its resilience goals in the NDRF and NMF could better position FEMA to assist states and localities in maximizing opportunities to enhance disaster resilience. According to FEMA officials, the agency has launched a reengineering initiative to develop a new operating model for PA that is intended to enable greater efficiency and improve the delivery of disaster assistance. FEMA plans to test the new model during 2015 and then begin full implementation during 2016 or 2017, depending on how many disasters occur during that time. In addition, according to officials, FEMA is exploring the effect of and potential solutions to staff turnover as part of this effort. Whether as part of the PA review or outside of it, identifying corrective actions that respond to the experiences of state officials with responsibility for resilience during recovery from the most recent multibillion-dollar disaster could enhance FEMA's ability to meet the goal of integrating hazard mitigation into the recovery process. For example, FEMA could enhance communication, guidance, training, or documentation of decision making to address issues arising from staff turnover, promote maximum flexibility within the law to meet local needs, and ensure that PA project specialists appropriately identify hazard mitigation opportunities.
FEMA officials acknowledged the importance of reviewing the challenges identified by state and local officials and told us that they appreciated us bringing these challenges—which they have not necessarily already planned to address in their review—to their attention.

The bulk of federal disaster resilience funding, such as PA and HMGP, that is provided to states and localities comes after they have experienced a disaster, particularly a large or catastrophic disaster. Although there are advantages to focusing on disaster resilience in the postdisaster environment, our interviews and follow-up surveys revealed that the emphasis on spending in the postdisaster environment and the inherent fragmentation of federal funds and programs in the post-catastrophe environment limited states' ability to plan and prioritize for maximum risk reduction. Except when supplemental funding is approved following a catastrophic disaster, PDM and HMGP are the primary federal programs that provide funding to states and localities to help enhance their disaster resilience—PDM for pre-disaster mitigation and HMGP in the postdisaster environment. As demonstrated in figure 13, PDM spending has historically been a fraction of HMGP spending. In addition, PDM grants limit states to a certain number of applications per year—for instance, in fiscal year 2014, states could submit a maximum of 11 applications, of which only 2 could be for projects, as opposed to hazard mitigation planning or management costs—which, according to officials, limits the states' capacity to implement "brick and mortar" hazard mitigation projects with the pre-disaster grant funds. As demonstrated by the Sandy Supplemental and the associated recovery effort, in the wake of a catastrophic disaster, affected areas can receive substantial sums to enhance disaster resilience. For example, HMGP provided approximately $1.7 billion to New York and New Jersey following Hurricane Sandy.
In addition, other programs like CDBG-DR and FTA’s ERP provided billions of disaster resilience dollars that are not available on an annual basis. There are advantages to making funds available in the postdisaster environment. For example, the recent and tangible experience with the disaster can help motivate individuals and communities to focus on mitigating their risk, because they do not want to relive the losses they have just experienced or to incur losses they observe in their neighbors’ experience. State officials we interviewed confirmed that, in their experience, local applicants were more likely to invest their own resources in hazard mitigation activities following a disaster. In some states, state officials reported Hurricane Sandy was a catalyst to strengthen the state’s culture of resilience. Officials in the most severely affected states—for example, New York and New Jersey—told us that disaster resilience is now a point of discussion across sectors and throughout communities that had not previously pursued hazard mitigation. Of the six officials who said their state’s culture of resilience had not changed as a result of Sandy, three attributed the lack of change to the less severe impact experienced by their states, relative to the impact on other states that had been affected, and one official reported that seeing the damage sustained in New York and New Jersey made citizens and leaders more aware of their own risk. Another official stated that the culture of resilience in his state had already changed prior to Sandy, following two disasters that significantly affected business and employment interests. Although 9 of 16 groups of state officials we interviewed said that disaster resilience and hazard mitigation activities should be integrated into recovery efforts within the first hours and days after a disaster occurs, 3 said that actually happened in the Hurricane Sandy recovery. 
Although all of the officials said that disaster resilience and hazard mitigation activities should be incorporated within the initial hours to initial weeks, 5 of the groups we interviewed said that it was months to years before that happened in the wake of Hurricane Sandy because, in part, they were focused on more immediate recovery concerns such as restoring power. In some states, officials with primary responsibility for hazard mitigation noted that they wore other hats in the emergency operations center in the initial hours to days and were too focused on response functions to think about hazard mitigation. Of the 7 that said hazard mitigation should be integrated in days to weeks (rather than hours to days), some also expressed skepticism about the feasibility of focusing on future disaster resilience activities while life-saving activities were in progress. For example, one State Hazard Mitigation Officer told us that it is more important to restore power to the affected area quickly than it is to ensure that the power grid is repaired in a manner that mitigates future disaster risk. During our discussions about how soon after a disaster hazard mitigation and disaster resilience planning should be integrated into recovery, officials noted that a more effective approach to disaster resilience would be to plan and implement hazard mitigation before a disaster occurs. In this regard, 12 of 13 states and cities responding to our survey reported that the emphasis of federal resources on the postdisaster environment challenged their ability to maximize federal disaster resilience investments, as illustrated in figure 14. A related challenge that state and city officials we interviewed discussed stems from the general structure of HMGP funding.
Although, outside the recent response to Hurricane Sandy, HMGP is generally the primary vehicle through which the federal government has invested in disaster resilience, state and local officials noted that the (1) amounts, (2) political context, and (3) timing and uncertainty associated with the program can lead to a less coherent approach to reducing overall risk. In terms of amounts, a senior FEMA official said that when HMGP is awarded for most disasters, the total award is generally not enough to address larger critical infrastructure needs, and as a result, states and localities tend to focus on smaller projects to the exclusion of those that have more potential to reduce their most critical risks. One state official said that localities tend to avoid including those larger needs in hazard mitigation planning, because they did not even think it was feasible to consider addressing them. State officials also described a delicate political environment in the wake of disasters where decisions about what hazard mitigation projects to fund can be challenging. A senior MitFLG official stated that political pressure can often dictate how and where states and localities spend resilience funding in the wake of a disaster. For example, elected officials can direct the use of disaster resilience funding to one or a few large-scale infrastructure projects or spread the funds throughout the state for numerous small projects across multiple communities. The official said it has been his experience that state officials often choose to distribute the funds throughout multiple communities in a way that makes a positive impact on individuals and sometimes communities, but not in a way that necessarily changes the overall risk profile of the state.
Officials in one state we visited described an example where they had initially planned to use HMGP funds for flood control measures in the economic center of a small town that regularly floods—a project they had determined was the best path to enhancing the state's overall disaster resilience. However, in the wake of the disaster, political considerations trumped their experience and professional judgment, and the funds were used instead to elevate beachfront properties.

Estimating Hazard Mitigation Grant Program Awards

The Federal Emergency Management Agency's (FEMA) Office of the Chief Financial Officer (OCFO) calculates Hazard Mitigation Grant Program (HMGP) allocations using estimates from FEMA program managers, which reflect expended and projected costs. Pursuant to FEMA's Hazard Mitigation Assistance Guidance, the OCFO is to provide preliminary estimates to states at the 35-day and 6-month marks post-declaration. At 12 months post-declaration, the OCFO is to provide recipients with a "lock-in" figure, which is the maximum that FEMA can obligate to the state for eligible HMGP activities.

Finally, in terms of timing and uncertainty, officials reported being challenged by the manner in which FEMA estimates and finalizes HMGP awards. Because HMGP, by statute, is awarded as a portion of all other FEMA disaster assistance awarded in association with a given disaster, there is lag and uncertainty in the process of estimating and finalizing awards. States and localities typically receive the final estimate for HMGP awards 12 months after a major disaster declaration, which coincides with the deadline for states to submit their HMGP project applications. Moreover, in the wake of catastrophic disasters like Hurricane Sandy, FEMA's Office of the Chief Financial Officer is not always able to provide "lock-in" figures at the 12-month mark.
FEMA officials stated that in catastrophic disasters, such as Hurricane Sandy, a prolonged focus on projects such as debris removal, emergency protective measures, and providing survivor assistance can delay their capacity to provide estimates of the amount of HMGP funding that is available. In addition, when states are approved for additional PA funding after the 12-month mark, states can request adjustments to the amount of HMGP funding available. Although states can get approved extensions—something that has happened in 75 percent of recent disasters, according to FEMA officials—state officials still reported challenges with the timing of final estimates. In interviews, state officials said they experienced delays in receiving their final estimates after Hurricane Sandy, and one state reported that it did not receive the final estimate until the summer of 2014. As shown in figure 15, 11 of 13 officials responding to our follow-up survey reported that they experienced challenges in planning, developing, or prioritizing HMGP project applications before knowing how much funding they would receive for HMGP projects. A senior FEMA official told us that FEMA encourages states to submit HMGP applications early, prior to receiving their lock-in estimates, because localities should have already identified their priority projects through the hazard mitigation planning process. However, state officials told us that they seek to optimize the projects based on the funds available, which could mean the difference between allocating all the funds to one larger project or deferring that project and allocating to multiple smaller projects depending on the total amount, and this optimization cannot occur when the amounts are unknown before applications are due.
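Because HMGP is statutorily a share of other FEMA disaster assistance, the ceiling cannot be known until that assistance is itself estimated, which is the root of the lock-in timing problem described above. The sketch below is a rough illustration only: the 15 percent rate on the first $2 billion comes from the text, while the lower rates for larger disasters are our assumption based on FEMA's sliding scale.

```python
# Rough sketch of an HMGP ceiling as a share of other disaster assistance.
# The 15 percent rate on the first $2 billion is from the text; the lower
# tiers for larger disasters are our assumption (FEMA's sliding scale).

TIERS = [  # (cumulative cap in dollars, rate in tenths of a percent)
    (2_000_000_000, 150),    # 15% of the first $2 billion (per the text)
    (10_000_000_000, 100),   # assumed: 10% of the next $8 billion
    (35_333_000_000, 75),    # assumed: 7.5% above that, up to the cap
]

def estimate_hmgp_ceiling(total_assistance: int) -> int:
    """Estimate the maximum HMGP amount for a given level of assistance."""
    ceiling, prev_cap = 0, 0
    for cap, per_mille in TIERS:
        portion = min(total_assistance, cap) - prev_cap
        if portion <= 0:
            break
        ceiling += portion * per_mille // 1000
        prev_cap = cap
    return ceiling

# A $1.5 billion disaster falls entirely in the first tier: 15 percent.
print(estimate_hmgp_ceiling(1_500_000_000))  # 225000000
```

Until `total_assistance` stabilizes (the "lock-in"), any such estimate moves with every adjustment to the underlying assistance figure, which is why states reported difficulty prioritizing applications in advance.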
In addition, 1 official reported that projects identified in local hazard mitigation plans are generally not developed to the level required for an HMGP application, because few communities have the resources to dedicate to this effort without knowing whether the project will ultimately be funded.

When disaster resilience funds, such as those in the Sandy Supplemental, are appropriated in response to catastrophic events, multiple federal departments and programs share responsibility for enhancing disaster resilience during recovery efforts, and the risk of fragmentation across the multiple funding streams increases. Different programs are initiated at different points in the wake of the disaster, making it more difficult for state and local officials to plan to use federal funds in a way that comprehensively addresses overall risk reduction. For example, states become eligible for PA once the President grants the state a major disaster declaration, but HUD CDBG-DR funds become available only if Congress makes a special appropriation as a result of a catastrophic disaster, such as Hurricane Sandy. More specifically, following Hurricane Sandy, FEMA Public Assistance funding became available to most states within days to weeks, while HUD CDBG-DR funding was not available for several months, because the Sandy Supplemental was enacted 3 months after the storm occurred. In addition, other funds that can be used for disaster resilience-related construction, such as FTA's Emergency Relief Program, were appropriated in the Sandy Supplemental and could be obligated only after FTA published regulations governing the use of the funds, because it was standing up a newly created program.
In response to our follow-up survey, 12 of 13 states and cities reported that navigating the multiple funding streams and various regulations is a challenge that affected their ability to maximize disaster resilience opportunities, as shown in figure 16. State officials we interviewed said there is no focal point in their state with the time, responsibility, and authority to ensure a holistic approach to reducing risk and increasing disaster resilience. Although state hazard mitigation plans are to identify funding sources to pursue disaster resilience, we found variation in the extent to which these plans actively identified multiple funding streams. In addition, especially in the wake of a large disaster like Sandy, State Hazard Mitigation Officers do not always have visibility over all federal funding streams available for hazard mitigation. For example, all 13 of the State Hazard Mitigation Officers we interviewed said that they had little or no involvement with coordinating hazard mitigation activities with FTA’s ERP, and most had minimal visibility over CDBG-DR disaster resilience-related projects, apart from the program’s use to cover the applicant’s required share of HMGP and PA projects. Figure 17 illustrates the multiple time frames and program regulations that confronted state officials in the wake of Hurricane Sandy. Given the multiple rules, regulations, and timelines, the state officials responsible for enhancing disaster resilience whom we interviewed reported that it is difficult to navigate and leverage the multiple programs available during recovery efforts.
As illustrated by figure 18, 11 of the 13 states and cities that responded to our survey reported that the timeliness, availability, or usefulness of the federal government’s guidance about what type of federal assistance is available after a disaster, and how it can be used to most effectively pursue disaster resilience, was a challenge that reduced their state’s ability to maximize resilience opportunities. For example, one state official who responded to our survey said that key stakeholders, including state and local officials and representatives from other federal disaster recovery programs, were not adequately represented in discussions at FEMA joint field offices and disaster recovery centers, which are often the state’s focal point for guidance during the recovery effort. Other officials stated that the available guidance was variously incomplete, overwhelming, or contradictory, or required numerous clarifications. For example, officials in one state interpreted guidance to mean that moving critical infrastructure out of the floodplain was an activity eligible for PA funding, but they were later told by FEMA officials that the guidance did not apply to their specific circumstance. Officials in another state said that agency guidance sometimes seemed inconsistent with applicable portions of the Code of Federal Regulations. The multiple federal regulations, if they are not harmonized, can also create inefficiency, or the appearance of inefficiency, for states and localities. For example, officials from 10 of the 13 states and cities cited challenges due to inefficiencies in the implementation of environmental planning and historic preservation (EHP) reviews that prolonged work on projects, as shown in figure 19. EHP reviews are required for many disaster recovery projects that receive federal funding because of requirements that agencies comply with certain federal environmental protection laws, including the National Environmental Policy Act.
State officials told us that these reviews were often time- and resource-consuming, which could dissuade individuals from pursuing hazard mitigation projects. For example, one state official said that managers of a marina damaged by Sandy chose not to pursue PA funding because of concerns that the required EHP review would delay the project’s completion and potentially prevent the marina from reopening in time for the following season. In addition, officials said that FEMA’s EHP reviews were sometimes redundant with similar reviews required by other federal or state agencies. For example, both FEMA and HUD require an EHP review, which in some cases could result in a duplication of requirements. SRIA amended the Stafford Act to require the President to establish an expedited and unified interagency review process to ensure compliance with environmental and historic preservation requirements under federal law relating to disaster recovery projects, in order to expedite the recovery process. As a result, a steering group led, in part, by FEMA and consisting of federal partners in emergency management, environmental quality, and historic preservation was established to develop and implement a more efficient process for federal EHP reviews for disaster recovery projects. Through the resulting Unified Federal Review process, 11 federal agencies that perform environmental and historic preservation reviews during disaster recovery entered into a memorandum of understanding to coordinate their independent review processes in an attempt to expedite decision making and implementation of recovery projects. In addition, according to HUD officials, while the Unified Federal Review process was being implemented, a Sandy-specific team—the Federal Sandy Infrastructure Permitting and Review Team—was established to facilitate coordinated review and permitting of certain infrastructure projects, as recommended in the Hurricane Sandy Rebuilding Strategy.
The Unified Federal Review Process was established and became effective on July 29, 2014, through the Memorandum of Understanding Establishing the Unified Federal Environmental and Historic Preservation Review Process. It is too soon to evaluate the extent to which the Unified Federal Review, as implemented, has resulted in harmonized and streamlined review requirements for applicants. In keeping with the Unified Federal Review agreement, FEMA’s 2015 Hazard Mitigation Assistance Guidance specifies that the agency can accept EHP documentation from other federal agencies if the documentation addresses the scope of the FEMA-approved activity and FEMA verifies that it meets FEMA’s EHP compliance requirements. In addition, according to a FEMA official, there are multiple complicating factors that could affect applicants undergoing EHP review; however, the official stated that the vast majority of FEMA projects—when the environmental planning and historic preservation review process is begun early during project planning—are not delayed. Moreover, a senior FEMA official responsible for EHP compliance noted that during EHP reviews the act of considering various alternatives can actually result in solutions that promote greater disaster resilience. Although states are usually the grant recipients for PA, HMGP, and CDBG, local partners or federally recognized Indian tribes often plan and execute the projects these grants fund. Although state officials we interviewed reported widespread support for disaster resilience investment, they also reported some challenges with capacity and willingness at the local level. As illustrated in figure 20, 10 of 13 states and cities responding to our follow-up survey said that the capacity of localities to access or manage federal funds for hazard mitigation was a challenge.
In some cases, localities do not have full-time staff dedicated to disaster resilience-related activities, and may have difficulty keeping hazard mitigation plans current, preparing grant applications, or monitoring and reporting on compliance with multiple grant requirements. For example, 11 of 13 respondents to our follow-up survey reported that local applicants may have difficulty collecting the information required to complete FEMA’s Benefit Cost Analysis Tool for their PA or HMGP applications, as shown in figure 21. Officials in one state said that the amount of documentation required for the benefit cost analysis limited the number of project applications the state was able to submit. In addition, some officials reported that localities or individual businesses or homeowners were not willing to pursue hazard mitigation opportunities because of competing concerns. For example, some communities may be hesitant to pursue acquisition activities, which result in permanently replacing homes or businesses with open space, because of the potential to diminish the tax base or limit future economic development opportunities. In our follow-up survey, 8 of 13 respondents said that the willingness of individuals to pursue hazard mitigation opportunities presented a challenge to their ability to maximize disaster resilience, as demonstrated in figure 22. For example, communities with high flood risk may be unwilling to relocate. The President and Congress have taken multiple steps to enhance the federal government’s focus on disaster resilience, including issuing new EOs and presidential policy directives (PPD) and enacting SRIA. As we have previously concluded, complex interagency and intergovernmental efforts—such as the federal government’s focus on enhancing the nation’s disaster resilience—can benefit from a national strategy.
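This report does not describe the internal workings of FEMA’s Benefit Cost Analysis Tool, but the basic calculation such a tool supports, comparing the discounted stream of expected avoided losses to a project’s up-front cost, can be sketched as follows. The function name, discount rate, and project figures below are illustrative assumptions for this sketch, not values drawn from FEMA’s tool or from this report:

```python
def benefit_cost_ratio(project_cost, avoided_loss_per_event,
                       annual_event_probability, project_life_years,
                       discount_rate=0.07):
    """Illustrative benefit-cost ratio: the present value of expected
    avoided losses divided by the up-front mitigation project cost."""
    pv_avoided_losses = sum(
        annual_event_probability * avoided_loss_per_event
        / (1 + discount_rate) ** year
        for year in range(1, project_life_years + 1)
    )
    return pv_avoided_losses / project_cost

# Hypothetical elevation project: $200,000 cost, $500,000 in losses
# avoided per flood event, a 4 percent annual flood probability,
# and a 30-year useful life.
ratio = benefit_cost_ratio(200_000, 500_000, 0.04, 30)
# A ratio of at least 1.0 indicates the project's expected benefits
# exceed its cost.
```

Assembling defensible values for each of these inputs, particularly local event probabilities and avoided-loss amounts, is the documentation burden that survey respondents described.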
The issuance of EOs and PPDs in the aftermath of Hurricane Sandy demonstrates the federal government’s focus on disaster resilience, linking hazard mitigation and recovery to break the cycle of damage-repair-damage. For example, the President signed EO 13632 on December 7, 2012, creating the Hurricane Sandy Rebuilding Task Force and charged the task force with developing the Hurricane Sandy Rebuilding Strategy. It also charged the task force with taking into account existing and future risks and promoting the long-term sustainability of communities and ecosystems in the Sandy-affected region. In addition, the EO on Sandy rebuilding called for the federal government to
1. remove obstacles to resilient rebuilding in a manner that addresses existing and future risks and vulnerabilities and promotes long-term sustainability of communities,
2. plan for the rebuilding of critical infrastructure damaged by Hurricane Sandy in a manner that increases community and regional resilience in responding to future impacts, and
3. identify resources and authorities that can contribute to strengthening community and regional resilience as critical infrastructure is rebuilt.
Further, in 2013, the President issued an EO titled Preparing the United States for the Impacts of Climate Change (EO 13653) and a PPD titled Critical Infrastructure Security and Resilience (PPD-21), both calling for the nation to manage risks in a way that makes the United States more resilient in the future. The President also issued a Climate Action Plan to improve the nation’s resilience to flooding and better prepare the nation for the impacts of climate change. The plan directs federal agencies to take appropriate actions to reduce risk to federal investments, specifically to “update their flood-risk reduction standards.” In January 2015, to further the Climate Action Plan, the President released EO 13690, Establishing a Federal Flood Risk Management Standard.
The standard requires all future federal investments in, and affecting, floodplains to meet a certain elevation level, as established by the standard. Such agency actions include those in which federal funds are being used to build new structures and facilities, or to rebuild those that have been damaged. The new flood risk standard builds on work done by the Hurricane Sandy Rebuilding Task Force, which announced in April 2013 that all Sandy-related rebuilding projects funded by the Sandy Supplemental must meet a consistent flood risk reduction standard. In addition, Congress passed and the President signed SRIA. The law authorized several changes to the way FEMA may deliver federal disaster assistance. For example, it authorizes FEMA to use expedited procedures in HMGP. As a result, FEMA has issued guidance for streamlining the program and is planning actions to continue to refine the changes and measure their effectiveness. SRIA also allows FEMA to provide up to 25 percent of the estimated costs for eligible hazard mitigation measures to a state or tribal grantee before eligible costs are incurred. In addition, the Hurricane Sandy Rebuilding Strategy—as a result of EO 13632—recognized the need to institutionalize regional approaches to resilient planning and coordinate Sandy recovery infrastructure resilience projects. More specifically, one of the recommendations stated that MitFLG should institutionalize regional approaches to disaster resilience planning in the NDRF and NMF. In addition, federal agencies have taken a variety of actions to enhance regional resilience—particularly as they implemented select Hurricane Sandy Rebuilding Strategy recommendations aimed at enhancing the Sandy-affected region’s disaster resilience. The Hurricane Sandy Task Force recommendations related to disaster resilience and a brief status update for each recommendation are included in appendix II.
As a result of the recommendations, the Sandy task force developed its Resilience Guidelines in the spring and summer of 2013. The guidelines are intended to ensure that federal agencies have a consistent approach to enhancing disaster resilience and to improve decision making to ensure wise investments by establishing criteria for those investments. The Task Force found that the main challenges involved complex interagency issues that called for a more streamlined approach to prioritizing the myriad of guidance, executive orders, frameworks, and plans related to disaster resilience. We have previously concluded that complex interagency and intergovernmental efforts can benefit from a national strategy. In 2004, we identified elements of an effective national strategy, including (1) identifying the purpose, scope, and particular national problems the strategy is directed toward; (2) establishing goals, priorities, milestones, and performance measures; (3) defining costs, benefits, and resource and investment needs; (4) delineating roles and responsibilities; and (5) integrating and articulating the relationship with related strategies’ goals, objectives, and activities. The NMF, by articulating a vision where the nation shares a culture of resilience and describing the national capabilities required to focus on disaster risk and resilience in everyday activities, to some extent serves as such a strategy in that it has begun to address purpose, scope, and responsibilities. Although the current framework—the first-ever version—may evolve in future updates to reflect the more expansive and nuanced understandings that come from sustained attention to an issue and lessons learned from recent and future events, it already serves the highest-level functions of an effective national strategy.
What the nation lacks and the framework does not significantly address, however, is information; direction; and guidance for costs, benefits, and investments needed to ensure that the nation is prioritizing federal resources in the most effective and efficient manner possible. As previously described, states’ and localities’ experiences with the Hurricane Sandy recovery demonstrate that the fragmentation and the postdisaster emphasis inherent in the current approach to disaster resilience investment can create obstacles to most effectively marshaling resources toward the goal of overall risk reduction. In interviews, senior officials at FEMA and HUD who provide MitFLG leadership acknowledged that the current approach does not lead to the most efficient or effective disaster resilience investments. As one of these officials put it, the federal government’s current investments aimed at enhancing the nation’s disaster resilience—for instance, projects such as home acquisitions and elevations—have benefited individuals and, often, communities, but may not have effectively reduced states’ overall risk profiles. The official stated that there are better investments that could be made, bringing into question whether the federal government is getting the most effective return on its disaster resilience investments. Findings of the Sandy Task Force Report also align with some of the challenges states and city officials reported experiencing. 
For example, the Sandy Task Force’s Infrastructure Resilience Guidelines found that there is significant overlap among various sets of guidelines and, apart from regulatory requirements and agency mission, which take primacy, there is no guidance on prioritizing or differentiating across these sets of guidelines. Further, the guidelines also found that relief from regulatory and administrative processes may help communities recover and rebuild more quickly; however, the guidelines also warned that relief from these processes may contribute to decisions that are not aligned with resilience principles because, for example, immediate needs following a disaster are prioritized over long-term goals—a condition that relates to challenges states and cities experience with both the general postdisaster emphasis and the inherent fragmentation in the postcatastrophe environment. Although there are benefits to investing in disaster resilience in a postdisaster environment, there are challenges and tradeoffs that may limit effective risk reduction. According to MitFLG officials, the federal government may not have focused enough attention on pre-disaster hazard mitigation. A study endorsed by the American Society of Civil Engineers, the Association of State Flood Plain Managers, the National Emergency Management Association, and the International Code Council, among others, found that investing resources and capital to prevent harm before it occurs is a rational and logical course of action; however, social, political, and economic realities tend to drive public choice away from investments that attempt to eliminate or minimize disasters’ impacts before they occur. More comprehensive consideration of the balance of pre- and postdisaster investments could help ensure better returns on investments designed to limit federal fiscal exposures by buying down risk.
Moreover, information about the benefits of various types of investments and the context in which they are made—information that could guide decision makers at every level of government and in the private sector—is largely not available. Conducting a comprehensive study to assess the cost-benefit trade-offs and return on investment of hazard mitigation activities would require substantial investment and expertise. FEMA has developed a modeling methodology—loss avoidance studies—to assess the performance of flood mitigation projects, drawing on experience with flood programs in actual postproject hazard events. However, modeling the difference between losses with and without hazard mitigation measures presents challenges, in part because of the lack of concrete data to inform the assumptions that underpin the models. Another challenge is that savings depend on two highly uncertain variables: (1) the frequency and severity of future disasters affecting the property in which federal investments are made, and (2) the extent to which the federal government will bear the costs to recover from those disasters. However, multiple catastrophic events over the last decade—including Hurricanes Katrina, Rita, Wilma, Ike, and Sandy—have resulted in the federal government bearing anywhere from 75 to 100 percent of the total recovery costs for FEMA-eligible projects across 18 states. The return on investment of hazard mitigation also depends on the nature of the specific activities and their impact on the affected property, and thus varies on a project-by-project basis. A 2005 Multihazard Mitigation Council study attempted to quantify the future savings (in terms of losses avoided) from hazard mitigation activities related to earthquake, wind, and flood hazards funded through three major FEMA natural hazard mitigation grant programs—the Hazard Mitigation Grant Program, Project Impact, and the Flood Mitigation Assistance Program.
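The two uncertain variables described above can be combined in a simple expected-value sketch of the annual federal savings a mitigation investment might yield. The scenario probabilities, loss amounts, and cost share below are illustrative assumptions for this sketch, not data from FEMA's loss avoidance studies:

```python
def expected_federal_savings(loss_scenarios, federal_cost_share):
    """Expected annual federal savings from avoided disaster losses.

    loss_scenarios: list of (annual_probability, avoided_loss) pairs,
        representing the uncertain frequency and severity of future events.
    federal_cost_share: fraction of recovery costs the federal government
        would bear; recent catastrophic events have seen shares of
        75 to 100 percent for FEMA-eligible projects.
    """
    return sum(probability * loss * federal_cost_share
               for probability, loss in loss_scenarios)

# Hypothetical risk profile: a 10 percent chance of a $1 million loss
# and a 1 percent chance of a $10 million loss in a given year,
# with a 75 percent federal cost share.
savings = expected_federal_savings(
    [(0.10, 1_000_000), (0.01, 10_000_000)], 0.75)
```

Because both inputs are estimates rather than observed data, such a calculation illustrates the modeling challenge rather than resolves it: small changes in assumed probabilities or cost shares move the result substantially.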
The study results indicated that the natural hazard mitigation activities funded by the three FEMA grant programs between 1993 and 2003 were cost-effective and reduced future losses from earthquake, wind, and flood events by $4 for every dollar of investment. This figure has been cited in congressional hearings and other arenas to describe the benefits of hazard mitigation; however, it is dated and generalizes the benefit to all disasters based on a relatively narrow set of disaster-loss data. In recent months, leaders and experts from multiple sectors—including a former FEMA director and representatives from the insurance industry—have called for a more strategic approach to making disaster resilience investments. Without a comprehensive strategic approach to help Congress and federal agencies that implement disaster resilience-related programs prioritize, align, and guide federal investments, the federal government’s approach has been largely reactive and fragmented. Further, the lack of a strategic approach increases the risk that the federal government and its nonfederal partners will experience lower returns on investments or lost opportunities to effectively mitigate critical lifelines against known threats and hazards. It also leaves unaddressed the questions of what the right balance of federal and nonfederal investment should be and whether incentives within the various statutes, regulations, and program policies are appropriately aligned to encourage that balance. Moreover, because states may rely on postdisaster federal funds to mitigate future risks, states may not be incentivized to dedicate resources to comprehensively address their overall risk profiles before a disaster occurs. An investment strategy to complement the National Mitigation Framework could help provide a more comprehensive and complete national strategy and help ensure that the federal government is receiving the most beneficial return on its disaster resilience funding activities.
In particular, an investment strategy would help to ensure that federal funds expended to enhance disaster resilience achieve, as effectively and efficiently as possible, the goal of reducing the nation’s fiscal exposure in light of climate change and the rise in the number of federal major disaster declarations. For example, an investment strategy could
identify the most critical components of disaster resilience, such as critical infrastructure, to help target financial resources in a way that would protect those components from future disasters;
identify and oversee an approach to developing the information required to determine more effectively and accurately which investments in disaster resilience reduce overall risk, and under what circumstances, and in turn reduce federal fiscal exposures to disasters;
describe the appropriate balance of federal and nonfederal investment and help to identify how policymakers and program implementers should structure incentives to help reach this balance; and
consider the current balance between pre- and postdisaster resource allocation and advise the President and Congress on the benefits and challenges of that balance, including whether the nation should take a more proactive position in funding and encouraging pre-disaster mitigation activities.
A senior MitFLG official told us that executive-level leadership with decision-making power is necessary for MitFLG to be able to effect change. This is particularly important when multiple agencies are responsible for managing fragmented federal efforts, such as the nation’s efforts to enhance its overall disaster resilience. An investment strategy that complements the NMF could help support the ongoing leadership from the executive and legislative branches by identifying what new or amended federal policies, regulations, and laws are required to enhance the nation’s disaster resilience in the most efficient and effective way possible.
From fiscal years 2004 through 2013, FEMA obligated over $95 billion in federal assistance for recovery from presidentially declared major disasters, and the number of major disaster declarations has increased significantly in recent decades. In the wake of Hurricane Sandy, the federal government has demonstrated an increased focus on disaster resilience as a mechanism to limit the nation’s fiscal exposure to future disasters and has taken steps to improve states’ abilities to use federal disaster recovery funding to incorporate resilient rebuilding into recovery. However, state and local officials have still experienced challenges in enhancing their states’ overall disaster resilience when using federal funds through FEMA’s Public Assistance and Hazard Mitigation Grant Programs. During the Sandy recovery, states and localities were in some cases constrained in their ability to pursue hazard mitigation activities using FEMA PA and HMGP funding streams. State and city officials we interviewed and surveyed reported experiencing several challenges in the implementation of FEMA’s PA and HMGP, including the complexities of the hazard mitigation planning process, FEMA PA and HMGP staff turnover, limitations on eligibility, and lack of FEMA support for PA mitigation during project formulation. In addition, officials reported being challenged by the manner in which HMGP estimates and final awards were determined, specifically, the timing of the final estimate of HMGP awards in tandem with the HMGP project application deadline. These challenges could result in missed opportunities to improve states’ disaster resilience when providing federal funding for that purpose. Further, such challenges may inhibit the federal government’s efforts to reduce vulnerabilities and integrate hazard mitigation into disaster recovery and its ability to meet risk reduction goals established in the NDRF and National Mitigation Framework.
Assessing the challenges state and local officials reported, including the extent to which the challenges can be addressed and corrective actions can be implemented, as needed, may help ensure that FEMA’s hazard mitigation priorities are effectively reflected in the implementation of the agency’s PA and HMGP programs. Although federal efforts helped to improve the nation’s disaster resilience during the recovery from Hurricane Sandy, a comprehensive federal strategy to prioritize and guide federal investments intended to enhance the nation’s overall disaster resilience has not been developed. The federal government primarily funds disaster resilience projects in the wake of disasters—when damages have already occurred and opportunities to pursue hazard mitigation may conflict with the desire for the immediate restoration of critical infrastructure. As the federal government’s fiscal exposure continues to grow because of extreme weather, the increase in the number of major disaster declarations, and—according to some state officials—states’ reliance on the federal government to fund most of the costs associated with disaster response and recovery, it is critical that the federal government ensure that it is getting the best return on its disaster resilience investments. Also, federal programs that provide disaster resilience funding are fragmented, resulting in challenges to lowering states’ overall risk profiles and enhancing the nation’s resilience against future disasters. An investment strategy to identify, prioritize, and guide future federal investments in disaster resilience could result in more effective returns on federal investments and enhance the federal government’s capacity to effectively mitigate critical lifelines against known threats and hazards.
To increase states’ abilities to improve disaster resilience and mitigate future damage when using federal funding in the wake of disasters, we recommend that the FEMA Administrator take the following action: Consistent with the goals of the NDRF to integrate hazard mitigation and risk reduction opportunities into all major decisions and reinvestments during the recovery process, FEMA should assess the challenges state and local officials reported, including the extent to which the challenges can be addressed, and implement corrective actions as needed.
To help the federal, state, and local governments plan for and invest in hazard mitigation opportunities to enhance resilience against future disasters, we recommend that the Director of the Mitigation Framework Leadership Group, in coordination with other departments and agencies that are MitFLG members, take the following action: Supplement the National Mitigation Framework by establishing an investment strategy to identify, prioritize, and guide federal investments in disaster resilience and hazard mitigation-related activities, and make recommendations to the President and Congress on how the nation should prioritize future disaster resilience investments. Such a strategy could address, among other things, (1) the extent to which current hazard mitigation and disaster resilience programs are adequately addressing critical lifelines and critical infrastructure, (2) an approach to identifying information on which disaster resilience and hazard mitigation efforts are most effective against known risks and their potential impacts on the nation’s fiscal exposure, (3) the balance of federal and nonfederal investments, and (4) the balance of pre- and postdisaster resilience investments.
We provided a draft of this report to DHS, HUD, DOT, and USACE for their review and comment. DHS provided written comments on July 21, 2015, which are summarized below and reproduced in full in appendix IV.
DHS concurred with both of our recommendations and described planned actions to address them. In addition, DHS and HUD provided technical comments, which we incorporated into the report as appropriate. DOT and USACE had no comments on the draft report. DHS concurred with the first recommendation, that the FEMA Administrator, consistent with the goals of the National Disaster Recovery Framework (NDRF) to integrate hazard mitigation and risk reduction opportunities into all major decisions and reinvestments during the recovery process, assess the challenges state and local officials reported, including the extent to which the challenges can be addressed, and implement corrective actions as needed. DHS stated that FEMA is aware of and acknowledges the challenges state and local officials reported. FEMA is planning to seek input from federal, tribal, state, and local stakeholders as part of its efforts to reengineer the PA program, which it believes will address many of the issues raised in the report. In addition, in accordance with its strategic plan, FEMA is exploring ways to improve risk reduction through the Federal Insurance and Mitigation Administration and Recovery mitigation programs, which will focus on three concurrent work streams: (1) policy, regulation, and statute; (2) codes and standards; and (3) operations. For example, FEMA will encourage states, tribes, and localities to adopt and enforce the most current versions of the International Building Code and the International Residential Code. DHS anticipates that these efforts, among others, will be complete by December 31, 2016. These actions, if they include an assessment of the challenges identified by state and local officials, could address our recommendation and help ensure that FEMA meets its goal to integrate hazard mitigation into all major decisions and reinvestments during the recovery process.
DHS also concurred with our second recommendation that the Director of MitFLG, who is a FEMA official, in coordination with departments and agencies that are MitFLG members, supplement the National Mitigation Framework by establishing an investment strategy to identify, prioritize, and guide federal investments in disaster resilience and hazard mitigation-related activities and make recommendations to the President and Congress on how the nation should prioritize future disaster resilience investments. DHS stated that MitFLG recognizes the benefit of prioritizing federal investments to identify those with the best potential to enhance resilience against future disasters. DHS also stated that although MitFLG does not have the authority to compel other federal agencies to prioritize their funding to achieve a specific goal, it is working with the interagency group on a variety of resilience activities. We recognize that MitFLG does not have the authority to compel other federal agencies to comply with the recommendations developed as part of the investment strategy. However, we believe that creating a strategy that helps guide federal decision makers across the interagency group—including recommendations to the executive and legislative branches of the federal government on how to best prioritize federal resources aimed at enhancing disaster resilience—would be consistent with MitFLG’s purpose, which is to coordinate mitigation efforts across the federal government. We also believe that as the interagency group established expressly for this purpose, MitFLG is the most appropriate organizational entity to undertake the creation of this strategy. DHS stated that the Chair of MitFLG will take the following three steps to address the recommendation:
1. brief MitFLG members on our recommendation and FEMA’s response on behalf of MitFLG and call for work group members from the interagency group for support by August 31, 2015;
2. form a working group to develop the scope, coordinate the effort, and develop a draft of the recommendations for MitFLG to consider by September 30, 2016; and
3. finalize a deliverable through MitFLG review and coordination with the interagency membership by August 30, 2017.
DHS stated that the estimated completion date for implementing this recommendation is September 30, 2017. These actions could address our recommendation and help the nation prioritize federal resources to further enhance national resilience against future disasters. We will continue to monitor the efforts to implement our recommendations. We will send copies of this report to the Secretaries of Homeland Security, Housing and Urban Development, Defense, and Transportation; the FEMA Administrator; and appropriate congressional committees. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report are listed in appendix IV. This report examines (1) how states and localities in the Sandy-affected area have used federal funds to help enhance disaster resilience, (2) the extent to which state officials report being able to use federal programs to maximize resilience-building during disaster recovery, and (3) actions the federal government has taken to promote disaster resilience in the recovery effort and what, if any, improvements could be made for future large-scale disasters. To determine how states and localities used federal funds to enhance disaster resilience during the Hurricane Sandy disaster recovery effort, we reviewed program documentation—such as grant guidance and federal rules—and discussed program purposes with key agency officials to determine whether and how administered programs and activities facilitate community and regional resilience as part of rebuilding.
We obtained information on appropriations from the Disaster Relief Appropriations Act, 2013 (Sandy Supplemental), and the information related to the purposes of programs and activities from the Sandy Supplemental and federal agency documents. We focused on describing five federal programs that have the ability to support resilience-building efforts and that are administered by four federal agencies that received 92 percent of the Sandy Supplemental. We collected and analyzed information from the District of Columbia, New York City, and each of the 12 states that had a major disaster declaration about the types of resilience-building projects for which they used federal funds to enhance resilience as part of the Hurricane Sandy recovery effort. We also reviewed state hazard mitigation plans, local hazard mitigation plans and guidance, and information regarding large-scale state projects from Federal Emergency Management Agency (FEMA) headquarters, FEMA's Sandy Recovery Office, and state officials. During a site visit to New Jersey, 1 of the 2 states that sustained the most damage, we also toured damaged areas and projects in progress to observe and discuss planned resilience-building efforts. To determine the extent to which selected state officials reported being able to use federal programs to maximize resilience as part of the Sandy recovery effort, we obtained information about their resilience-building efforts from data requests, structured interviews, and a follow-up survey we conducted with State Hazard Mitigation Officers and other knowledgeable officials in the 13 states that received presidential major disaster declarations in the wake of Hurricane Sandy. In seven of the interviews, State Hazard Mitigation Officers were joined by their state counterparts or supervisors in state emergency management departments with responsibility for managing other aspects of recovery efforts.
We also administered the structured interview and survey with officials from New York City's Office of Recovery and Resiliency, which administers some streams of relevant federal funds—including the FEMA Hazard Mitigation Grant Program (HMGP) and Public Assistance (PA) program and the Department of Housing and Urban Development (HUD) Community Development Block Grant-Disaster Recovery (CDBG-DR) program—and oversees strategic planning for resilience efforts; the New York Governor's Office of Storm Recovery, which is largely responsible for administering FEMA HMGP and HUD CDBG-DR funds; and the New Jersey Governor's Office of Recovery and Rebuilding, which coordinates the state's recovery effort, including overseeing resilience priorities. In New York and New Jersey, the governors' offices collaborated with the state emergency management offices (particularly the State Hazard Mitigation Officers) to complete the survey. In the data calls, we requested that State Hazard Mitigation Officers, in coordination with other knowledgeable state officials, identify the names of federal and state funding streams that were available for hazard mitigation projects and those that were used for projects during the Sandy recovery. We also requested a comprehensive list, or selected examples, of hazard mitigation projects that their states had planned or underway. We developed structured interview questions to collect information about officials' experiences using federal funding to enhance resilience in recovering from Hurricane Sandy and other disasters that occurred since 2011, and successes or challenges states have encountered in trying to rebuild resilience.
We chose 2011 because the Sandy Supplemental directed these funds to be available for areas most impacted and distressed as a result of Presidentially declared major disasters from 2011 through 2013. To begin development of the data calls and structured interviews, we had open-ended, unstructured interviews with State Hazard Mitigation Officers from 3 states outside the Sandy-affected area—Florida, Iowa, and Tennessee—and multiple professional associations about what kind of information was available and about specific terminology within the field. We then pretested the structured interview protocol with State Hazard Mitigation Officers from three Sandy-affected states. We conducted these pretests to ensure the questions were clear and unbiased and that the questionnaire did not place an undue burden on respondents. An independent reviewer within GAO also reviewed a draft of the questionnaire prior to the administration of the interviews. We made appropriate revisions to the content and format of the questionnaire based on the pretests and independent review. We conducted the structured interviews in person and via telephone from August 26, 2014, to December 2, 2014. The interviews were primarily conducted in person, with the exception of interviews with officials from 3 states because of scheduling conflicts. On the basis of a content analysis of the information gathered in the structured interviews, we developed closed-ended questions for the follow-up survey, in which we asked officials whether their states experienced specific challenges identified during the interviews and the extent to which these challenges affected states' ability to maximize federal support for enhancing disaster resilience. We conducted survey pretests with State Hazard Mitigation Officers from 2 states and governor's office officials from 1 state. An independent reviewer within GAO also reviewed a draft of the questionnaire prior to administration of the survey.
We made appropriate revisions to the content and format of the questions based on feedback from the pretests and independent review. The final questionnaire is in appendix III. We sent the survey questionnaire by email in an attached Microsoft Word form that respondents could return electronically after completing it. When we completed the final survey questions and format, we sent the questionnaire with a cover letter on March 4, 2015. On March 13, 2015, we sent a reminder email to everyone who had not responded, attaching an additional copy of the questionnaire. Following this reminder, we conducted follow-up with participants on an individual basis. Completed questionnaires were accepted until April 29, 2015. In all, we received completed questionnaires from officials in 13 states and cities. We also conducted five follow-up phone calls and one email exchange with officials who responded to our survey. The purpose of these follow-ups was to clarify respondents' answers in the case of (1) questions that were left blank on the completed questionnaire, (2) multiple responses chosen for a single question, or (3) responses that indicated both that an item listed was not a challenge and that the challenge had reduced their state's ability to maximize resilience opportunities to some extent. We adjusted the responses recorded on these officials' questionnaires to reflect the clarifications made during these phone calls. Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, differences in the sources of information available to respondents, or errors in entering data into a database or analyzing them can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing them to minimize such nonsampling errors.
For example, we performed pretesting and obtained internal review with independent survey experts. In addition, an independent analyst checked the database used to collect survey responses against the questionnaires completed by survey respondents to ensure that all data were recorded correctly. The structured interviews and surveys were administered in a selected group of states and are not generalizable to the nation as a whole. However, they represent the entire population of states involved in the recovery from Hurricane Sandy. The states span 4 of 10 FEMA regions and multiple geographic regions of the eastern United States. In interviews and the follow-up survey we discussed the Hurricane Sandy recovery effort, as well as recovery from smaller disasters that occurred since 2011. Accordingly, the results of the interviews and surveys offer insights into the recent experiences nonfederal users have had when building resilience during disaster recovery. The overall response rate for the surveys is 92 percent. We compared information we learned from interviews with federal, state, and local officials and from federal documents with the goals stated in the National Disaster Recovery Framework (NDRF) and National Mitigation Framework (NMF). Specifically, these policies call for the government to integrate hazard mitigation and risk reduction opportunities into all major decisions and reinvestments during the recovery process and to capitalize on opportunities during the recovery process to further reduce vulnerability. To determine what actions the federal government took to promote resilience in the Hurricane Sandy recovery effort, and what, if any, improvements could be made for future large-scale disasters, we reviewed federal statutes, regulations, executive orders, and federal studies related to hazard mitigation and resilience. 
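The double-entry verification step described above, in which an independent analyst checked the analysis database against the completed questionnaires, can be illustrated with a short script. This is a hypothetical sketch only: the respondent codes, question labels, and answer values are illustrative and are not drawn from the actual survey instrument or data.

```python
# Illustrative double-entry check: confirm every answer recorded in the
# analysis database matches the answer on the returned questionnaire.
# Respondents, questions, and values below are hypothetical examples.
questionnaires = {          # answers transcribed from the returned forms
    "NY": {"q1": "yes", "q2": "great extent"},
    "NJ": {"q1": "no",  "q2": "some extent"},
}
database = {                # answers as keyed into the analysis database
    "NY": {"q1": "yes", "q2": "great extent"},
    "NJ": {"q1": "no",  "q2": "some extent"},
}

def find_discrepancies(forms, db):
    """Return (respondent, question) pairs where the two sources differ."""
    issues = []
    for respondent, answers in forms.items():
        for question, value in answers.items():
            # A missing respondent or question also counts as a discrepancy.
            if db.get(respondent, {}).get(question) != value:
                issues.append((respondent, question))
    return issues

print(find_discrepancies(questionnaires, database))  # an empty list means clean entry
```

Any pair the function returns would be resolved by rechecking the paper form, mirroring the manual reconciliation the report describes.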
These included the Disaster Relief Appropriations Act, 2013 (the Sandy Supplemental); the Sandy Recovery Improvement Act of 2013 (SRIA); the President's Executive Order (EO) 13632, Establishing the Hurricane Sandy Rebuilding Task Force; and the 2013 Hurricane Sandy Rebuilding Strategy: Stronger Communities, A Resilient Region. We also analyzed the recommendations of the Hurricane Sandy Rebuilding Task Force report that were intended to help facilitate or remove obstacles to resilience. We obtained information about the status of implementing the recommendations in the task force report from FEMA, HUD, the Department of Transportation (DOT), and the U.S. Army Corps of Engineers (USACE) via documents and interviews with officials involved in the Hurricane Sandy recovery. In addition, we obtained information on the status of implementing resilience-building-related provisions of SRIA from FEMA officials. We interviewed officials representing HUD, FEMA, and the interdepartmental Mitigation Framework Leadership Group (MitFLG) to discuss the challenges state officials reported to us and challenges experienced at the federal level. As evidenced by the various recipients of federal appropriations in the Sandy Supplemental, both disaster recovery and building disaster resilience to reduce the federal fiscal exposure to future disaster losses are missions that cut across federal departments. Therefore, we compared the challenges reported by state and federal officials with elements of a national strategy that we have previously recommended to help support such efforts. We conducted this performance audit from November 2013 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Executive Order (EO) 13632, Establishing the Hurricane Sandy Rebuilding Task Force, created the task force and charged it with identifying actions that federal agencies can take to enhance resilient rebuilding. The task force developed the Hurricane Sandy Rebuilding Strategy, which consists of 69 recommendations to federal agencies and working groups. We identified 19 recommendations that had aspects of resilience-rebuilding, as described by EO 13632, and that had at least one of the four agencies we chose to review as part of the scope of this report—the Department of Transportation (DOT), the Federal Emergency Management Agency (FEMA), the Department of Housing and Urban Development (HUD), and the U.S. Army Corps of Engineers (USACE)—designated as a lead or supporting agency for implementing the recommendation. The table below reflects the status and progress of the implementation of the recommendations, as reported in the Rebuilding Strategy, subsequent progress updates (spring and fall 2014), and interviews with agency officials. In addition to the contact named above, Kathryn Godfrey (Assistant Director), Dorian Dunbar, Melissa Duong, Serena Epstein, Lorraine Ettaro, Eric Hauswirth, R. Denton Herring, and Tracey King made significant contributions to this report. Also contributing to this report were Joel Aldape, Claudia Becker, Andrew Brown, Alicia Cackley, Martha Chow, Steve Cohen, Catherine Colwell, Roshni Dave, Katherine Davis, Peter Del Toro, Anne-Marie Fennell, Jose (Alfredo) Gomez, Joah Iannotta, Valerie Kasindi, Stuart (Stu) Kaufman, Chris Keisling, Monica Kelly, Stephen Lord, Phillip McIntyre, Susan Offutt, Anthony (Tony) Pordes, Brenda Rabinowitz, Oliver Richard, Tovah Rom, Michelle Sager, Janet Temko-Blinder, Joseph Thompson, and David Wise.
The Disaster Relief Appropriations Act, 2013 appropriated about $50 billion for recovery from Hurricane Sandy, part of which was intended for disaster resilience and hazard mitigation. In March 2015, GAO identified the cost of disasters as a key source of federal fiscal exposure. GAO and others have advocated hazard mitigation to help limit the nation's fiscal exposure. GAO was asked to review federal efforts to strengthen disaster resilience during the Hurricane Sandy recovery. This report addresses (1) how federal recovery funds were used to enhance resilience, (2) the extent to which states and localities were able to maximize federal funding to enhance resilience, and (3) actions that could enhance resilience for future disasters. To conduct this work, GAO reviewed key federal documents such as the National Mitigation Framework, interviewed federal officials responsible for programs that fund disaster resilience, and administered structured interviews and surveys to all 12 states, the District of Columbia, and New York City in the Sandy-affected region. During the Hurricane Sandy recovery, five federal programs—the Federal Emergency Management Agency's (FEMA) Public Assistance (PA) and Hazard Mitigation Grant Program (HMGP), the Federal Transit Administration's Public Transportation Emergency Relief Program, the Department of Housing and Urban Development's Community Development Block Grant-Disaster Recovery program, and the U.S. Army Corps of Engineers' Hurricane Sandy program—helped enhance disaster resilience, that is, the ability to prepare and plan for, absorb, recover from, and more successfully adapt to disasters. These programs funded a number of disaster-resilience measures, for example, acquiring and demolishing at-risk properties, elevating flood-prone structures, and erecting physical flood barriers.
State and local officials GAO contacted in the states affected by Hurricane Sandy reported that they were able to effectively leverage federal programs to enhance disaster resilience, but they also experienced challenges that could result in missed opportunities. The challenges fell into three categories: implementation challenges with PA and HMGP—for example, officials reported that FEMA officials did not always help them pursue opportunities to incorporate mitigation into permanent construction recovery projects; limitations on comprehensive risk reduction approaches in a postdisaster environment—for example, officials reported difficulties with navigating multiple funding streams and various regulations of the different federal programs funded after Hurricane Sandy; and local ability and willingness to participate—for example, officials reported that some home and business owners were unwilling or unable to bear the required personal cost share for a home-elevation or other mitigation project. FEMA officials told GAO that they were aware of some of these challenges and recognized the need to further assess them. Assessing the challenges and taking corrective actions, as needed, could help enhance disaster resilience. There is no comprehensive, strategic approach to identifying, prioritizing, and implementing investments for disaster resilience, which increases the risk that the federal government and nonfederal partners will experience lower returns on investments or lost opportunities to strengthen key critical infrastructure and lifelines. Most federal funding for hazard mitigation is available after a disaster. For example, from fiscal years 2011 through 2014, FEMA obligated more than $3.2 billion for HMGP postdisaster hazard mitigation, while the Pre-Disaster Mitigation Grant Program obligated approximately $222 million. There are benefits to investing in resilience postdisaster.
Individuals and communities affected by a disaster may be more likely to invest their own resources while recovering. However, there are also challenges. Specifically, the emphasis on the postdisaster environment can create a reactionary and fragmented approach in which disasters determine when and for what purpose the federal government invests in disaster resilience. The Mitigation Framework Leadership Group (MitFLG) was created to help coordinate the hazard mitigation efforts of relevant local, state, tribal, and federal organizations. A comprehensive investment strategy, coordinated by MitFLG, could help address some challenges state and local officials experienced. GAO recommends that (1) FEMA assess the challenges state and local officials reported and implement corrective actions as needed and (2) MitFLG establish an investment strategy to identify, prioritize, and guide federal investments in disaster resilience. The Department of Homeland Security agreed with both recommendations.
Agencies implement specific elements of laws through regulations. One of the main purposes of guidance is to explain and help regulated parties comply with agency regulations. As shown in figure 1, agencies use guidance documents to explain how they plan to interpret regulations. Agencies also use guidance for circumstances they could not have anticipated when issuing a regulation and when additional clarifications are needed. Similarly, our past work has recommended that agencies issue guidance to clarify policies when we found confusion among grantees or others about a component's policy or practices.

How Does OMB Define Guidance? OMB defines the term "guidance document" as an agency statement of general applicability and future effect, other than a regulatory action, that sets forth a policy on a statutory, regulatory, or technical issue or an interpretation of a statutory or regulatory issue. Guidance documents often come in a variety of formats and names, including interpretive memoranda, policy statements, guidances, manuals, circulars, memoranda, bulletins, advisories, and the like. Guidance documents include, but are not limited to, agency interpretations or policies that relate to: the design, production, manufacturing, control, remediation, testing, analysis or assessment of products and substances, and the processing, content, and evaluation/approval of submissions or applications, as well as compliance guides. Guidance documents do not include solely scientific research.

To issue regulations, agencies generally must follow the notice and comment process under the Administrative Procedure Act (APA): publish a notice of proposed rulemaking, consider public comments, and publish the final regulation. Regulations affect regulated entities by creating binding legal obligations. Regulations are generally subject to judicial review by the courts if, for example, a party believes that an agency did not follow required rulemaking procedures or went beyond its statutory authority. To balance the need for public input with competing societal interests favoring the efficient and expeditious conduct of certain government affairs, the APA exempts certain types of rules from the notice and comment process. These include "interpretative rules" and "general statements of policy" that can be made effective immediately upon publication in the Federal Register. 5 U.S.C. §§ 552(a), 553(b), 553(d). There is general agreement that the public interest is served by prompt dissemination of the guidance contained in agency interpretations and policy statements. Jeffrey S. Lubbers, A Guide to Federal Agency Rulemaking, 5th ed. (Chicago, Ill.: American Bar Association, 2012), p. 64. To issue interpretative rules and general statements of policy, agencies use guidance documents that may not be published in the Federal Register. In addition, agencies may use the preambles of their regulations to further interpret the regulations. An agency may use any of these documents to provide more explanation on how the agency plans to interpret a regulation. Defining guidance can also be difficult. To illustrate that difficulty, several of the components in our scope told us that they do not consider many of the communication documents they issue to the public to be guidance. Agency guidance documents are not legally binding. Agencies sometimes include disclaimers in their guidance to note that the documents have no legally binding effect on regulated parties or the agencies. Even though not legally binding, guidance documents can have a significant effect on regulated entities and the public, both because of agencies' reliance on large volumes of guidance documents and because guidance can prompt changes in the behavior of regulated parties and the general public. Due to the potential for these effects, agencies' use of guidance has been the subject of scrutiny from policymakers and the public.
Despite the general distinctions between regulations and guidance documents, legal scholars and federal courts have at times noted that it is not always easy to determine whether an agency action should be issued as a regulation subject to the APA’s notice and comment requirements or is guidance or a policy statement, and therefore exempt from these requirements. Among the reasons agency guidance may be legally challenged are procedural concerns that the agency inappropriately used guidance rather than the rulemaking process or concerns that the agency has issued guidance that goes beyond its authority. Other concerns raised about agency use of guidance include consistency of the information being provided, currency of guidance, and whether the documents are effectively communicated to those affected. Although the APA does not generally prescribe processes for review of agency guidance, the OMB Bulletin establishes policies and procedures for the development, issuance, and use of “significant” guidance documents. The Bulletin defines “significant guidance document” as a guidance document disseminated to regulated entities or the general public that may reasonably be anticipated to (1) lead to an annual effect on the economy of $100 million or more or adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or state, local, or tribal governments or communities; (2) create a serious inconsistency or otherwise interfere with an action taken or planned by another agency; (3) materially alter the budgetary impact of entitlements, grants, user fees, or loan programs or the rights and obligations of recipients thereof; or (4) raise novel legal or policy issues arising out of legal mandates, the President’s priorities, or the principles set forth in Executive Order 12866, as further amended. 
Pursuant to a memo issued by the Director of OMB in March 2009, OMB's Office of Information and Regulatory Affairs (OIRA) reviews some significant guidance documents prior to issuance. All significant guidance documents, whether reviewed by OIRA or not, are subject to the OMB Bulletin. "Economically significant guidance documents" (those guidance documents under the first item in the definition above) are also published in the Federal Register to invite public comment. The OMB Bulletin directs each agency to develop written procedures for the approval of significant guidance, establishes standard elements that must be included in significant guidance documents, and requires agencies to maintain a website to assist the public in locating significant guidance documents. Non-significant guidance is not subject to the OMB Bulletin, and guidance procedures are left to agency discretion.

How can guidance be significant? Guidance documents are considered "significant" when they have a broad and substantial impact on regulated entities, on the public, or on other federal agencies. Guidance can have coercive effects or can lead parties to alter their conduct. For example, under a statute or regulation that would allow a range of actions to be eligible for a permit or other desired agency action, a guidance document might specify fast-track treatment for a particular narrow form of behavior, but subject other behavior to a burdensome application process with an uncertain likelihood of success. Even if not legally binding, such guidance could affect behavior in a way that might lead to an economically significant impact. Similarly, an agency might make a pronouncement about the conditions under which it believes a particular substance or product is unsafe. While not legally binding, such a statement could reasonably be anticipated to lead to changes in behavior by the private sector or governmental authorities such that it would lead to a significant economic effect.
For example, the following guidance documents issued by our audited agencies were considered significant: In response to questions from state officials, DOL's Employment and Training Administration issued guidance in 2005 clarifying that distance learning can be considered approvable classroom training under the Trade Adjustment Assistance program. In response to requests for technical assistance, Education's Office for Civil Rights determined that elementary and secondary schools and postsecondary institutions could benefit from additional guidance concerning Title IX obligations to address sexual violence. In 2014, the office issued guidance that included questions and answers to further clarify legal requirements, and discussed the interplay between applicable laws and proactive efforts schools can take to prevent sexual violence. Certain provisions of the OMB Bulletin were informed by written agency practices established by the Food and Drug Administration (FDA) for the initiation, development, issuance, and use of its guidance documents. In 1997, Congress established certain aspects of FDA's guidance processes as law and directed the agency to evaluate the effectiveness of its practices and develop and issue regulations specifying its procedures. FDA's Good Guidance regulations define the subset of guidance that must be published in the Federal Register. FDA also maintains internal policies and practices to ensure appropriate adherence to its good guidance practices, including a written process to document decisions about the appropriate level of review for each guidance document. FDA officials told us that by default, guidance will receive a higher level of review unless a justification is presented to warrant lesser review. FDA officials told us that they use tools, such as "guidance initiation forms" or "concept papers" to, among other things, ensure they avoid duplicative or overlapping guidance and to prioritize proposed guidance.
Component officials used guidance for multiple purposes, including interpreting regulations, disseminating suggested practices, and providing grant administration information. Interpret new regulations. Component officials told us they used guidance to summarize regulations or explain ways to meet regulatory requirements. For example, Education officials told us that they often follow their regulations with guidance issued to restate the regulation in plainer language, to summarize requirements, to suggest ways to comply with the new regulation, or to offer best practices. Occupational Safety and Health Administration (OSHA) officials told us that they issue guidance to help employers and workers understand their legal rights and responsibilities. Distribute information on suggested practices. Officials told us that they used guidance to distribute information on program suggestions sometimes called best practices. In particular, component officials who administered formula grants in which wide discretion is given to grantees, such as states, told us that they often used guidance to encourage certain leading practices. For example, the Administration for Children and Families (ACF) Office of Child Care issued Information Memorandums encouraging partnerships between state child care and child welfare agencies. Provide guidance on grant administration. Components that administered grants also issued procedural guidance related to grant administration. For example, the Bureau of Labor Statistics (BLS) issued routine administrative memoranda to remind state partners of federal grant reporting requirements. The impetus for developing and issuing guidance varied, including reasons such as: (1) explaining new regulations, (2) responding to questions from external stakeholders, (3) clarifying policies in response to compliance findings, and (4) disseminating information on leadership priorities and initiatives. Explaining new regulations. 
Components initiated guidance in coordination with publication of a new regulation to help regulated entities understand new requirements. For example, in 2014, DOL’s Wage and Hour Division released both a fact sheet and Frequently Asked Questions (FAQs) to coincide with the issuance of its final rule establishing a minimum wage for federal contractors. Clarifying policies in response to questions. Multiple component officials told us that they used guidance to clarify policies in response to questions received from the field or regional office input about questions they are receiving from grantees or regulated entities. DOL’s Office of Labor-Management Standards officials told us that ideas for guidance often come from questions from the field or the regulated community, particularly if multiple unions had similar questions about a new regulation. Clarifying policies in response to compliance findings. Officials at Education’s Office for Civil Rights and OSHA told us that they often initiated guidance in response to findings resulting from their investigatory or monitoring efforts, among other things. USDA Food and Nutrition Service officials told us they have issued guidance in response to our or Office of Inspector General audits. Disseminating information on leadership priorities and initiatives. In some cases, component officials told us they may issue guidance in response to directives from senior management or in response to administration priorities. ACF’s Office of Child Care officials told us that they use Information Memorandums to emphasize leadership or other legislative priorities and changes. Officials at Education’s Office of Postsecondary Education told us that component leadership initiates guidance related to priorities the administration wants to accomplish. 
When we asked for an example, they pointed to a Dear Colleague letter explaining that students confined or incarcerated in locations such as juvenile justice facilities were eligible for federal Pell grants; the Office issued the letter in response to inquiries from the field as well as administration priorities.

We found that agencies did not use standard terminology for their guidance. Components we reviewed within USDA, HHS, Education, and DOL used many names for their external guidance documents. Some departments or components used uniform names for their guidance, while components within other departments used differing names. Education offices often issued Dear Colleague letters or FAQs, among other types of guidance. Within HHS, ACF officials told us that they consistently used Program Instructions and Information Memorandums to communicate information to grantees and other recipients. While many DOL components issued documents to assist with regulatory compliance, they used varied terms for that guidance, including bulletins, Administrator Interpretations, directives, fact sheets, and policy letters.

The amount of guidance components issued varied, ranging from about ten to over a hundred guidance documents issued by a component in a single year. BLS officials told us that they issued about ten routine administrative memorandums each year related to the operation of two cooperative agreement statistical programs. In contrast, OSHA officials told us they could easily produce 100 new or updated products each year to provide guidance to stakeholders. Component officials cited the varying missions or types of programs as one explanation for the different amounts of guidance. OSHA regularly issued guidance to assist with regulatory compliance, while BLS officials told us that, as a non-regulatory component, they rarely issued guidance.
Although the Office of Workers’ Compensation Programs has regulatory authority, officials told us that they did not frequently issue guidance because their authorizing statutes have not changed recently and their programs focus on administering benefits. Officials at the Office of Child Care and at DOL’s Employment and Training Administration (ETA) told us they rarely issued guidance for formula or block grant programs or were generally limited to issuing guidance with encouragements and recommendations. These programs include the Office of Child Care’s Child Care Development Fund Grants and ETA’s Workforce Investment Act formula grants (both to states), which provide wide discretion to grantees.

Officials considered a number of factors before deciding whether to issue guidance or undertake rulemaking. However, a key criterion in making this decision was whether they intended for the document to be binding. Officials from all components that issue regulations told us that they understood when guidance would be inappropriate and when regulation was necessary, and consulted with legal counsel as they decided whether to initiate rulemaking or issue guidance. Officials told us that they often based the decision between guidance and regulation on whether the direction was meant to be binding (in which case they issued a regulation). In some cases, issued guidance clarified existing regulations, educated the public, addressed particular circumstances, or shared leading practices. According to DOL officials, if components determined that current regulations could not reasonably be interpreted to encompass the best course of action, the solution was not case-specific, or the problem was very widespread, they might determine that issuing a new regulation was necessary. An Education department official told us that they considered multiple factors, including the objective to be achieved, when choosing between guidance and regulations.
For example, they used a regulation to fill in gaps in statutory provisions. Following issuance of regulations, they also provided guidance in the form of technical assistance. Similarly, HHS Administration for Community Living officials told us that they considered a number of factors, including whether the instructions to be disseminated were enforceable or merely good practice. For example, they noticed that states were applying issued guidance related to technical assistance and compliance for the state long-term care ombudsman program differently. Administration for Community Living officials decided it would be best to clarify program actions through a regulation, as they could not compel the states to comply through guidance. They believed that a regulation would ensure consistent application of program requirements and allow them to enforce those actions. They issued the proposed rule in June 2013. USDA’s Food and Nutrition Service (FNS) officials told us that the decision to issue guidance or undertake rulemaking depended on (1) the extent to which the proposed document was anticipated to affect stakeholders and the public, and (2) what the component was trying to accomplish with the issued document. OIRA staff concurred that agencies understood which types of direction to regulated entities must go through the regulatory process.

In a few cases, components used guidance to alert affected entities about immediate statutory requirements or to anticipate upcoming requirements to be promulgated through the rulemaking process. While this may provide timely information about new or upcoming requirements, it also may cause confusion as details are revised during the rulemaking process.
FNS officials told us that when a new statute becomes effective immediately and there is little ambiguity in how the statute can be interpreted, they use a “staging process.” In this process, they issue informational guidance so their partners are aware of and consistently understand new requirements before the more time-consuming rulemaking process can be completed. In 2014, we reported that five FNS memorandums related to new statutory requirements for the content of school lunches were distributed prior to the issuance of the final rule on the changes to the content and nutrition standards for school lunches, in part because of statutory timeframes. As FNS implemented the finalized regulation, it also issued guidance containing new flexibilities or substantive changes to previously issued guidance. While state and school food authority officials said that some of these changes were likely made by USDA to respond to problems they were having implementing the new lunch requirements, the guidance changes were difficult to keep up with and led to increased confusion about the requirements. Education department officials told us they often used guidance to help the field understand and apply new statutory requirements. Other officials told us that in rare instances, they have issued guidance while a proposed rule is out for comment. They noted that statutory deadlines for implementation may necessitate the issuance of guidance prior to the issuance of a final rule. For example, in 2014 DOL’s ETA issued an Unemployment Insurance Program Letter interpreting statutory provisions on permissible drug testing of Unemployment Insurance applicants. ETA noted that the program letter did not provide interim guidance on the substance of the rule, but interpreted related statutory provisions. The guidance stated that certain provisions related to the regulation could not be implemented until publication of the final regulation. 
ETA officials told us they will issue revised guidance after issuance of the final regulation. (See Unemployment Insurance Program Letter 1-15, Permissible Drug Testing of Certain Unemployment Compensation Applicants Provided for in Title II, Subtitle A of the Middle Class Tax Relief and Job Creation Act of 2012; 79 Fed. Reg. 61,013 (Oct. 9, 2014) (proposed rule).) However, in one instance a website referenced for future guidance was not updated in a timely manner when the regulation was issued as final, which could have caused confusion.

ACF’s Office of Child Care issued recommendations to its grantees that foreshadowed future binding requirements. For example, in September 2011 the Office issued an Information Memorandum recommending criminal background checks. It later published a proposed rule in May 2013 to mandate the recommendations as requirements.

Selected departments considered few of their guidance documents significant as defined by OMB. Education considered a greater number of its guidance documents significant, while USDA and DOL issued relatively few significant guidance documents. We were unable to determine the number of significant guidance documents issued by HHS. As of February 2015, Education listed 139 significant guidance documents on its website, while DOL listed links to 36 pieces of significant guidance on its website. USDA listed links to 34 significant guidance documents on its webpage for significant guidance (see figure 2). All selected components told us that they did not issue any economically significant guidance. OIRA staff told us they accepted departments’ determinations of which types of guidance meet the definition of significant guidance.
However, given the unique circumstances of each program, the selected departments differed in their interpretation of the significance of guidance issued to explain eligibility changes resulting from the Windsor decision, which recognized same-sex marriages. Education deemed its initial guidance explaining eligibility changes for student aid in response to Windsor to be non-significant, but considered later guidance providing clarifications significant. HHS and USDA did not consider corresponding guidance they released to be significant.

Officials at several components told us that, rather than invest resources in OMB review of significant guidance, they would typically decide to propose a regulation, which would allow them to assert binding requirements. Employee Benefits Security Administration officials told us they would ordinarily use a regulation if they were considering a guidance project that would meet the OMB bulletin definition of significant or economically significant guidance. Officials at Labor’s Employment and Training Administration told us they would typically opt to issue a rule rather than guidance if the content was considered significant. OIRA staff told us that OIRA examiners work closely with department officials and may discuss what types of documents warrant OIRA review.

What are the OMB Good Guidance Practices for Approval of Significant Guidance? OMB’s Final Bulletin for Agency Good Guidance Practices requires agencies to develop the following procedures for approval of significant guidance: Agencies should develop or have written procedures for the approval of significant guidance documents. Those procedures shall ensure that the issuance of significant guidance documents is approved by appropriate senior agency officials.

Education and USDA had written procedures for the approval of significant guidance as directed by OMB. However, HHS did not.
While DOL had written approval procedures, they were not available to appropriate officials, and DOL officials noted that they required updating. Education and USDA’s written procedures explained the approval and clearance procedures for significant guidance. DOL officials told us that, although their written procedures were not readily available during our audit, officials had been trained in 2007 on review and approval of significant guidance documents. HHS officials told us that each component tracked guidance development differently and that a lack of written procedures did not mean that guidance did not receive appropriate departmental review. However, without written procedures or wide knowledge of these procedures—a basic internal control standard—HHS and DOL may be unable to ensure that their components consistently follow other requirements of the OMB bulletin, such as required standard elements for significant guidance, and cannot ensure consistency in their processes over time.

As previously discussed, the Administrative Procedure Act does not establish standards for the production of guidance. Therefore, departments and components produce and issue the bulk of guidance—guidance that is considered non-significant—without government-wide standards for those processes. In the absence of government-wide standards for the production of non-significant guidance, officials must rely upon internal controls—which are synonymous with management controls—to ensure that guidance policies, processes, and practices achieve desired results and prevent and detect errors. By incorporating internal control standards, departments and components can promote consistent application of management processes. We identified four selected components of internal control and applied them to agencies’ guidance processes (see table 2 below). Departments and components identified diverse and specific practices that addressed these four components of internal control.
While all departments and components identified standard processes for internal review of their guidance documents, these processes were typically not documented. Further, agencies did not consistently apply other components of internal control. The following sections identify practices that select components have used to address these internal controls, as well as opportunities for broader application of these practices.

Internal controls help to guide departments and components in the guidance production process in a fluid environment where processes and anticipated goals for non-significant guidance vary. Even within components, types of guidance may need to be treated differently or may warrant differing levels of review. A level of standardization of the process that may be appropriate for one component or type of guidance may be inappropriate for another. For information on specific components’ processes for initiation, development, review, dissemination, and tracking and evaluation of guidance, see appendix II.

Agencies identified and discussed risk as they initiated guidance, prioritized among different guidance documents to be developed, and made decisions about the necessary level of review. Although no component can insulate itself completely from risks, it can manage risk by involving management in decisions to initiate guidance, prioritizing among proposed guidance, and determining the appropriate level of review prior to issuance. According to internal control standards, agencies should use techniques and processes to identify and manage risk. Agencies face multiple risks when going through the guidance production process. Risks include legal challenges that issued guidance is asserting binding requirements without having gone through the rulemaking process, or that a guidance document goes beyond the agency’s statutory authority.
In addition, if leadership is not included in discussions related to initiation of guidance, agencies risk expending resources developing guidance that is unnecessary or inadvisable. At a few components, officials told us that leadership (such as component heads and department-level management) decided whether to initiate certain guidance; at these components, guidance did not originate from program staff. Employee Benefits Security Administration office directors presented guidance proposals considering legal, policy, and programmatic factors to their Assistant Secretaries and Deputy Assistant Secretaries for approval to start developing guidance. In most other cases, ideas for additional guidance originated from program staff and field offices or from leadership, depending on the nature of the guidance. Education officials told us that component program staff and leadership worked together to identify issues to address in guidance. USDA Food and Nutrition Service officials told us they may decide to initiate guidance based on both input from regional offices and directives from senior leadership. In a recent example, the Under Secretary for Food, Nutrition, and Consumer Services initiated guidance to remind state agencies of existing requirements prohibiting the online sale of benefits for the Special Supplemental Nutrition Program for Women, Infants, and Children.

Officials at several components indicated that they prioritized the development of certain guidance documents over others to ensure that guidance responded expediently to, for example, constituents with pressing needs to comply with current regulations, and that staff resources were allocated in line with these priorities. Education officials told us they prioritized among possible guidance documents based upon conversations about staffing resources as well as the needs of their constituents.
For example, Education’s Office of Management officials told us they used an ongoing list to prioritize certain guidance documents, considering factors including (1) whether the guidance document explained new statutory requirements, (2) whether it responded to questions from their constituents, (3) the importance of the guidance to other Education programs, or (4) whether the proposed guidance addressed issues identified through technical assistance or calls to their compliance hotline. ACF’s Office of Child Care officials explained that they prioritized guidance in areas that (1) had the largest positive impacts; (2) needed clarification to address questions from grantees, stakeholders, and others; and (3) were most relevant to the program’s rules or requirements.

At most components, officials told us that they determined the appropriate level of review and final clearance of proposed guidance; in many cases guidance was reviewed at a higher level if the document was anticipated to affect other offices or had a particular subject or scope. Risk was one factor agency officials considered when determining that level of review and clearance. At the Employee Benefits Security Administration, for example, the need for department-level clearance depended on various factors. These included likely congressional interest, potential effects on areas regulated by other DOL components, expected media coverage, and whether the guidance was likely to be seen as controversial by constituent groups. Two other factors that a few agencies reported considering in determining whether the guidance warranted a higher level of review were whether it was related to a major priority or would be “impactful.”

Most components did not have written procedures for guidance initiation, development, and review.
Control activities (such as written procedures) help ensure that actions are taken to address risks and enforce management’s directives. In the absence of written procedures, components relied on officials’ understanding of the guidance process, including when certain guidance documents should have been reviewed by leadership and when it was unnecessary to have that review. In these cases, officials told us that the guidance process was well understood by program staff or followed typical management hierarchies. For example, officials at HHS’s Administration for Community Living, DOL’s Office of Labor-Management Standards, and Education’s Office of Special Education and Rehabilitative Services told us program staff have a good understanding of the processes involved in developing and obtaining approval of guidance. Control activities may help components assure that management officials approve of the content of the guidance and concur on the document’s relative priority. Control activities outlined in written procedures can provide a central approach to guidance initiation, development, and review.

A total of 6 of the 25 components had written procedures for the entire guidance production process, and several of these components highlighted benefits of these procedures for their guidance processes. These components included ACF’s Office of Head Start and five components at DOL: the Occupational Safety and Health Administration (OSHA), the Mine Safety and Health Administration, the Employment and Training Administration, the Office of Federal Contract Compliance Programs, and the Bureau of Labor Statistics. Education’s Office of Innovation and Improvement and Office of Elementary and Secondary Education and Labor’s Veterans’ Employment and Training Service had written procedures only for the review and clearance phase.
The Mine Safety and Health Administration’s written procedures contain information that they describe as essential to the effective and consistent administration of the component’s programs and activities. As shown in figure 3, OSHA’s written procedures are designed to ensure that the program director manages the process for a specific policy document by considering feedback and obtaining appropriate concurrence to ensure that guidance incorporates all comments and has been cleared by appropriate officials. The Deputy Assistant Secretary resolves any disagreements about substance, potential policy implications, or assigned priority of the document. Documented procedures are not just an internal control matter; agencies benefit from them. OSHA’s procedures, for example, were meant to ensure effective management of the issuance of guidance documents.

Internal control standards do not prescribe either centralization or decentralization for managing guidance processes, and the departments we reviewed had varied approaches. One department, Education, had centralized processes for guidance development, review, and dissemination, while the other three departments were decentralized. At Education, an Office of the General Counsel official told us she was involved in decisions about whether guidance is considered significant, and Education’s Office of the Executive Secretariat managed the document clearance and approval processes for all guidance. This office also maintained frequently asked questions to explain the process to components. In contrast, officials told us that DOL gave its components the flexibility to develop individual procedures for developing and issuing non-significant guidance. In addition, HHS departmental officials told us they played a secondary review and approval role and that each HHS component approached the development and issuance of guidance documents differently.
USDA officials told us that its guidance process was also decentralized, as guidance was typically initiated, developed, and approved at the program level, while significant guidance was shared with the department for review.

Although a few components had written procedures for guidance initiation, development, and review, officials from all components could describe standard review practices to provide management the opportunity to comment and ensure that its comments were addressed by program staff. For example, the Administration for Community Living had its officials circulate draft guidance for internal review and typically required three to four officials to sign off on the draft, including center directors and its Executive Secretariat. At Education’s Office of Innovation and Improvement, program staff shared draft guidance with senior leadership, who in turn provided feedback. Once senior leadership officials and program staff were satisfied with and approved the document, it was sent to the Office of the Executive Secretariat to be placed into clearance.

Most selected components had guidance practices to ensure intra-agency review, interagency review, or both of guidance documents before issuance. Internal controls require (1) that information is recorded and communicated to management and others, and (2) that components have adequate means of communicating with and obtaining information from external stakeholders that may significantly affect the component’s ability to achieve its goals.

Intra-agency communications. To ensure that management concurrence was recorded, most components we reviewed used communication tools, such as electronic or hard-copy routing slips, to document approval for guidance clearance or to communicate with management and other offices about proposed or upcoming guidance. In particular, officials at 20 components used a routing slip to document management concurrence.
For example, the Mine Safety and Health Administration used two forms to track the clearance of guidance documents. Education’s Office of Management used a routing slip to document internal component approvals and convened a working group to resolve comments and edits on the guidance documents. The two components within ACF used a “policy calendar,” a tool for communicating with management about the guidance documents being drafted and their projected issuance dates. ACF’s policy calendar listed the name and status of guidance, whether it was a presidential or secretarial priority, whether the affected program was mandatory or discretionary, and the proposed date of issuance to alert appropriate ACF officials of upcoming guidance and to facilitate appropriate review.

Interagency communications. Most component officials told us that they conferred with other affected components or federal departments during the development of guidance to ensure its consistency. Officials at Education’s Office for Civil Rights told us they sometimes reached out to other federal agencies and interested stakeholders to have “listening sessions” on new guidance documents, such as a 2013 pamphlet on academic success for pregnant and parenting students. DOL’s Office of Disability Employment Policy officials told us they often worked with multiple departments that addressed disability issues and contributed to the fact sheets or other guidance documents issued by these departments.

External stakeholders. Officials told us that feedback from external nonfederal stakeholders often served as the impetus for the initiation of guidance, and 15 of the 25 selected components cited examples in which they conferred with external nonfederal stakeholders during the guidance development process. At OSHA, external stakeholders were not involved in developing directives or policy issuances, but assisted with developing educational, non-policy guidance, such as hazard alerts.
Food and Nutrition Service officials told us that state and local agency staff, industry representatives, advocacy organizations, and the general public were involved in the development of their guidance, generally through a comment period. For example, during the development of a policy memorandum on vendor management, the Special Supplemental Nutrition Program for Women, Infants, and Children solicited comments from regional offices and all state agencies through a 45-day comment period.

Although some components did not have a formalized process to assess the effectiveness of their guidance, many of these components told us they have updated or revised certain guidance documents. According to internal control standards, agencies benefit from procedures to continually reassess and improve guidance processes and documents to respond to the concerns of regulated entities. In the absence of monitoring and evaluation strategies, components cannot assess whether guidance meets intended goals or whether they need to provide additional guidance to supplement and improve upon prior guidance.

Nearly half of the components we reviewed (11 of the 25) did not regularly evaluate whether issued guidance was effective and up to date. Without a regular review of issued guidance, components can miss the opportunity to improve their guidance. DOL’s Office of Labor-Management Standards officials told us they had not evaluated the relative success of old guidance and did not often revise guidance. ACF’s Office of Child Care regularly tracked and updated guidance on grantee reporting requirements. However, officials said there was little need to track or update other guidance, as it had been 18 years since its authorizing statute was changed. With the recent passage of the Child Care and Development Block Grant reauthorization, though, officials said they intended to assess their old guidance and update it to reflect the new law.
Other components updated guidance in response to specific events rather than through a systematic effort to evaluate guidance. For example, DOL’s Wage and Hour officials updated guidance to reflect new standards during their 2010 regulatory initiative on the temporary agricultural employment of H-2A immigrant workers. (See GAO, Fair Labor Standards Act: The Department of Labor Should Adopt a More Systematic Approach to Developing Its Guidance, GAO-14-69 (Washington, D.C.: Dec. 18, 2013).)

A few selected components had initiated or established a process for tracking and evaluating guidance to identify necessary revisions. For example, in November 2011, DOL’s Office of Federal Contract Compliance Programs officials initiated a 2-year project to review their directives system to ensure that they only posted up-to-date guidance. As a result of the project, in 2012 and 2013 officials identified necessary updates to guidance, clarified superseded guidance, and rescinded guidance where appropriate. Officials told us that these actions reduced the original number of directives by 85 percent and ensured that only relevant and current guidance information was posted on the component’s website. Officials told us they now routinely monitor their directives about once a year and review other guidance documents each time they issue new regulations or change a policy to decide if they need to revise them. The Employment and Training Administration used a checklist to review a list of active guidance documents and identified whether to continue, cancel, or rescind the guidance. In addition, officials indicated which documents were no longer active on their website. The Mine Safety and Health Administration also ensured that programs periodically reviewed and updated guidance documents and canceled certain guidance.

All components told us they relied primarily on their websites to disseminate guidance but also used many other dissemination methods.
As shown in figure 4, the components in our review used various strategies to distribute guidance to the public. While all agencies posted guidance online, a few components also made documents available to specific audiences on intranet websites. For example, USDA’s Food and Nutrition Service officials told us that they posted operational guidance on upcoming or proposed regulations to their PartnerWeb, an intranet site that is only accessible by state agencies. One component, DOL’s Bureau of Labor Statistics, e-mailed guidance to state agencies and posted it on an intranet site for state agencies that is not publicly accessible. Components also relied on other government websites to distribute guidance. For example, Education used ADA.gov to jointly issue guidance related to disability discrimination with the Department of Justice and Stopbullying.gov to publicize guidance related to antibullying laws and policies. Components also designed specialized websites to disseminate guidance on particular topics. For example, DOL’s Office of Disability Employment Policy posted information about disability programs on disability.gov, and Education’s Federal Student Aid used separate websites to serve different audiences.

Almost all components used e-mail as another key dissemination method. Components also used listservs (which manage e-mails to and from a list of subscribers), e-mail delivery services (such as GovDelivery), or newsletters. Officials told us they compiled listservs of individuals interested in specific issues. They also explained that these lists were developed in a number of ways, including by program offices that add interested parties or directly from members of the public who sign up to be on these lists through component websites. These listservs could be very large. For example, DOL’s Employee Benefits Security Administration list has 336,000 subscribers.
Recognizing the importance of listservs as a dissemination method, officials at several components told us they periodically verify and update their e-mail lists. Components also used other methods to disseminate guidance. Some held press conferences or issued press releases, while others distributed and discussed guidance during conferences, webinars, or conference calls. Components also reported using social media, such as Facebook, Twitter, or blogs. A few components told us that they posted guidance in the Federal Register. Lastly, component officials said that external partners—such as states, advocacy groups, and trade associations—sometimes distributed guidance for them at their request. Officials used different strategies to reach certain groups and noted that it was more resource intensive to distribute guidance to a wider audience. For example, officials from HHS’s Administration for Community Living explained that because their subgrantees are defined in statute, they were able to effectively target their guidance to that group. Similarly, Education’s Office for Civil Rights officials had readily available e-mail lists for sending guidance to all public school superintendents or college presidents. DOL Employee Benefits Security Administration officials noted that disseminating guidance to financial institutions was fairly easy because that audience was receptive to receiving information through the component’s website and generally vocal when unable to find the information it was seeking. On the other hand, OSHA officials told us they use social media to communicate with hard-to-reach populations, such as non-English speakers and temporary or contract workers who were more likely to be working in dangerous jobs, and used hard-copy guidance during disaster recovery efforts or to reach those who did not have access to the Internet. 
Officials noted that states and stakeholder groups were helpful in reaching wide audiences, especially when disseminating guidance to large groups nationwide, such as parents or students and all employers or employees. Components also reached wider audiences by engaging with the public directly through conferences, webinars, media outreach, or public awareness campaigns. Our ability to access and find significant and non-significant guidance online varied. We found that Education, USDA, and DOL consistently applied OMB Bulletin requirements for public access and feedback for significant guidance while HHS did not. HHS’s website did not link to significant guidance documents. In addition, we were unable to find these documents by searching the department’s website. HHS officials could not explain why these documents were not posted on its website. Because components rely on their websites to disseminate guidance, it is important that they generally follow requirements and guidelines for online dissemination. For significant guidance, agencies are required by the OMB Bulletin to maintain a current list of their significant guidance on their websites. Agencies must also provide a means for the public to submit comments on significant guidance through their websites. Without providing the public an easy way to access significant guidance, agencies cannot ensure that the public can know about or provide feedback on these documents. Specifically, the OMB Bulletin requires each agency to maintain on its website—or provide a link from the agency’s website to the electronic list posted on a component website—a current list of its significant guidance documents in effect. The list shall include the name of each significant guidance document, any document identification number, and issuance and revision dates. The agency shall provide a link from the current list to each significant guidance document that is in effect. The list shall identify significant guidance documents that have been added, revised, or withdrawn in the past year. 
The OMB Bulletin also requires each agency to provide on its website a means for the public to submit comments electronically on significant guidance documents, and to submit a request electronically for issuance, reconsideration, modification, or rescission of significant guidance documents. Public comments under these procedures are for the benefit of the agency; no formal response to comments by the agency is required.

While the OMB Bulletin does not have requirements for agencies related to the online dissemination of non-significant guidance, there are several resources agencies can use to improve how they post and update those documents. One such resource is the Guidelines for Improving Digital Services developed by the federal Digital Services Advisory Group. These guidelines are aimed at helping federal agencies improve their communications and interactions with customers through websites (see table 3). The departments provided examples of practices in each of the guideline areas:

Governance, policies, and standards: HHS reported using its department-wide governance structure for developing and delivering digital services, and Education established guiding principles to reinforce a governance structure for developing and delivering digital services and managing data.

Cross-agency collaboration and shared services and tools: One department reported using a website to showcase digital strategy best practices and to test new technology and tools.

Technical considerations: One department reported that it was modernizing its technical infrastructure by adhering to business requirements and technical trends, such as increased use of and support for mobile devices.

Usability and accessibility: HHS set a target for all its websites and digital content to become accessible and compliant with Section 508—which requires that federal electronic and information technology be accessible to people with disabilities—by May 31, 2013. As of September 2013, the Administration for Children and Families website was 92 percent compliant and the Administration for Community Living’s website was 99 percent compliant.

Privacy and security: Education reported that the information it collects is protected by the privacy and confidentiality provisions of federal statutes, including the Family Educational Rights and Privacy Act, the Individuals with Disabilities Education Act, the Education Sciences Reform Act, and the Privacy Act of 1974. It has also implemented a disclosure review process and established a review board to ensure that data are reviewed and approved before they are publicly released.

Made guidance easily accessible from component home pages. All components linked key guidance documents on their websites so that guidance could be easily found. We were able to navigate from the homepage to the guidance itself in just a few clicks for all the websites we reviewed. All components also used common terms for guidance—including publications, resources, policy, grant guidance, fact sheets, memorandums, and reports—to help users identify those documents. Components used these terms to create links or menus to facilitate users’ ability to find guidance.

Improved search. We found that most components had search tools on their websites that generally functioned well. Searches are a key way that users access guidance. A number of components had taken steps to improve their website searches. These included adding meta tags to the code on their pages so that the most relevant content appeared higher in the results of external searches and adding a wider range of keywords to internal search engines to improve searches. Components that had not made improvements explained that this was because they used the department’s search tools and did not have the ability to make changes on their own.

Highlighted new or important guidance. Components highlighted new or important guidance on their homepages to draw users’ attention to that information. 
For example, HHS’s Office of Child Care highlighted the passage of the Child Care and Development Block Grant Act of 2014 on its homepage by providing reauthorization resources. The website included key guidance related to the act, including program instructions, technical assistance, and trainings. Posted contact information to allow for questions and feedback from the public. Components used websites to provide contact information to the public. Specifically, components posted toll-free numbers, which could facilitate the public’s ability to ask questions or provide feedback on published guidance. A few components provided direct e-mails or phone numbers for specific offices and key program staff. Opportunities for affected parties and other stakeholders to submit questions and feedback on guidance documents are important because, as discussed above, public interactions have served as the impetus for new guidance. Further, because not all components we reviewed provided examples of taking steps to solicit and respond to public comments as guidance was developed, ensuring effective mechanisms for affected parties and others to submit feedback is crucial. Categorized guidance. Components organized guidance by type, topic, date, or audience to help users sort through the sometimes long lists of guidance posted online, as shown in figure 5. Several factors hindered the ease of access to component guidance online. Components posted long lists of guidance, which could make it difficult for users to find particular guidance documents. In addition, we found that few components effectively distinguished whether their online guidance was current or outdated to ensure the relevance of their online information. As discussed earlier, we found that DOL’s Office of Labor-Management Standards did not update its website in a timely manner to reflect guidance that would be affected by a finalized regulation. Clearly marking whether guidance is current is important. 
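As a minimal illustration of the kind of inventory review that supports this practice, the sketch below flags documents that are no longer in effect so they can be archived or clearly marked rather than listed as current. The inventory structure, document titles, and status values are illustrative assumptions, not any component’s actual tracking system.

```python
# Hypothetical sketch of a guidance-inventory review that separates documents
# still in effect from those that are superseded or rescinded. All field names,
# titles, and statuses below are assumptions for illustration.
from datetime import date

inventory = [
    {"title": "Program Instruction 2014-01", "status": "current", "issued": date(2014, 3, 1)},
    {"title": "Policy Memo 2009-07", "status": "superseded", "issued": date(2009, 6, 15)},
    {"title": "Fact Sheet: Reporting", "status": "rescinded", "issued": date(2011, 1, 5)},
]

def current_documents(entries):
    """Return only guidance still in effect, suitable for listing as current."""
    return [e for e in entries if e["status"] == "current"]

def needs_review(entries):
    """Return titles of superseded or rescinded documents that should be
    archived or clearly marked rather than presented as current guidance."""
    return [e["title"] for e in entries if e["status"] != "current"]
```

Run periodically, such a check supports the kind of directives review described above, in which one component removed or marked the bulk of its outdated documents.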
As previously discussed, the efforts of DOL’s Office of Federal Contract Compliance Programs to ensure the relevancy and currency of its directives system resulted in the removal of 85 percent of its documents. Ensuring that online content is accurate and relevant is one of the guidelines for federal digital services (see table 3 above). Easy access to current and relevant guidance could also facilitate opportunities for affected parties and stakeholders to provide feedback on those documents. Another factor that hindered public access was that it was not always clear where to find guidance on a component website. We found guidance was sometimes dispersed across multiple pages within a website, which could make guidance hard to find and could contribute to user confusion. The labeling of these links was not distinctive enough for users to know where to go for the various guidance documents or topics they may be seeking (see figure 6). These issues could be a result of the requirement that Education components use a departmental template for their websites. A few components created navigational links to supplement departmental toolbars. Education officials told us that they have learned from their grantees that the department’s guidance was not easy to find and that online resources were hard to navigate. Federal digital services guidelines direct agencies to publish digital information so that it is easy to find and access (see table 3 above). While components used web metrics to evaluate their online guidance dissemination strategies, many did not use that information to change their existing approach. Further, many component officials told us that they did not have a systematic way to evaluate whether the public could access their guidance online. 
Web and customer satisfaction metrics—data that allow agencies to measure performance, customer satisfaction, and engagement to make continuous improvements to serve their customers—could be a good source of this information. For example, web metrics can inform officials about which guidance is being accessed and searched. Similarly, customer satisfaction metrics could provide qualitative information about how easily users were able to find the guidance they were seeking. The thoughtful analysis and application of these data would allow components to regularly evaluate the effectiveness of disseminating guidance through their websites (see table 3 above). Further, internal controls call for the continual monitoring of results and for management to take proper actions in response to findings. All components collected web metrics. These data could help agencies evaluate their online products, which is a guideline for federal digital services (see table 3 above). Every department in our review used Google Analytics to collect website performance data. Many components also used web metrics to evaluate the effectiveness of how they disseminated guidance online. For example, USDA’s Food and Nutrition Service used web metrics to track overall use of its guidance online. In another example, HHS’s Administration for Community Living learned, through its evaluation of web metrics, about ways to drive traffic to its website when new guidance is posted. However, many components did not use web metrics to improve how they disseminated guidance through their websites (see table 4). Only 8 of the 24 components that made guidance publicly available online reported using web metrics to improve how they used websites to disseminate these documents. Because all components we studied primarily relied on their websites to disseminate guidance, there is an opportunity for them to build on their use of web metrics to improve how they disseminate guidance online. 
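The kind of basic pageview analysis described above can be sketched in a few lines. The CSV layout, field names, and page paths below are illustrative assumptions, not any component’s actual analytics export.

```python
# Hypothetical sketch: ranking guidance pages by pageviews from an exported
# metrics file to see which documents are most (and least) accessed.
import csv
import io
from collections import Counter

SAMPLE_EXPORT = """page,pageviews
/guidance/wic-eligibility.pdf,5400
/guidance/school-meals-faq.html,1200
/guidance/wic-eligibility.pdf,600
/guidance/retired-2009-memo.pdf,15
"""

def rank_guidance_pages(export_text):
    """Sum pageviews per guidance page and return (page, views) pairs
    sorted from most to least viewed."""
    totals = Counter()
    for row in csv.DictReader(io.StringIO(export_text)):
        totals[row["page"]] += int(row["pageviews"])
    return totals.most_common()

ranked = rank_guidance_pages(SAMPLE_EXPORT)
# Pages at the bottom of the ranking are candidates for review: they may be
# outdated, superseded, or simply hard to find from the component's homepage.
least_viewed = ranked[-1][0]
```

Customer satisfaction metrics would complement such counts with qualitative signals about whether users actually found what they sought.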
Doing so will also facilitate components’ efforts to evaluate their online guidance, which is also a guideline for federal digital services (see table 3 above). Components found ways to evaluate the effectiveness of guidance dissemination outside their websites. This included seeing how many applications had been viewed, downloaded, or submitted through grants.gov, and how many e-mails and newsletters were opened through GovDelivery. Components also conducted usability tests, focus groups, and surveys of users, adhering to the federal guideline calling for agencies to collect and address customer feedback (see table 3 above). Some components convened internal task teams to identify and implement changes, while others hired contractors to redesign their websites. Further, officials reported receiving useful feedback directly from the public at conferences, webinars, stakeholder and grantee meetings, or from monitoring visits. For example, Education’s Federal Student Aid officials changed their search function to allow guidance to be searched by relevance and date based on feedback received during training with outreach groups. Additionally, OSHA officials are surveying subscribers of the agency’s biweekly e-newsletter to seek feedback and improvements so that they can provide useful, educational, and up-to-date information to the public. Components offered several examples of using web metrics to improve online dissemination:

One Education office learned through the use of metrics that many users sought information about states and reorganized its website to make that information more prominent and easier to locate.

Another component updated its pages to facilitate user access to guidance related to lead because web metrics indicated that users were searching the topic. The component also reconfigured its publications page based on the number of downloads, and used metrics to guide how often to reprint guidance documents and which ones to translate.

Another component used metrics to determine how long to keep guidance on its homepages and provided navigational links to reflect the different ways visitors search its website.

One component used metrics to populate its list of trending topics and most requested pages. It also used metrics on high-traffic pages to inform decisions about where to post new guidance, and based decisions about which materials to translate into other languages and whether more guidance was needed on certain topics on web metrics.

Guidance documents are an important tool that agencies use to communicate timely information about the implementation of regulatory and grant programs to regulated parties, grantees, and the general public. Guidance documents also provide agencies valuable flexibility to clarify their requirements and policies, and to address new issues and circumstances more quickly than may be possible using rulemaking. However, agencies must also exercise diligence when using guidance. Although guidance documents are not legally binding, they can affect the actions of agencies’ staffs, stakeholders, and other interested parties, because guidance articulates agencies’ interpretations and policy choices. The potential effects of these documents—and the risks of legal challenges to agencies—underscore the need for consistent and well-understood processes for the development, review, dissemination, and evaluation of guidance. We found mixed compliance with requirements established by the Office of Management and Budget’s (OMB) Final Bulletin for Agency Good Guidance Practices (OMB Bulletin) for the subset of guidance documents considered “significant” under OMB’s definition. Education and USDA had written departmental procedures for the approval of significant guidance, as directed by the OMB Bulletin. 
DOL officials had not made their procedures available to component staff to ensure consistent application of review processes for significant guidance, and those procedures required updating. HHS had no procedures for significant guidance approval. Though officials from both departments told us that they believed their components understood the OMB requirements, HHS and DOL could better ensure that their components consistently followed OMB’s requirements for significant guidance if they made departmental written procedures available. Education, USDA, and DOL consistently applied other OMB Bulletin requirements on public access and feedback for significant guidance, but HHS did not. HHS did not explain why the department had not posted its significant guidance online. HHS should ensure that the public can easily access and provide feedback on its significant guidance, as required by OMB. Without providing an easy way to access and comment on significant guidance, HHS cannot ensure that the public is aware of or can provide feedback on these documents. Government-wide guidance that specifically addresses processes for non-significant guidance does not exist. Non-significant guidance accounts for the bulk of components’ guidance documents. In the absence of government-wide guidance specifically targeted at non-significant guidance, internal control principles and standards provide the key criteria for components to apply to their policies and procedures. Component officials identified many practices that they use to address internal control standards regarding risk assessment, control activities, communication, and monitoring. In particular, officials at most components told us that they determine the appropriate level of review and final clearance of proposed guidance documents. However, the components less consistently identified practices to address other elements of internal controls. 
For example, though all components could describe standard practices for developing guidance, only 6 of the 25 components had written procedures for the entire process, and another 3 had written procedures only for the review and clearance phase. Written procedures could help components define management roles in decisions to initiate development of guidance documents, prioritize among them, and determine their appropriate level of review to manage risk. Further, not all components documented approval for guidance clearance, and nearly half of them did not regularly evaluate whether issued guidance was effective and up to date. Opportunities exist for components to strengthen their internal controls. For example, components could adopt practices that others already use and have found to be an effective use of resources. Wider adoption of these practices could better ensure that components have internal controls in place to promote quality and consistency of their guidance development processes. To be effective, guidance documents must also be accessible to their intended audiences. The departments and components primarily relied on their websites to disseminate guidance. Consequently, components’ application of relevant federal guidance and best practices for web dissemination is particularly important for ensuring that the intended audiences can access and are aware of these documents. Certain component websites for disseminating guidance were easy to use—for example, because guidance was well organized or clearly marked—but others were hard to navigate or did not effectively distinguish between current and outdated guidance. Further, components did not always leverage the web and customer satisfaction metrics they collected to evaluate their guidance and its dissemination. By more consistently analyzing the metrics they already collect, components could better ensure that their online guidance is easy to access, accurate, and relevant. 
Ensuring effective mechanisms for affected parties and other stakeholders to submit feedback on guidance documents is also crucial. Opportunities for feedback on issued guidance are important, not only because public comments and questions are often the impetus for components initiating new or revised guidance, but also because components we reviewed did not consistently take steps to confer with external stakeholders while guidance was being drafted and reviewed. To better ensure adherence to requirements for approval of, and public access to and feedback on, significant guidance in accordance with OMB’s Final Bulletin for Agency Good Guidance Practices (M-07-07), we recommend that the Secretary of HHS take the following two actions: 1. Develop written procedures for the approval of significant guidance documents. 2. Ensure that the department’s significant guidance is accessible online and that the public can provide comments on significant guidance documents. To better ensure adherence to requirements for approval of significant guidance in accordance with OMB’s Final Bulletin for Agency Good Guidance Practices (M-07-07), we recommend that the Secretary of Labor take the following action: 1. Review and update the department’s written procedures for approval of significant guidance and make them available to appropriate component staff. To improve agencies’ guidance development, review, evaluation, and dissemination processes for non-significant guidance, we recommend that the Secretaries of USDA, HHS, DOL, and Education take the following two actions: 1. 
Strengthen their selected components’ application of internal controls to guidance processes by adopting, as appropriate, practices developed by other departments and components, such as assessment of risk; written procedures and tools to promote the consistent implementation and communication of management directives; and ongoing monitoring efforts to ensure that guidance is being issued appropriately and has the intended effect. Examples of practices that could be adopted more widely include written procedures for guidance production to, among other things, clearly define management roles; improved communication tools, such as routing slips to document management review; and consistent and ongoing monitoring to determine if guidance is being accessed and having the intended effect. 2. Improve the usability of selected component websites to ensure that the public can easily find, access, and comment on online guidance. These improvements could be informed by the web and customer satisfaction metrics that components have collected on their websites. Some examples of changes that could facilitate public access to online guidance include improving website usability by clarifying which links contain guidance; highlighting new or important guidance; and ensuring that posted guidance is current. We provided a draft of this report to the Secretaries of Agriculture, Education, Health and Human Services, and Labor. We received written comments from Education, HHS, and DOL, which are reprinted in appendixes III, IV, and V, respectively. USDA provided oral comments. In addition, Education, DOL, and USDA provided technical comments, which we incorporated as appropriate. We also shared a copy of the report with the Office of Management and Budget, and incorporated its technical comments as appropriate. Education concurred with our recommendations. 
Education stated that, although it believes that its internal controls for developing and producing guidance are effective and that its online guidance can be easily accessed by the public, it is committed to continuously looking for opportunities to improve its processes. Education stated that it will review components’ procedures for guidance development and production and develop and provide to its components standard protocols they can use to clarify management roles, document management review and approval of guidance, and review posted guidance to ensure it is current and accessible to the public. In addition, Education will review the presentation of guidance on Education’s and its components’ websites and identify best practices to improve the online presentation and accessibility of guidance documents. HHS concurred with our recommendations. While HHS pointed to its established practices for developing and internally reviewing significant guidance, it stated that it would explore the best mechanism for distributing written procedures for approval of these documents. HHS noted that it regularly engages with the public and regulatory stakeholders to receive feedback and distributes its guidance in accordance with this feedback, but will work with its agencies to update links to published guidance and explore ways to make published guidance easier to find on HHS webpages. In response to our recommendation on internal controls, HHS stated that it will continue to work with its subagencies to share best practices and ensure that agency practices are aligned with departmental standards. HHS concurred with our recommendation on improving website usability and stated that it will review current links to guidance documents and explore ways to enhance their visibility and usability. Labor concurred with our recommendations. 
Labor stated that it will update the department’s written procedures for the approval of significant guidance, disseminate them to component agencies, and ensure they are easily accessible. In addition, Labor stated that it will work with component agencies to share best practices and promote more consistent application of internal control standards in the guidance production process, and will encourage agencies to consider website improvements and make better use of web metrics to ensure access to and public comment on guidance. On March 19, 2015, USDA officials representing the Food and Nutrition Service and the Department’s Office of Budget and Program Analysis provided oral comments on the report. USDA generally concurred with our recommendations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Agriculture, Education, Health and Human Services, and Labor, and other interested parties. We are also sending copies of this report to the appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. You asked us to examine guidance processes at the four departments under your jurisdiction. 
For this report, we reviewed how the Departments of Agriculture, Education, Health and Human Services, and Labor and selected subagencies or components at these four departments (1) use guidance and the processes and criteria they use to determine whether to issue guidance or undertake rulemaking; (2) follow applicable criteria and leading practices in their policies, procedures, and practices for producing guidance; and (3) ensure they follow dissemination requirements and facilitate end users’ access to and comment on documents. We limited our review to guidance agencies provide to external parties. The scope of our review included the 25 selected subagencies, or components, in the four selected departments that (1) were within the requesting committee’s jurisdiction and (2) engaged in regulatory or grantmaking activities, as components engaged in these activities were likely to issue guidance interpreting regulations or other requirements to external parties (in contrast to agencies that issue only informational guidance or other resources). To identify these components, we searched the Federal Register and the Catalog of Federal Domestic Assistance to confirm regulatory and grantmaking activities. We confirmed the resulting list of identified components with department officials. See table 5 below for the full list of audited department components. We used computer-processed data solely to confirm that the departments and components within our scope engaged in regulatory or grantmaking activities; we determined that these data were not expected to materially affect our findings, conclusions, or recommendations, making a data reliability assessment unnecessary. To describe how selected departments and components used guidance and the processes and criteria they used to determine whether to issue guidance or undertake rulemaking, we reviewed agency written procedures, guidance documents, and websites. 
We also interviewed department and component officials on guidance practices. To identify themes and examples from our documentary and testimonial evidence for all objectives, we analyzed information from relevant documents and interviews to identify and confirm common patterns as well as differences across selected agencies. To evaluate whether agency policies, procedures, and practices for producing guidance followed applicable criteria and leading practices, we assessed the extent to which agencies adhered to requirements for written procedures for approval of significant guidance under the Office of Management and Budget’s (OMB) Final Bulletin for Agency Good Guidance Practices (OMB Bulletin). To do so, we reviewed applicable agency written procedures and agency websites and spoke to officials about their practices for development and review of significant guidance. (See Office of Management and Budget, Final Bulletin for Agency Good Guidance Practices, 72 Fed. Reg. 3432 (Jan. 25, 2007).) Internal controls related to authority, addressed in our discussion of the level of review of guidance, are reflected in our application of the other internal controls. To ensure we applied selected internal controls to guidance processes appropriately, we reviewed applicable literature and spoke to OMB staff and to legal scholars identified through their published work on the subject. In addition, we spoke with officials at the Food and Drug Administration (FDA) to gain a better understanding of their related statutory requirements for guidance. We spoke with FDA because certain provisions of the OMB Bulletin were informed by written FDA practices for the initiation, development, issuance, and use of its guidance documents. 
To evaluate guidance dissemination strategies, we assessed the extent to which departments adhered to OMB’s public feedback and comment requirements; reviewed agency websites and digital government strategy reports; evaluated written statements from officials on components’ use of web and customer satisfaction metrics; and interviewed relevant agency officials. We used the Guidelines for Improving Digital Services, developed under the President’s digital government strategy, to assess the usability of component websites for accessing guidance documents. We conducted this performance audit from March 2014 to April 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix describes guidance processes at the 25 components we reviewed. Agency profiles include the following information:

Component Mission. In almost all cases we used information from the 2013 United States Government Manual.

Target Audience and Dissemination Methods. Component officials identified their intended audience and dissemination methods. We did not corroborate this testimonial evidence.

Yes/No Questions. We asked the agency for evidence of these practices and reported on our analysis based on this support.

Agency Use of Guidance. We relied on component officials to explain the types of guidance they use most often and how they use these types of guidance. In most cases, we corroborated this by reviewing agency websites.

Guidance Processes (Initiation, Development, Review, Dissemination, and Feedback on Guidance and Dissemination). 
We provided information from component interviews about processes for initiation, development, review, dissemination, and tracking and evaluation. We did not corroborate this testimonial evidence. In cases in which components referred to written procedures, tracking sheets, or formalized review processes, we requested and reviewed relevant documentation.

Highlighted Practices. In certain cases, we presented practices identified by components that (1) applied or strengthened identified internal controls and (2) could be helpful to other components if similarly adopted. These practices are not exhaustive, but rather are meant to illustrate useful practices.

Food and Nutrition Service (FNS)

Overview

What FNS Does: FNS administers the Department of Agriculture's (USDA) domestic food assistance programs.

Target Audiences: Agency officials told us the guidance issued by FNS was targeted to state agencies, local agencies, and other partner organizations to assist them with policy implementation and compliance with regulations.

Agency Use of Guidance

3. General Guidance: Covered such topics as eligibility, program management, breastfeeding, and nutrition.

Initiation, Development, Review, and Dissemination

Initiation: FNS officials told us that guidance was typically initiated and drafted at headquarters by program staff. Guidance was often used to interpret regulations. Officials at the FNS Special Supplemental Nutrition Program for Women, Infants and Children (WIC) told us that they issued guidance in response to the release of new regulatory and legislative provisions or regulations issued by another agency that may have affected WIC programs. They also issued guidance in response to input from FNS regional offices; questions and other input from WIC state agencies seeking to clarify requirements and expectations for program operations; directives from senior management; and Inspector General or GAO audits.
Dissemination methods: FNS website, hard copy (pre-2004 guidance), PartnerWeb internal website, and e-mails to regional offices.

Development and Review: FNS officials told us that after guidance was drafted, and depending on the nature of the guidance, agencies other than FNS may be consulted or contacted as appropriate. Guidance was cleared by the Deputy Administrator for Supplemental Nutrition and Safety Programs and the Office of General Counsel. FNS obtained input from General Counsel to ensure that it did not overstep its legal authority as provided under the program's governing legislation. Guidance was cleared by the Deputy Administrator, the Administrator, or the Undersecretary if it contained sensitive issues of interest to management or had broader implications. Guidance documents that contained specific, extensive requirements for state agencies administering the relevant program went through the same review process as for rulemaking, going first to the division director and then to the Chief of the Division of Planning and Regulatory Affairs for clearance. The clearance process was tracked within the agency using an e-routing system. FNS used a routing slip that accompanied all guidance documents for clearance through the Administrator's Office.

Dissemination: Guidance was posted on the FNS website. Afterwards, the officials notified state offices. FNS WIC officials told us that sometimes guidance documents were e-mailed to offices that did not have consistent access to the website, such as the Indian Health Service offices. FNS also sent out notices if information published in the Federal Register affected state program implementation.

Feedback on Guidance and Dissemination: USDA officials told us that because their programs were high profile, they received many public comments. They added that there was no routine process for reviewing public comments on guidance received through USDA's website.
FNS also received feedback from state and local agencies through its regional offices, which had a program-specific point of contact for the agencies. FNS officials told us that going through the regional offices was the most effective way to receive feedback from state agencies (i.e., the end users), as the regional offices worked with the states on day-to-day technical assistance, monitoring, and evaluation. State agencies contacted their regional offices when they had difficulty accessing the website or had questions about issued guidance and policies. FNS officials told us this type of feedback served as a reference point for future documents. FNS also occasionally issued optional surveys or asked regional offices to poll state agencies.

Examples of Recent Guidance-Related GAO Reports

Supplemental Nutrition Assistance Program: Enhanced Detection Tools and Reporting Could Improve Efforts to Combat Recipient Fraud. GAO-14-641. Washington, D.C.: August 21, 2014.

School-Meals Programs: USDA Has Enhanced Controls, but Additional Verification Could Help Ensure Legitimate Program Access. GAO-14-262. Washington, D.C.: May 15, 2014.

School Lunch: Implementing Nutrition Changes Was Challenging and Clarification of Oversight Requirements Is Needed. GAO-14-104. Washington, D.C.: January 28, 2014.

School Lunch: Modifications Needed to Some of the New Nutrition Standards. GAO-13-708T. Washington, D.C.: June 27, 2013.

WIC Program: Improved Oversight of Income Eligibility Determination Needed. GAO-13-290. Washington, D.C.: February 28, 2013.

Office for Civil Rights (OCR)

Overview

What OCR Does: OCR's mission is to ensure equal access to education and to promote educational excellence through enforcement of civil rights. It serves student populations facing discrimination and the advocates and institutions promoting systemic solutions to civil rights problems. OCR also provides technical assistance to help institutions achieve voluntary compliance with the civil rights laws that OCR enforces.

Agency Use of Guidance

1.
Dear Colleague Letters and Frequently Asked Questions: Covered topics related to OCR-enforced regulations.

2. Pamphlets: Offered general informational guidance directed toward a wider audience, typically students and parents.

Target Audiences: School officials, parents, students, and the general public.

Initiation, Development, Review, and Dissemination

Initiation: OCR relied on field offices, other programs, and the public for ideas for new guidance. OCR used its Program Legal Group (PLG) to collect and assess the need for new guidance. PLG submitted guidance ideas to OCR leadership for approval. Factors considered in determining whether to issue guidance included (1) the number of people who would be affected; (2) the need for guidance; and (3) how resource intensive guidance development would be, such as the availability of technical expertise or the amount of collaboration needed.

Development and Review: OCR informally involved external stakeholders, such as associations representing educational institutions and officials, students, and civil rights advocates, and has used listening sessions to get their views on draft guidance. However, it did not share drafts externally until they were finalized. OCR followed the department's clearance processes for all of its guidance.

Review decisions documented? Yes.

Dissemination methods: Website, e-mails, listservs, conferences, webinars, press conferences/releases, social media, and external partners.

Dissemination: OCR disseminated guidance through its website, press releases, e-mail listservs, and social media. OCR scheduled calls with press and stakeholders as necessary to announce the public release of guidance. For guidance documents written for school officials, OCR created and distributed a less technical fact sheet describing the guidance for parents and school staff. OCR staff also disseminated and answered questions about guidance during conferences, webinars, and through the provision of technical assistance.
Feedback on Guidance and Dissemination: OCR officials monitored (1) statistics on the number of clicks on e-mailed links, (2) the number of website hits, and (3) mentions of guidance in the news and specialized publications. OCR also distributed feedback forms on its guidance after technical assistance sessions and webinars.

Examples of Recent Guidance-Related GAO Reports

Child Welfare: Federal Agencies Can Better Support State Efforts to Prevent and Respond to Sexual Abuse by School Personnel. GAO-14-42. Washington, D.C.: January 27, 2014.

Charter Schools: Additional Federal Attention Needed to Help Protect Access for Students with Disabilities. GAO-12-543. Washington, D.C.: June 7, 2012.

Students with Disabilities: More Information and Guidance Could Improve Opportunities in Physical Education and Athletics. GAO-10-519. Washington, D.C.: June 23, 2010.

Office of Career, Technical, and Adult Education (OCTAE)

Overview

What OCTAE Does: OCTAE administers and coordinates programs related to adult education and literacy, career and technical education, and community colleges that enable adults to acquire the basic literacy skills necessary to function in today's society. OCTAE also helps students acquire academic and technical skills and prepare for high-skill, high-wage, and high-demand occupations in the 21st-century global economy. OCTAE provides national leadership and works to strengthen the role of community colleges in expanding access to postsecondary education for youth and adults and in advancing workforce development.

Agency Use of Guidance

1. Frequently Asked Questions: Guidance relevant to OCTAE's two formula grant programs.

2. State Plan Guidance: Information provided to states on the requirements for plans that must be submitted before a state can access federal funding under certain programs.

3.
Reports and Resources: Fact sheets that highlight efforts currently underway that support OCTAE's top priorities, and reports to Congress on state performance under the Carl D. Perkins Vocational and Technical Education Act and the Adult Education and Family Literacy Act. OCTAE also published research and data.

Target Audiences: The primary audience for guidance is states that are grantees of OCTAE's formula and discretionary grant programs.

Development and Review: OCTAE used the same departmental process for significant and non-significant guidance. Program officials consulted with policy staff, the Office of the General Counsel (OGC), and the Office of Planning, Evaluation, and Policy Development on guidance development. The draft guidance then went to OCTAE's executive officer, the Chief of Staff, and the Deputy Assistant Secretary for review. The Office of the Assistant Secretary cleared the guidance and consulted with the OGC to determine if the guidance was significant.

Review decisions documented? Yes.

Dissemination: OCTAE sent guidance directly to state directors via OCTAE's listserv. It also posted guidance to OCTAE's website and its blog, and included links to new guidance in its newsletter.

Feedback on Guidance and Dissemination: OCTAE officials tracked and evaluated users' awareness and understanding of existing guidance through interactions with grantees, including monitoring visits, regular telephone communications, audits, and meetings and conferences.

Office of Elementary and Secondary Education (OESE)

Overview

What OESE Does: OESE directs, coordinates, and formulates policy relating to early childhood, elementary, and secondary education. OESE also focuses on supporting states, school districts, and schools in improving K–12 education; providing children with language and cognitive development, early reading, and other readiness skills; and improving the quality of teachers and other instructional staff.
Agency Use of Guidance

1. Frequently Asked Questions (FAQ): Issued to states and school districts to answer questions spanning multiple programs.

2. Dear Colleague Letters: Addressed issues concerning a particular program or multiple programs and outlined new policies.

Target Audiences: State and local educational agencies, school administrators, teachers and support staff, parents, and the general public.

Initiation, Development, Review, and Dissemination

Initiation: OESE officials said that they often produced guidance in response to grantee inquiries or to questions from stakeholders. Further, OESE received feedback from quarterly meetings with grantees and technical advisors that resulted in the development of guidance.

Development and Review: OESE officials told us that they followed the departmental guidance on determining whether guidance should be categorized as significant or non-significant. Officials said that they formed a working group to draft and review significant guidance that included the Office of the General Counsel and staff from OESE and other components. OESE subcomponents were responsible for drafting non-significant guidance. From there, the review and approval process differed depending on the type and content of the guidance. OESE developed an internal guide to advise staff on who should approve different types of guidance and the time frames required for that review. See the highlighted practice section below for more information.

Dissemination methods: OESE website, e-mails, webinars, and newsletters.

Dissemination: Officials said that OESE typically posted guidance on its website and issued a message from either the Assistant Secretary or the Director of the issuing office to grantees or state contacts stating that the new guidance was available. OESE also held webinars and frequently communicated with national associations to provide information about the guidance.
In addition, OESE highlighted new guidance in the department's "ED Review" newsletter. Finally, OESE typically held quarterly calls with grantees and technical assistance advisors, who suggested possible improvements to guidance on grant program implementation.

Feedback on Guidance and Dissemination: Officials noted that OESE did not formally track how it disseminated guidance, although some offices within the component may have had their own tracking systems. OESE officials said that they met with national associations each month to discuss clarifications and technical issues. OESE sought feedback from grantees about their experiences with receiving guidance and used those responses to improve dissemination strategies. OESE officials also noted that they received frequent and helpful feedback on guidance directly from the public through e-mails and phone calls; a point of contact was identified on each piece of guidance to whom feedback could be submitted. Additionally, OESE staff regularly informed departmental leadership and policy officials of the need to revise guidance when gaps and inconsistencies were identified.

Highlighted Practice: In addition to using a routing slip to track the clearance of draft guidance, OESE developed an internal guide to advise staff on who should approve different types of guidance and the time frames required for that review. For example, this document calls for a 3-to-5 business day window for the Office of the Secretary to approve nonregulatory guidance and FAQs. There is also an internal contact identified who is responsible for coordinating clearances within the Office of the Assistant Secretary.

Federal Student Aid (FSA)

Overview

What FSA Does: FSA partners with postsecondary schools and financial institutions to deliver programs and services that help students finance their education beyond high school.
This includes administering postsecondary student financial assistance programs authorized under Title IV of the Higher Education Act of 1965, as amended.

Agency Use of Guidance

1. Electronic Announcements: Announced administrative information.

2. Operational and Policy Clarifications: Clarified issues such as reporting requirements.

Target Audiences: The primary audience for FSA guidance is students, parents, borrowers, financial institutions, counselors, mentors, schools, and institutions of higher education that disburse Direct Loans and other federal aid authorized under Title IV. Other audiences include postsecondary education associations and interest groups, lenders and guarantors, federal and state agencies, legal rights advocates, and Title IV program partners.

Initiation, Development, Review, and Dissemination

Initiation: FSA officials told us that questions from schools, program partners, and the public often served as the impetus for FSA guidance. FSA officials met with Office of Postsecondary Education (OPE) staff to discuss how to address questions that had been raised and whether it was appropriate to issue guidance. FSA took the lead on developing guidance that was operational in nature, while OPE took the lead if the guidance was policy related. Guidance was typically developed in a collaborative manner and often included Education's Office of the General Counsel.

Development and Review: While operational guidance may be signed by managers, officials told us that any guidance of importance was reviewed at all levels. FSA officials told us they did not generally work with external stakeholders during the development of electronic announcements.

Dissemination: According to FSA officials, FSA's primary mechanism for disseminating guidance was the Information for Financial Aid Professionals website at http://www.ifap.ed.gov/ifap/index.jsp. Officials said all finalized guidance was posted on this website, which featured a "What's New" page on which guidance issued in the past 14 days was posted.
FSA also used e-mail blasts and newsletters to send out new guidance and disseminated guidance during trainings and webinars.

Dissemination methods: Websites, e-mail blasts, webinars, and newsletters.

Feedback on Guidance and Dissemination: According to FSA officials, FSA managers and staff have considerable interaction with program participants, program partners, and recipients, and FSA receives a significant amount of feedback informally. Feedback also came through FSA customer care centers.

Examples of Recent Guidance-Related GAO Reports

Managing for Results: Selected Agencies Need to Take Additional Efforts to Improve Customer Service. GAO-15-84. Washington, D.C.: October 24, 2014.

Department of Education: Improved Oversight and Controls Could Help Education Better Respond to Evolving Priorities. GAO-11-194. Washington, D.C.: February 10, 2011.

Higher Education: Stronger Federal Oversight Needed to Enforce Ban on Incentive Payments to School Recruiters. GAO-11-10. Washington, D.C.: October 7, 2010.

Office of Innovation and Improvement (OII)

Overview

What OII Does: OII oversees competitive grant programs that support innovations in the educational system and disseminates lessons learned. OII administers, coordinates, and recommends programs and policy for improving the quality of activities designed to support and test innovations throughout the K–12 system in areas such as parental choice, teacher quality, use of technology in education, and arts in education.

Agency Use of Guidance

1. Policy Guidance and Policy Letters: Intended to help state and local educational agencies, educational service agencies, consortia of these agencies, nonprofit organizations, or institutions of higher education that receive federal financial assistance to fulfill their obligations under specific federal laws and regulations.

2.
Publications: Included guides, booklets, fact sheets, and brochures on a variety of topics, such as innovative school models, promising practices, school choice, private education, and supplemental educational services.

3. Frequently Asked Questions (FAQ): Related to grant competitions and new statutory requirements.

Target Audiences: The audience for OII guidance includes parents, administrators, teachers, students, and grantees/applicants.

Initiation, Development, Review, and Dissemination

Initiation: OII officials told us that grant competitions were the impetus for most guidance it issued. In addition, questions from external stakeholders led OII officials to initiate new guidance or use FAQs to quickly respond to questions. When deciding whether to issue a rule or guidance, officials noted working with the Department of Education's (Education) Office of the General Counsel (GC); if OII made legally binding programmatic changes, officials would use the rulemaking process. Officials also stated that they worked with GC on all guidance to ensure they were providing clear and accurate information to their grantees and to the field.

Dissemination methods: Website, e-mail blasts, listserv announcements, and press releases.

Used web metrics to evaluate online guidance dissemination? No.

Development and Review: Program staff were responsible for drafting guidance in consultation with GC and budget officers. Drafts were shared with the office director and submitted to the OII Executive Office, where they were reviewed for clarity and consistency with other initiatives. The draft guidance was then reviewed by the Assistant Deputy Secretary, Associate Assistant Deputy Secretary, and Executive Officer, who then provided feedback to program staff to make required revisions. OII officials said that they did not use a routing slip to document concurrence on draft guidance.
Once approved, the guidance was sent to the Office of the Executive Secretariat in the Office of the Secretary, where it was finalized (see figure 7). This process was also used for the development and approval of significant guidance. For OII's significant guidance, the Office of the Executive Secretariat sought reviews from relevant Education components, and GC then forwarded it to OMB for review. Officials told us external stakeholders were not involved in commenting on drafts before issuance.

Dissemination: OII officials told us that they posted all guidance on their website and notified the public about new guidance in a number of ways. For example, OII sent out links to the guidance to its listserv subscribers and used e-mail blasts to inform interested parties of the issuance of the guidance.

Office of Management (OM)

Agency Use of Guidance

1. Privacy Technical Assistance (TA): Provided TA through responses to written inquiries, e-mails, and telephone calls on the Family Educational Rights and Privacy Act (FERPA), the Protection of Pupil Rights Amendment (PPRA), and the military recruiter provisions of the Elementary and Secondary Education Act of 1965, as amended (ESEA).

2. Dear Colleague Letters: Addressed topics related to privacy matters.

3. Frequently Asked Questions: Answered questions related to privacy matters.

Target Audiences: School officials and parents.

Initiation, Development, Review, and Dissemination

Initiation: OM created new guidance based on feedback or questions from the field, and addressed statutory and regulatory amendments made to the laws it administers. OM held weekly meetings to discuss complex inquiries received from the field and to address the need for any new guidance.

Review decisions documented? Yes.

Dissemination methods: Website, listservs, webinars, and conferences.
Development and Review: OM officials told us that they followed departmental guidance on determining whether guidance should be categorized as significant or non-significant. OM developed guidance documents on FERPA, PPRA, and ESEA military recruiter provisions. A working group provided input into the draft guidance documents and recommended the appropriate level of departmental review. This group included officials from OM, the Office of the General Counsel (OGC), and other department program offices. OM reached out to other federal agencies when necessary. For example, OM worked with the Department of Justice on juvenile justice issues and with the Department of Health and Human Services (HHS) on guidance on the amendments made to FERPA by the Uninterrupted Scholars Act. Together, OM and HHS hosted a joint webinar on this amendment. OM also circulated draft guidance to OGC and appropriate components for review. OM followed the department's significant guidance clearance process for all documents considered significant by OMB. Generally, non-significant guidance documents received a less formal clearance process.

Dissemination: OM officials told us that they posted guidance on their website and e-mailed it out through a listserv. OM also introduced new guidance through webinars and conferences.

Feedback on Guidance and Dissemination: OM officials told us that they received feedback on guidance through e-mails, interactions during conferences, and through OM's technical assistance network.

Office of Postsecondary Education (OPE)

Overview

What OPE Does: OPE formulates federal postsecondary education policy and administers programs that address national needs in support of the mission to increase access to quality postsecondary education.

Agency Use of Guidance

1. Dear Colleague Letters: Clarified regulations.

2. Electronic Announcements: Provided administrative information.

3.
Frequently Asked Questions: Answered questions on postsecondary education policies and programs.

Target Audiences: The primary audience for OPE guidance is entities and individuals involved in postsecondary education. This includes institutions of higher education, postsecondary education associations and interest groups, lenders and guarantors, students, federal and state agencies, legal rights advocates, program partners, and grantees.

Initiation, Development, Review, and Dissemination

Initiation: OPE officials told us that questions from schools, program partners, and the public were typically the impetus for issuing guidance, particularly if OPE had received a number of similar questions on a topic. Typically, OPE decided to issue guidance in close consultation with the department's Office of the General Counsel (OGC). OPE officials also said that agency leadership initiated guidance development to accomplish administration priorities.

Development and Review: OPE officials told us that they used a standardized process to review all guidance and used a routing slip to document the review. OPE officials told us they used different routing slips for higher education program guidance and for guidance related to policy, planning, or innovation. OPE officials reviewed the draft guidance and, after final approval from the Assistant Secretary, submitted it for further review in the department and at OGC. OPE officials did not generally work with external stakeholders during the development of Dear Colleague letters and electronic announcements. However, if the guidance addressed an operational issue, OPE may consult with the National Association of Student Financial Aid Administrators or other relevant nongovernmental organizations.

Dissemination methods: Website, e-mails, and conferences.
Dissemination: According to agency officials, OPE's primary mechanism for disseminating regulatory guidance was the Information for Financial Aid Professionals website, while guidance to grantees was mainly distributed through the OPE website. For example, OPE distributed guidance through national conferences for grantees, e-mail blasts to inform grantees of program changes and upcoming grant competitions, technical assistance webinars, and newsletters. OPE officials also said that they may contact external stakeholders (including those that represent students) as new guidance is being released to explain the guidance, establish the objective of issuing the guidance, and answer any questions that the stakeholders may have. OPE said guidance that had been superseded by new guidance was clearly marked accordingly to prevent confusion about which policies were in effect.

Feedback on Guidance and Dissemination: OPE officials described a variety of ways in which they obtained feedback on guidance dissemination. For example, OPE was able to track how many members were on its listserv and to track undeliverable e-mail. According to OPE officials, program staff had considerable interaction with people in the field and received a significant amount of feedback informally.

Examples of Recent Guidance-Related GAO Reports

Postsecondary Education: Many States Collect Graduates' Employment Information, but Clearer Guidance on Student Privacy Requirements Is Needed. GAO-10-927. Washington, D.C.: September 27, 2010.

Grant Monitoring: Department of Education Could Improve Its Processes with Greater Focus on Assessing Risks, Acquiring Financial Skills, and Sharing Information. GAO-10-57. Washington, D.C.: November 19, 2009.

Office of Special Education and Rehabilitative Services (OSERS)

Overview

What OSERS Does: OSERS helps ensure that people with disabilities have equal opportunities and access to education, employment, and community living.
Its mission includes administering the Individuals with Disabilities Education Act and the Rehabilitation Act of 1973. These statutes help states meet the early intervention and educational needs of children and youth with disabilities, support state and private programs that provide people with disabilities the resources they need to gain meaningful employment and lead independent lives, and support research and development programs.

Agency Use of Guidance

1. Dear Colleague Letters: Highlighted departmental or administration initiatives or provided program-related information.

2. Grants and Funding: Provided information on grants, funding opportunities, and other resources.

3. Frequently Asked Questions: Contained issue-specific guidance about a pending funding opportunity, provided background information, or clarified certain topics.

4. Model Individualized Education Program (IEP): Created in response to a statutory mandate, to be used by advocates, parents, grantees, and school administrators.

Target Audiences: Educational administrators, vocational rehabilitation administrators, grantees and potential applicants, and the special education and vocational rehabilitation communities, including advocates and parents of students with disabilities.

Initiation, Development, Review, and Dissemination

Initiation: OSERS officials said that guidance was often produced in response to grantee inquiries or questions from stakeholders. OSERS received feedback from quarterly meetings with grantees and technical assistance providers that resulted in the development of guidance. Further, OSERS identified the need to clarify or issue guidance during the course of monitoring its grant programs, and it also convened a focus group with external stakeholders to identify needed guidance.

Development and Review: OSERS officials told us that they followed departmental guidance on determining whether documents should be categorized as significant or non-significant.
OSERS had an informal process for developing and reviewing guidance. Officials explained that there was no need for a documented process because OSERS had a congenial and close group of experienced staff with a clear understanding of policy. As needed, OSERS staff coordinated with staff from other departmental components.

Written guidance review policy? No.

Review decisions documented? Yes.

Dissemination methods: Websites, newsletters, webinars, e-mails, and mail.

Dissemination: Officials said that OSERS typically posted guidance on its homepage as well as on idea.ed.gov. OSERS also used its listserv to announce new guidance and contact information to grantees. Additionally, program officers held meetings with project directors during which new guidance was announced. OSERS also held webinars and frequently communicated with state associations to provide information about guidance. Lastly, OSERS highlighted new guidance in the department's "ED Review" newsletter or in OSERS' monthly newsletter.

Feedback on Guidance and Dissemination: OSERS officials noted that they received frequent and helpful feedback on guidance directly from the public through e-mails and phone calls. Specifically, a point of contact was identified on each piece of guidance through which feedback could be submitted.

Examples of Recent Guidance-Related GAO Reports

Charter Schools: Additional Federal Attention Needed to Help Protect Access for Students with Disabilities. GAO-12-543. Washington, D.C.: June 7, 2012.

Students with Disabilities: More Information and Guidance Could Improve Opportunities in Physical Education and Athletics. GAO-10-519. Washington, D.C.: June 23, 2010.

Administration for Children and Families (ACF) Office of Child Care (OCC)

Overview

What OCC Does: OCC supports low-income working families by providing access to affordable, high-quality early care and afterschool programs.
OCC administers the Child Care and Development Fund (CCDF) and works with state, territory, and tribal governments to provide support for children and their families juggling work schedules and struggling to find child care programs that will fit their needs and that will prepare children to succeed in school. Agency Use of Guidance 1. Program Instructions (PIs): Used to transmit requirements to grantees: Information Collections: Typically related to reporting requirements for grantees. Information on Related Legislation: Used to transmit information about new legislation that affected the program. For example, OCC used PIs to issue information on American Recovery and Reinvestment Act (ARRA) funding. OCC has also issued PIs to grantees about targeted funds appropriated for specific activities. 2. Information Memorandums (IMs): Used IMs to emphasize leadership or other legislative priorities and changes, including recommendations or encouragements, flexibilities in use of funds, and related information on partner agencies. Audiences for OCC guidance include grantees, the 50 states, the District of Columbia, five territories, and 260 federally recognized tribes. (Some of these tribes represent consortia of tribes.) 3. Policy Interpretation Questions: Provided policy guidance in response to questions from the field. OCC officials told us they rarely use this type of guidance. Initiation, Development, Review, and Dissemination Initiation: Information guidance was typically developed in response to feedback provided by regional office officials about questions received from grantees. Officials at the central office held monthly calls with the regional offices and a biweekly call with the regional program managers to supplement day-to-day communication. Dissemination methods: Website, regional office meetings with state representatives, regional and national conference calls, and webinars.
Development and Review: The OCC Director was involved in the development of OCC guidance and reported to the Deputy Assistant Secretary. The Assistant Secretary reviewed any guidance that was considered new or novel. ACF’s General Counsel reviewed all OCC guidance. PIs were flagged for leadership review at the beginning of the clearance process. Officials typically created a routing slip for each document. Officials did not typically provide draft guidance to external stakeholders for comment prior to issuance. They often shared draft guidance with regional offices. OCC officials cleared documents with other ACF offices when the subject matter was directly relevant to other programs. Dissemination: Because the audience for guidance was the OCC grantees, officials typically posted new guidance on their website and then e-mailed it to the grantees. If the guidance was of particular interest to grantees, they often held a conference call with the grantees to explain it. Administration for Children and Families (ACF) Office of Child Care (OCC) Highlighted Practices The Child Care and Development Block Grant (CCDBG) Act of 2014 reauthorized the child care program for the first time since 1996. In response to new requirements outlined in the law, OCC created a “Reauthorization Resource” webpage. The page featured an overview of the law and detailed the new health and safety requirements for child care providers and changes to eligibility policies. It also provided answers to frequently asked questions (FAQs). In those FAQs, OCC explained provisions of the law, clarified who is affected by the law, and stated that more detailed guidance on effective dates for certain program requirements in the new law would be forthcoming. OCC officials told us they had processes to ensure the currency of their guidance, including labeling if the guidance was not current or applied only to a certain fiscal year.
OCC also maintained a Technical Assistance Network that could identify implementation issues through its work with grantees. This network allowed OCC to receive comments from grantees online. OCC regional offices communicated frequently with the grantees through quarterly calls and often relayed any issues to headquarters officials. Guidance documents listed regional office officials as the point of contact for questions. Because the guidance was largely informational, officials had not issued revisions to their guidance. Administration for Children and Families (ACF) Office of Head Start (OHS) OHS administers grant funding and provides oversight to the agencies that provide OHS services. OHS also provides federal policy direction and a training and technical assistance system to assist grantees in providing comprehensive services to eligible young children and their families. Agency Use of Guidance 1. Program Instructions (PIs): Transmitted requirements and submissions essential to program function. OHS officials told us they used PIs when discussing requirements of the OHS Act. 2. Information Memorandums (IMs): Provided informational and qualitative updates. 3. Policy Clarifications: Responded to questions received through the Early Childhood Learning and Knowledge Center (ECLKC) website. Audiences for OHS guidance included Head Start grantees. Initiation, Development, Review, and Dissemination Initiation: OHS officials told us they initiated guidance in response to confusion identified in the field. In addition, officials might also issue a piece of guidance if there was a change to statute or regulation of which their grantees should be aware. Feedback leading to guidance came from a variety of sources, including regional offices, conferences, directly from programs, and from calls to congressional offices that were communicated to the office. Review decisions documented? Yes ✔ No Dissemination methods: Electronically through ECLKC and e-mails to grantees.
Development and Review: OHS officials told us that guidance development started at the program office level unless, for some types of guidance, the decision was made to engage earlier with General Counsel (GC). The unit’s division director and the Office of the Executive Secretary approved guidance. Guidance then went to the Director of the Office of OHS, GC, and finally to the Deputy Assistant Secretary or Assistant Secretary of ACF. The guidance process was documented and a review slip was used (see figure 8 below). Officials told us that they input information on the potential guidance onto the ACF policy calendar so that ACF officials could determine who should review and approve the documents. If guidance was relevant to another federal agency, OHS would work with the agency to write and approve the guidance. Dissemination: OHS officials told us they disseminated guidance electronically through ECLKC, which provided all grantees with access to key documents. They maintained a directory of grantees and could send information to the entire directory (anyone could sign up to receive information). They might broadly disseminate items or share them only with management staff, depending on the content. They also sent e-blasts with the guidance based on the audience and subject matter. Initiation, Development, Review, and Dissemination Initiation: Before drafting, ACL officials told us they typically checked whether similar guidance had been issued. This allowed them to take prior guidance into consideration and to consult with lawyers to check on any legal issues they needed to be aware of before drafting began. Dissemination methods: Website, regional office meetings with state representatives, regional conferences, webinars, and newsletters.
Development and Review: ACL officials told us that they circulated guidance for internal review to center directors and ACL’s Executive Secretary using an e-mailed routing slip. Officials wrote a memorandum to accompany the guidance for review if background or additional context was needed. ACL officials involved their lawyers during the internal review process when legal interpretation was needed. During this phase, ACL officials discussed whether external review was needed. External reviews were not typical, but ACL worked with OMB to ensure that its guidance accurately reflected new OMB instructions and federal requirements for grant programs. Dissemination: Officials told us that all guidance was posted on ACL’s website. ACL also distributed new and updated guidance through regional conferences, webinars, newsletters, and the Federal Register. Regional offices met with their states in group meetings to review new guidance each quarter. From there, it was the responsibility of state representatives to pass that information on to their respective partners, with whom they had more direct contact. Officials explained that sub-grantees were identified in the law and thus easily identifiable. Administration for Community Living (ACL) Highlighted Practices Guidance production at ACL often included a discussion of how best to publish and disseminate that guidance. In one example, ACL officials used a decision memorandum to accompany draft guidance that (1) explained to reviewers the issue the policy guidance document addressed, (2) provided background on the policy and the impetus for the guidance, and (3) requested the reviewer’s signature. The background discussion in the decision memorandum included a plan for posting the policy guidance on the website and a discussion of other dissemination methods to publicize the guidance.
The decision memorandum suggested that a blog post could be developed to publicize the guidance, or that ACL staff could reach out to key stakeholders when the policy was posted. ACL officials stated that for any type of policy guidance they would generally discuss how best to disseminate it and would follow up with staff to provide technical assistance or answer questions. Feedback on Guidance and Dissemination ACL officials told us that they received feedback on their guidance in a number of ways. ACL conducted quarterly and biennial meetings with grantees during which guidance was discussed. The field office also organized conference calls that covered guidance. ACL officials stated that one-on-one interactions were the most effective manner to receive feedback. There were program updates for grant programs that were re-released each year. Officials told us that the most common reason for guidance updates was to address questions received from the states, territories, and tribal organizations or from their regional offices; however, they noted that guidance did not change frequently. ACL received information from its information technology and communications offices on website analytics. Meanwhile, it was also redesigning its website. ACL officials told us that web metrics would be an important tool to guide how it shares its content, including guidance, once the site is redesigned. In addition, officials told us that when ACL issued state plan guidance on the Older Americans Act, ACL headquarters and regional staff coordinated the dates for release, the timeline, and the process staff would use as follow-up. Bureau of International Labor Affairs (ILAB) Overview What ILAB Does ILAB improves working conditions, raises living standards, protects workers’ rights, and addresses the workplace exploitation of children and other vulnerable populations.
Target Audiences Grantees that are subject to cooperative agreements with ILAB, including non-governmental organizations and members of the public who have knowledge of labor conditions and practices in countries with which the U.S. signs free trade agreements. Guidance related to cooperative agreements was generally not intended for widespread public consumption. 2. Covered the receipt and handling of public submissions on labor provisions of U.S. free trade agreements. 3. Research-Related Resources Written guidance review policy? Yes No ✔ Initiation, Development, Review, and Dissemination Initiation: ILAB officials told us that its program offices initiated the development process when guidance was needed. The impetus for new guidance could be grantee questions, or a program office could determine that an adjustment in language was needed to clarify a requirement or that a new requirement was needed. Most ILAB guidance clarified government-wide grant regulations and was prompted by a change in those grant regulations. ILAB officials told us little, if any, clarification was required if there had not been a change in regulation. Review decisions documented? Yes ✔ No Dissemination methods: Website, listserv, and press releases. Development and Review: ILAB officials told us that once the guidance was drafted, the Office of the Solicitor reviewed it. If the guidance was related to grants, the grant office also reviewed it. External stakeholders were not typically involved in developing grant-related guidance. In one instance, ILAB put forward guidance related to submissions on labor provisions of U.S. free trade agreements through formal notice and comment in the Federal Register. In another instance, for research-related resources ILAB sought feedback from interagency partners and through a peer review by external experts from business, academia, unions, and civil society groups. Officials told us that they did not have written procedures for their guidance processes.
Dissemination: ILAB officials told us that they disseminated guidance directly to grantees as part of their cooperative agreement and via the office’s webpage and various listservs. Grant solicitations were posted on grants.gov. The Department of Labor’s Office of Public Affairs could also issue a press release. Officials told us that grantees could provide feedback to ILAB through its project managers. Depending on the subject matter, ILAB could also obtain public feedback through publishing a notice in the Federal Register that solicited public comments. Officials considered this to be an effective practice. Bureau of International Labor Affairs (ILAB) Examples of Recent Guidance-Related GAO Reports International Labor Grants: DOL's Use of Financial and Performance Monitoring Tools Needs to Be Strengthened. GAO-14-832. Washington, D.C.: September 24, 2014. International Labor Grants: Labor Should Improve Management of Key Award Documentation. GAO-14-493. Washington, D.C.: May 15, 2014. Bureau of Labor Statistics (BLS) BLS collects, analyzes, and disseminates economic information to support public and private decision making. BLS serves as a statistical resource for the Department of Labor. Agency Use of Guidance BLS officials told us that BLS awarded cooperative agreements to state agencies to conduct two cooperative statistical programs: Labor Market Information and Occupational Safety and Health Statistics. BLS issued routine administrative memoranda that contained reporting requirements and closeout procedures targeted to grantees. Initiation, Development, Review, and Dissemination Initiation: Because guidance was routine and issued annually, BLS officials told us that their process did not involve a specific impetus for initiation. The audience for BLS administrative memoranda was the 50 state agencies and territories that receive cooperative agreement funds and BLS Regional Commissioners.
Development and Review: BLS officials told us the routine administrative memoranda were sent out to the appropriate offices for review, including the BLS Branch of Grants and Funds Management, the Office of Field Operations, and BLS program offices. In addition, officials told us BLS had written procedures for the development of guidance, but no external stakeholders were involved during development. Dissemination: BLS officials told us BLS administrative memoranda were posted on the internal “Stateweb” website and e-mailed to the state agencies. BLS provided an intranet link to its grantees to access the stored documents. These documents were neither disseminated to the general public nor posted to the BLS public website. Dissemination methods: Internal “Stateweb” website and e-mails to grantees with an intranet link. Feedback on Guidance and Dissemination Officials told us that most administrative memoranda were issued annually, so it was unnecessary to revise or update them or issue correction memoranda. BLS officials told us that sometimes BLS got feedback or questions on funding that it then answered. We did not evaluate BLS’s use of web metrics because it did not use its public website to disseminate guidance. Employee Benefits Security Administration (EBSA) Overview What EBSA Does EBSA promotes and protects the retirement, health, and other benefits of the more than 141 million participants and beneficiaries in more than 5 million private sector employee benefit plans. EBSA develops regulations; assists and educates workers, plan sponsors, fiduciaries, and service providers; and enforces the law. The Employee Retirement Income Security Act is enforced through regional and district offices nationwide and a national office in Washington, D.C. Agency Use of Guidance 1.
Compliance Assistance Documents: Typically issued in response to requests for advisory opinions and included advisory opinions, information letters, interpretations, frequently asked questions, and technical releases. 2. Field Assistance Bulletins: Typically issued in response to issues identified by EBSA personnel, including the regional and enforcement staff who review them. 3. Technical Guidance for Consumers: Typically provided information to the public and included brochures, handouts, participant information, and press releases. Target Audiences Guidance was targeted to employee benefit plan participants and beneficiaries, sponsors, administrators, fiduciaries, service providers (including large financial services firms, institutional record keepers, and asset custodians) and their representatives, EBSA regional and enforcement staff, and individual members of the public. Initiation, Development, Review, and Dissemination Initiation: Officials told us that any official in EBSA could initiate an idea for a new piece of guidance. For example, each EBSA office had a chain of command that employees could use to suggest ideas. However, the Assistant Secretary decided whether to start developing guidance based on discussion at regular executive staff meetings or with office directors and their management staff. Officials told us EBSA sometimes issued companion guidance to documents issued by other employee benefit regulators, such as the Pension Benefit Guaranty Corporation or the Internal Revenue Service. Written guidance review policy? Yes No ✔ Review decisions documented? Yes ✔ No Dissemination methods: EBSA website, e-mails to website listserv, press releases, e-mails to stakeholders, calls, webcasts, meetings with constituents, and presentations at industry meetings. Used web metrics to evaluate online guidance dissemination? Yes ✔ No
Development and Review: EBSA officials told us office directors considered legal, policy, and programmatic factors and then developed guidance proposals to present to EBSA’s Assistant Secretary and Deputy Assistant Secretaries for approval. Their procedures for guidance clearance depended on the type of guidance. Guidance interpreting regulations triggered a different level of review than informational guidance and was cleared through the Director of the Office of Regulations and Interpretations and/or the Director of the Office of Health Plan Standards and Compliance Assistance, the Office of the Solicitor, and the Deputy Assistant Secretary for Program Operations. EBSA’s Assistant Secretary cleared all guidance except for very routine matters and typically alerted department leadership upon release of the guidance. EBSA officials regularly discussed the status of draft guidance using a written agenda of pending regulations, exemptions, and active guidance products in a weekly meeting with key EBSA and departmental officials. Officials told us that the need for departmental review depended on various factors, including likely congressional interest, potential impacts on areas regulated by other Department of Labor (DOL) agencies, and expected media coverage. EBSA did not use a formal or codified routing slip. Instead, it used e-mail to contact the Assistant Secretary and Deputy Assistant Secretaries after guidance was developed and vetted through the appropriate national and/or regional office components and the Office of the Solicitor. Dissemination: One audience for EBSA guidance included service providers such as institutional record keepers and asset custodians and the financial services industry. Dissemination to these groups was relatively easy. Officials told us this audience closely followed updates to the EBSA website and characterized this audience as resourceful and very vocal. EBSA officials usually knew whether the guidance had been received and was clear.
However, officials also told us that EBSA had challenges reaching their other audience, which included small- to medium-sized employers and the participants in the 5 to 6 million existing employee plans. To reach this audience, they used a “multi-pronged strategy,” including posting guidance on their website, which has a dedicated page for guidance, e-mailing and meeting with stakeholders, and webcasts. Any special guidance could be posted on the “New and Noteworthy” portion of their website and could be e-mailed to EBSA’s website listserv, which had about 336,000 subscribers. Labor’s Public Affairs office assisted by drafting press releases and handling press calls. EBSA officials told us that EBSA sometimes revised and updated its guidance and identified which documents had been superseded by the new guidance. EBSA officials told us that both new and replaced guidance documents were posted on EBSA’s website, which was actively monitored by the regulated community and media outlets that focus on labor and benefit issues. Various media reports and benefit-specific websites could also provide information on new guidance. Officials also told us that the most effective means of soliciting feedback on guidance had generally been to post the guidance documents on EBSA’s website. To obtain input on how to improve the quality of guidance and gauge whether guidance had reached the intended audiences, EBSA met regularly with stakeholder associations, individual companies, and consumer groups. EBSA also participated in educational conferences sponsored by industry groups and interacted with the Employee Retirement Income Security Act Advisory Council. Examples of Recent Guidance-Related GAO Reports 401(K) Plans: Improvements Can Be Made to Better Protect Participants in Managed Accounts. GAO-14-310. Washington, D.C.: June 25, 2014. Private Pensions: Clarity of Required Reports and Disclosures Could Be Improved. GAO-14-92. Washington, D.C.: November 21, 2013.
Private Pensions: Revised Electronic Disclosure Rules Could Clarify Use and Better Protect Participant Choice. GAO-13-594. Washington, D.C.: September 13, 2013. 401(K) Plans: Labor and IRS Could Improve the Rollover Process for Participants. GAO-13-30. Washington, D.C.: March 7, 2013. Employment and Training Administration (ETA) Overview What ETA Does ETA provides job training, employment, labor market information, and income maintenance services, primarily through state and local workforce development systems. ETA also administers programs to enhance employment opportunities and business prosperity. Agency Use of Guidance 1. Training and Employment Guidance Letters (TEGL): Issued to a broad audience of states and sub-grantees, they transmitted both policy and operational guidance. They were often used to identify program funding allotments. 2. Training and Employment Notices (TEN): Informational notices directed to broad audiences. Officials told us that they used TENs to communicate technical assistance resources, publications, and updates on research and evaluation and the status of available agency issuances. Target Audiences Audiences for guidance at ETA are both narrow and broad, depending on the program. ETA funds American Job Centers, and guidance for these centers targets a narrow audience of those who administer them, while other programs are broader and have guidance that could be directed to a wide range of people. Other audiences include Workforce Investment Boards, governors, both discretionary grantees and formula subgrantees, and state workforce agencies, among others. 3. Unemployment Insurance Program Letters (UIPL): Interpreted statute or policy in the form of guidance. For example, when unemployment insurance was extended and then terminated, ETA issued guidance more than a dozen times from June 2008 to December 2013. 4.
Program Information Notices (PINs): Information for Job Corps Centers, which were operated both by private contractors and by the Forest Service in the Department of Agriculture in partnership with the Department of Labor (DOL). Written guidance review policy? Yes ✔ No 5. Frequently Asked Questions Initiation, Development, Review, and Dissemination Initiation: ETA officials told us they initiated guidance in response to issues that arose in the field, working with ETA program and policy officials and discussing the issue with their Office of the Solicitor (SOL) colleagues and the program office. Review decisions documented? Yes ✔ No Dissemination methods: Website, e-mail blasts to the public, e-mails to regional offices, webinars, site visits, phone calls, and conferences. Development and Review: ETA used a standard operating procedure for reviewing guidance documents and a routing slip for internal review. Officials told us guidance went through multiple review processes. The officials included in the review process varied depending on the nature of the guidance. Usually, the program office, policy office, SOL, and leadership office all signed off on the guidance. The Office of the Assistant Secretary for Policy (OASP) signed off on guidance documents that were “major priorities.” Officials also discussed what level of review the guidance required with the Office of the Executive Secretary. Officials told us that when making review decisions they considered (1) the amount of money affected by the guidance, (2) how much of a priority the guidance was, and (3) whether other sub-agencies within DOL would be interested in the guidance. If the proposed guidance required internal clearance, they included OASP, the Offices of Congressional Affairs, the Secretary, and Public Affairs and alerted Cabinet Affairs staff. If regional offices were affected, they concurred on the draft guidance.
Dissemination: Interested parties could sign up to receive an e-mail when a new advisory was available. Separate e-mails were sent to the regional offices, which then followed up with grantees about new guidance. ETA officials told us they also informed OASP and the Office of Congressional Affairs. ETA officials hosted a webinar to provide technical assistance and answer any questions that grantees had. The regional offices also had site visits and phone calls with grantees in which they discussed advisories and guidance as needed and occasionally attended conferences. Feedback on Guidance and Dissemination Officials told us they were asked annually to identify whether guidance on the ETA website should be continued, canceled, or rescinded. They told us they routinely monitored grantees and had conversations with intergovernmental organizations to gain insights into potential changes to guidance. They received feedback from program offices, regional offices, intergovernmental organizations, and agency leadership on the content of their guidance. Examples of Recent Guidance-Related GAO Reports Workforce Investment Act: DOL Should Do More to Improve the Quality of Participant Data. GAO-14-4. Washington, D.C.: December 2, 2013. H-2A Visa Program: Modernization and Improved Guidance Could Reduce Employer Application Burden. GAO-12-706. Washington, D.C.: September 12, 2012. Mine Safety and Health Administration (MSHA) Overview What MSHA Does MSHA promulgates and enforces mandatory health and safety standards by thoroughly inspecting mines; targeting the most common causes of fatal mine accidents and disasters; reducing exposure to health risks from mine dusts and other contaminants; improving training, particularly for inexperienced miners and contractors; strengthening MSHA’s and the industry’s emergency response preparedness; enforcing miners’ rights to report hazardous conditions without fear of retaliation; and emphasizing prevention.
MSHA also assists states in the development of effective state mine safety and health programs and contributes to mine safety and health research and development. Agency Use of Guidance 1. Program Information Bulletins: Provided information and best practices to mine operators, miners, and MSHA enforcement officials. 2. Program Policy Letters: Explained regulations to mine operators, miners, and MSHA enforcement officials. 3. Procedural Instruction Letters: Instructed MSHA’s staff on procedures for enforcing applicable standards. 4. Program Policy Manual: Consolidated MSHA policies. 5. Handbook: Provided instructions to MSHA inspectors and specialists. 6. Compliance Guidance and E-Laws: Posted on MSHA’s website. 7. Best Practice Pocket Cards: Provided miners with health and safety information, including an explanation of their rights. 8. Frequently Asked Questions: Explained MSHA standards and regulations to operators and miners. Target Audiences Coal mine operators, metal and non-metal mine operators, unions, associations, and safety and health professionals. Initiation, Development, Review, and Dissemination Initiation: MSHA officials told us that new or revised guidance was typically initiated in response to questions from the field or issues identified by miners, operators, or MSHA Field Managers. MSHA officials discussed whether to issue guidance or undertake the rulemaking process with the Office of the Solicitor, the Office of Standards, Regulations, and Variances, and the Office of the Assistant Secretary. MSHA also issued guidance as part of the normal rollout of new standards or regulations. Dissemination methods: Website, e-mails, compliance visits and other meetings, and through partners. Development and Review: MSHA had written procedures for guidance formulation, distribution, and maintenance. Administrators and Directors initiated guidance that was reviewed by appropriate officials.
MSHA officials told us that guidance went through multiple reviews by affected programs, and significant guidance was flagged during the review process. For urgent guidance documents that needed to be disseminated quickly (for example, hazard alerts and information on respiratory protective devices), the review process was shortened and senior management was involved earlier in the process. The Directorate of Program Evaluation and Information Resources (PEIR) coordinated and monitored guidance development and clearance. PEIR’s Office of Program Policy Evaluation (OPPE) officials managed the directives process and used a form to manage and track the review and dissemination of directives. Dissemination: MSHA officials met with miners, labor organizations, industry associations, and other stakeholders when guidance was developed or when new standards or regulations were introduced to explain and discuss the guidance. MSHA’s Field Managers also discussed guidance with miners and operators at multiple compliance visits each year. Feedback on Guidance and Dissemination MSHA officials told us they had developed procedures to ensure that programs periodically reviewed and updated guidance documents. They revised previously issued guidance if they determined that the guidance was out of date due to advances in technology or if other new information was available from stakeholders. MSHA officials told us that the most effective way to gather feedback on guidance was by speaking with stakeholders. MSHA officials told us that OPPE also had developed policies to review existing guidance to ensure that it was valid and that MSHA had made changes to its website to help the public easily find guidance information. Examples of Recent Guidance-Related GAO Reports Mine Safety: Basis for Proposed Exposure Limit on Respirable Coal Mine Dust and Possible Approaches for Lowering Dust Levels. GAO-14-345.
Washington, D.C.: April 9, 2014.
Mine Safety: Additional Guidance and Oversight of Mines’ Emergency Response Plans Would Improve the Safety of Underground Coal Miners. GAO-08-424. Washington, D.C.: April 8, 2008.

Occupational Safety and Health Administration (OSHA)
OSHA assures safe and healthful working conditions for men and women by promulgating protective health and safety standards; enforcing workplace safety and health rules; providing training, outreach, education, and assistance to workers and employers in their efforts to control workplace hazards; preventing work-related injuries, illnesses, and fatalities; and partnering with states that run their own OSHA-approved programs.

Agency Use of Guidance
Policy issuances—established internal policies or policy interpretations. OSHA officials told us policy issuance documents were used to explain internal procedures for inspections and interpretations of regulations for specific programs. An example of a policy issuance was the inspection procedure directive, which contained procedures used to investigate and cite violations of particular OSHA regulations.

Non-policy issuances—provided information consistent with regulations.
1. Fact sheets, information sheets, hazard alerts, and small entity compliance guides: Provided hazard identification and prevention information on critical safety and health hazards that often must be disseminated quickly.
2. Booklets: Provided information for constituents at all education levels.
3. Fatal Facts: Contained information about how to identify and prevent hazards that lead to fatalities at worksites. Written for employers, safety and health professionals, and workers.
4. Quick cards: Small laminated cards that provided safety and health information for employers, professionals, and workers with some safety and health background.
5. Low-literacy materials: For workers and employers with limited English proficiency and for young workers.
6. Letters of interpretation: Clarified ambiguities in regulations.

Target Audiences
Employers and their representatives, such as trade associations; covered workers and their representatives, such as unions, community groups, and worker centers; and other safety and health professionals.

Dissemination methods
OSHA website, hard copy delivery to area offices, mass mailings to employers, webinars, outreach in the school system, social media, newsletter (QuickTakes), and e-mail.

Initiation, Development, Review, and Dissemination
Initiation: OSHA officials told us that they were often prompted to issue or revise guidance for clarification in response to feedback from regional offices and external stakeholders with questions on existing guidance.

Development and Review: Officials told us policy issuances were cleared by the Office of the Solicitor and OSHA leadership and were sometimes sent to the Office of the Executive Secretariat for coordination of department-level review. OSHA program directors obtained input and technical and policy clearance for both policy and non-policy issuances from each of the other program directors and their directorate offices and resolved any comments. The final draft was sent to the Director of Administrative Programs for approval. The Deputy Assistant Secretary addressed unresolved disagreements concerning the substance or policy implications of proposed policy guidance. Officials told us that in some circumstances OSHA sought expert input or input from the target audience for non-policy guidance materials to provide the most accurate, applicable, and accessible information for workers and employers on a specific topic.
Occupational Safety and Health Administration (OSHA)
Highlighted Practices
OSHA had separate written procedures, in the form of instructions, for both policy and non-policy issuances. Policy issuances are internal directives and supplementary guidance that have implications for internal statements of policy and procedure, while non-policy issuances are technical and educational guidance documents that provide information consistent with regulations and include such supplementary guidance materials as letters of interpretation and other non-policy statements issued by OSHA. These procedures outlined the roles and responsibilities of the Assistant Secretary of Labor for Occupational Safety and Health, OSHA program directors, and the Director of Administrative Programs and identified conditions under which guidance should be published.

Initiation, Development, Review, and Dissemination, cont.
Dissemination: OSHA produced an e-mail-based newsletter called QuickTakes that publicized new policy and non-policy guidance documents. Area offices distributed new educational guidance materials to stakeholders who were difficult to contact electronically. They also conducted mass mailings and webinars, posted on social media sites, and reached out at conferences and schools. Dissemination also occurred through cooperative program participants, such as Alliance and Partnership members and state partners, who disseminated materials and guidance to their members and constituencies.

Feedback on Guidance and Dissemination
OSHA officials told us they tracked and evaluated guidance to determine whether to revise it. If a guidance product was written for a specific OSHA standard that had not changed, revisions were infrequent. Guidance was updated if it was based on a standard that had changed or on a hazard for which new information was available, to assure that workers were protected.

Examples of Recent Guidance-Related GAO Reports
Workplace Safety and Health: OSHA Can Better Respond to State-Run Programs Facing Challenges. GAO-13-320.
Washington, D.C.: April 16, 2013.
Workplace Safety and Health: Further Steps by OSHA Would Enhance Monitoring of Enforcement and Effectiveness. GAO-13-61. Washington, D.C.: January 24, 2013.

Office of Disability Employment Policy (ODEP)
Overview
What ODEP Does
ODEP seeks to increase the number and quality of employment opportunities for people with disabilities by promoting the adoption and implementation of its policy strategies and effective practices and by bringing focus to the issue of disability employment.

Target Audiences
People with disabilities, employers (in both the private and public sectors), service providers, and government entities.

While ODEP does not have regulatory authority, it has assisted enforcement agencies in reaching stakeholders in the disability community regarding regulations that affect them. ODEP also provided input on guidance related to disabilities issued by other Department of Labor (DOL) components, such as the Employment and Training Administration.

Review decisions documented? No.
Dissemination methods
Website, e-mail, auxiliary websites such as Disability.gov, listening sessions, webinars, social media, public service announcements, speaking engagements, and press releases.

Guidance Initiation, Development, Review, and Dissemination
Initiation: ODEP officials told us that items addressing how to comply with regulations, if developed, would emanate from ODEP’s policy or research team supervisors. The item would then require input and approval from other team leads at the agency, the Executive Officer, the Deputy Assistant Secretary, the Chief of Staff, and the Assistant Secretary. ODEP would work closely with DOL’s enforcement agencies and the Solicitor’s Office in developing all such items. All non-regulatory items produced for the public would emanate from ODEP’s policy, outreach, or research team supervisors.
The item would also require input and approval from other team leads at the agency, the Executive Officer, the Deputy Assistant Secretary, the Chief of Staff, and the Assistant Secretary.

Development and Review: According to ODEP officials, all guidance was reviewed by either the Outreach Supervisor or the Executive Officer to determine the level of internal and external review needed (both intra-agency and interagency). Guidance was cleared internally by the Outreach Supervisor or the Executive Officer; all relevant Policy, Administrative, and/or Research Supervisors; the Deputy Assistant Secretary; the Chief of Staff; and the Assistant Secretary. In addition, there was an expectation that items affecting stakeholders or touching on legal or policy issues be cleared by the relevant DOL agencies, such as the Office of the Assistant Secretary for Policy, the Office of Public Affairs, the Solicitor’s Office, the Office of Congressional and Intergovernmental Affairs, the Office of the Secretary, or affected DOL program agencies. ODEP worked closely with all these agencies to determine whether further external review was required, including by the Office of Management and Budget or other relevant federal agencies.

Office of Disability Employment Policy (ODEP)
Initiation, Development, Review, and Dissemination, cont.
Dissemination: ODEP disseminated guidance through its website. Officials used the “Gov Delivery” e-mail subscription service to disseminate guidance and information to about 50,000 subscribers. ODEP officials also reached out to their employer stakeholders and maintained an online community of practice. ODEP held primary responsibility for managing the Disability.gov website. In addition to these methods of dissemination, ODEP officials said that each of their four technical centers maintained its own website and offered webinars throughout the year.
Feedback on Guidance and Dissemination
ODEP officials told us that items shared with the public were reviewed regularly and updated so that the information remained current and relevant. ODEP officials told us they had just started to widely use web metrics for their website and intended to use the information gathered to improve how they communicated with the public. Survey tools were consistently used to evaluate how Disability.gov served the public, and that information was used to improve its service model.

Office of Federal Contract Compliance Programs (OFCCP)
OFCCP administers and enforces equal opportunity mandates that prohibit federal contractors and subcontractors from discriminating on the basis of race, color, religion, sex, national origin, disability, or protected veteran status and that require federal contractors and subcontractors to take affirmative steps to ensure equal employment opportunities.

Agency Use of Guidance
1. Directives: Guidance anticipated to have operational impact or to result in enforcement action.
2. Federal Contract Compliance Manuals: Explained broad OFCCP policy.
3. Frequently Asked Questions (FAQ)
4. Fact Sheets and Brochures: Targeted to a more general audience or the public.
5. Technical Assistance Guides: Assisted federal contractors and subcontractors in complying with laws and regulations on employment discrimination and equal employment.
6. Equal Employment Opportunity Posters: Employers covered by non-discrimination and equal employment opportunity laws were required to display posters on their premises.

Target Audiences
The audience for most OFCCP guidance and technical assistance was contractors, particularly new contractors. Guidance was also targeted to the communities from whom OFCCP most often received violation complaints; these communities represent all protected classes.

Initiation, Development, Review, and Dissemination
Initiation: OFCCP officials told us that after they identified an issue that might require guidance, they discussed it with the Office of the Solicitor (SOL).
Written guidance review policy? Yes.
Review decisions documented? Yes.
Dissemination methods
Website, e-mails, webinars, public meetings, and media outreach.

Development and Review: OFCCP officials told us that the national office typically developed guidance with feedback from SOL and comments from the regions. OFCCP guidance received varying levels of review based on the type of guidance, but SOL reviewed almost all guidance. Directives were more formal, used a template, and were always reviewed and signed by the OFCCP director. FAQs were generated internally and might not rise to the level of director review. Departmental officials and officials from other Department of Labor (DOL) components reviewed guidance that (1) was on a sensitive subject, (2) was expected to receive heightened scrutiny, (3) might affect other DOL programs or agencies, (4) might be considered newsworthy, or (5) was part of an initiative of the Administration. OFCCP officials used a routing slip to document clearance, and its processes for review were written in administrative procedures. See figure 12 below.

Dissemination: OFCCP officials told us they had recently standardized the process to centrally monitor and control guidance dissemination. To supplement sending new guidance to the regional offices, officials also sent an e-blast to more than 54,000 e-mail subscribers explaining (1) why the guidance was issued, (2) what the guidance was, and (3) where the guidance could be found. If appropriate, the e-blast directed the subscriber to a hyperlink to the guidance on OFCCP’s website. OFCCP also used media packets, toolkits, webinars, and public appearances to further publicize new guidance. OFCCP posted all guidance intended for the public online.

Office of Federal Contract Compliance Programs (OFCCP)
Highlighted Practices
In 2011, OFCCP officials started a 2-year project to review their directives system.
They told us that this effort was intended to make their guidance more accurate and up to date. As part of these efforts, they identified necessary updates to guidance, clarified superseded guidance, and rescinded guidance when appropriate, reducing the original number of directives by 85 percent.

Office of Labor-Management Standards (OLMS)
OLMS conducts criminal and civil investigations to safeguard the financial integrity of unions and to ensure union democracy. The Office conducts investigative audits of labor unions to uncover and remedy criminal and civil violations of the Labor-Management Reporting and Disclosure Act and related statutes and explains the reporting, election, bonding, and trusteeship provisions of the act. OLMS promotes labor union and labor-management transparency through reporting and disclosure requirements.

Agency Use of Guidance
1. Fact Sheets: Could either stand alone or be issued to accompany a regulation. Standalone fact sheets often explained what a statute required.
2. Guides for Union Officers: Provided general information on requirements that applied to unions and union officers, and offered suggestions on how to comply with those requirements.
3. Guidance: Information on how a complaint with OLMS could be filed.

Initiation, Development, Review, and Dissemination
Initiation: OLMS officials told us that the impetus for guidance varied. Guidance was initiated if (1) numerous unions had similar compliance questions after a new regulation had been finalized, (2) officials had issued a regulation or were about to issue one, or (3) field personnel encountered a consistent issue through OLMS’s compliance assistance programs. OLMS received feedback when its program staff reached out to union officials. However, OLMS did not typically initiate new guidance, instead answering questions individually.
If officials saw questions on a related issue come into their OLMS-Public@dol.gov e-mail box, OLMS program officials flagged the questions as a potential impetus for new guidance.

Target Audiences
(1) Union officials and union members, (2) employers with a union or with a workforce attempting to unionize, and (3) labor relations management organizations.

Dissemination methods
OLMS website, field staff, webinars, and listserv e-mails.

Development and Review: Officials stated that guidance development depended on the impetus. OLMS officials stated that most often the Director or another senior manager decided to initiate guidance and then tasked program staff with drafting it. OLMS had no written procedures for the guidance production process. After the program staff drafted the guidance, it went to the Division Chief, then the Deputy Director, then the Director. OLMS divisions and the Office of the Solicitor (in particular the Division of Civil Rights and Labor-Management) were also involved at this stage of the guidance process. Documentation of concurrence on draft guidance depended on the type of guidance: some documents were typically approved through e-mail, while others were routed in a physical folder with a sign-off chart for stakeholders’ initials of concurrence. OLMS officials stated that they drafted decision memorandums (typically used for regulations) to accompany draft guidance when departmental clearance was required. OLMS officials told us they rarely coordinated with other federal agencies when developing and reviewing guidance, although they recently coordinated with the Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) on the development and review of guidance.

Office of Labor-Management Standards (OLMS)
Initiation, Development, Review, and Dissemination, cont.
Dissemination: OLMS hosted webinars. OLMS also had a listserv to notify interested parties when a new regulation or guidance was being issued.

Feedback on Guidance and Dissemination
OLMS officials told us they found webinars to be the most effective way to solicit feedback. Officials solicited questions from listeners at webinars and contacted those with questions directly. Officials also received questions and comments from the public through the OLMS-Public@dol.gov e-mail box. Questions typically related to how forms should be filled out, and officials contacted the person commenting directly. If the question was substantive, they raised it to a higher level to address the comment. As of October 2014, OLMS officials told us they had not received substantive comments related to guidance dissemination. However, they had received feedback on technical issues related to locating materials, navigating the website, and ease of accessibility. OLMS officials said that they had not assessed whether their guidance had been effective.

Office of Workers’ Compensation Programs (OWCP)
Overview
What OWCP Does
OWCP protects the interests of workers who are injured or become ill on the job. OWCP serves specific employee groups that are covered under four major disability compensation statutes by mitigating the financial burden resulting from workplace injury or illness and promoting return to work when appropriate.

Agency Use of Guidance
1. Training materials: OWCP posted training materials directed to internal claims examiners on the web.
2. Industry notices: Procedural information sent to employers, carriers, medical providers, and injured workers.
3. Compliance Guidance: Issued for the Black Lung and Longshore programs to explain certain new regulations and how small entities could comply with them.
4. Educational Guidance: Included medical benefit guides, frequently asked questions, webinars, and other educational materials.
Target Audiences
The audience for OWCP guidance included beneficiaries and employers.

Initiation, Development, Review, and Dissemination
Initiation: OWCP officials told us that program directors decided to initiate guidance in response to (1) questions from users or confusion in the field, (2) new procedures, (3) new initiatives, (4) litigation, or (5) the results of accountability reviews.

Written guidance review policy? No.
Review decisions documented? Yes.
Dissemination methods
Website, e-mails, specific task forces, and meetings with unions.

Development and Review: OWCP officials told us they developed and issued guidance with the assistance of the Office of the Solicitor. The development of guidance was typically informal and was conducted through e-mail, revisions, and comments in documents. OWCP officials told us they had an informal review process for guidance within each program. OWCP officials used a routing slip and a hierarchical process for document review. A guidance document could be reviewed by the Department based on the significance and sensitivity of the issue.

Used web metrics to evaluate online guidance dissemination? No.

Dissemination: OWCP officials told us that guidance documents were posted to the OWCP website and disseminated through e-mail blasts to stakeholders. For example, there were 12,000 subscribers to their Longshore Electronic Filing e-mail list. OWCP officials told us that they discussed who might want or need the information before placing it on their website. OWCP had a joint task force that disseminated information regarding the Energy program, primarily to the concentrated population affected in jurisdictions where there had been nuclear activities. OWCP also held routine meetings with unions, advocacy groups, and other stakeholder groups and maintained lists of these groups for dissemination purposes.
Feedback on Guidance and Dissemination
OWCP officials told us that program directors reviewed guidance on an ongoing basis and updated it as necessary.

Veterans’ Employment and Training Service (VETS)
Overview
What VETS Does
VETS is responsible for administering veterans’ employment and training programs and compliance activities that help veterans and service members succeed in their civilian careers.

Target Audiences
Program service providers, including agency grantees, sub-grantees, and other entities that provide services to veterans, transitioning service members, and other eligible persons.

Agency Use of Guidance
grant administration, including formula funding updates.
3. Solicitations for Competitive Grant Applications
4. Frequently Asked Questions: Clarified application and program details.

Guidance Initiation, Development, Review, and Dissemination
Initiation: According to VETS officials, the primary impetus for guidance was to maximize the effectiveness of their programs by issuing updated and clarifying guidance that was easily understood and could be carried out effectively.

Dissemination methods
Website, e-mail, social media, and through regional administrators and state workforce grantees.

Development and Review: VETS officials told us that because the component was small, it was easy to complete internal review of their documents. Officials used a routing slip to circulate draft guidance to the Offices of the Solicitor, Deputy Secretary, Assistant Secretary, and Secretary. Departmental officials reviewed guidance if it represented a major policy change or affected other Department of Labor agencies or other departments.

Dissemination: VETS posted all publicly available documents on its websites. Officials told us they also relied on regional administrators and state workforce grantees to disseminate guidance related to grants.
When VETS issued technical assistance on competitive grants, it sent an e-mail blast to notify grantees.

Feedback on Guidance and Dissemination
VETS officials told us they conducted periodic meetings and listening sessions with Veterans Service Organizations and other stakeholders. VETS coordinated closely with the Departments of Defense and Veterans Affairs; the coordination requirements were included in a memorandum of understanding. VETS officials also participated in forums such as the Interagency Council on Homelessness and the Transition Assistance Program Executive Committee. VETS’ practice had been to issue new guidance annually. As of January 2015, it was working to give guidance a longer “shelf life” by, for example, revising only the due dates for routine reports rather than reissuing the entire guidance.

Examples of Recent Guidance-Related GAO Reports
Veterans’ Employment and Training: Better Targeting, Coordinating, and Reporting Needed to Enhance Program Effectiveness. GAO-13-29. Washington, D.C.: December 13, 2012.

Wage and Hour Division (WHD)
WHD enforces the federal minimum wage, overtime pay, recordkeeping, and child labor requirements of the Fair Labor Standards Act. WHD enforces a number of other worker protection laws, including those concerning family and medical leave, migrant and seasonal farm workers, and several temporary foreign worker visa programs. WHD also administers and enforces the prevailing wage requirements applicable to federal contracts for construction and for the provision of goods and services.

Target Audiences
Employers and employees.

Written guidance review policy? No.
Review decisions documented? Yes.
Dissemination methods
WHD website, webinars, trainings, and outreach.
Initiation, Development, Review, and Dissemination
Initiation: WHD officials told us they initiated new guidance in response to (1) issues identified by WHD’s investigators in their field offices, (2) questions from outreach and education sessions with employers and employee groups, (3) recurring questions received in correspondence, (4) feedback from stakeholders on specific programs, (5) information collected on the types and frequency of questions at WHD’s national call center, and (6) interaction with other federal and state agency partners. Many officials and offices within the Department of Labor (DOL) were involved in WHD’s decisions to initiate guidance, including the Office of the Solicitor, the Office of the Assistant Secretary for Policy, national and regional WHD offices, and other enforcement personnel.

Development and Review: WHD officials told us that they had a multilayered review process that differed based on the substance of the proposed guidance. Officials documented review with a routing slip. To clear guidance materials, WHD worked with the Office of the Solicitor and, depending on the substance or public interest, may have worked with other offices within the Department as appropriate. Factors considered in deciding whether guidance was reviewed at the departmental level included whether the guidance involved a new interpretation or policy and whether it could affect other DOL programs.

Wage and Hour Division (WHD)
Initiation, Development, Review, and Dissemination, cont.
Dissemination: WHD officials told us that dissemination methods depended on the type of guidance. WHD staff routinely conducted webinars, training, and outreach during which guidance was distributed. For example, they hosted webinars, conference calls, meetings, and presentations, including engaging relevant state associations, consumers, disability and senior citizens’ advocates, worker representatives, and industry groups.
Feedback on Guidance and Dissemination
WHD officials told us that they revised guidance “as appropriate.” As of January 2015, WHD officials told us they did not have a systematic way to determine whether end users were accessing their guidance. WHD officials told us that they met with stakeholders, including employers, human resources organizations, attorneys, employees, worker organizations, and unions, to hear their views on areas in which WHD could provide additional guidance to achieve better compliance with the worker protection laws it administers.

Examples of Recent Guidance-Related GAO Reports
Fair Labor Standards Act: Department of Labor Needs a More Systematic Approach to Developing Its Guidance. GAO-14-629T. Washington, D.C.: July 23, 2014.
Fair Labor Standards Act: Department of Labor Needs a More Systematic Approach to Developing Its Guidance. GAO-14-69. Washington, D.C.: December 18, 2013.

Women’s Bureau
Overview
What the Women’s Bureau Does
The Women’s Bureau is responsible for promoting the status of wage-earning women, improving their working conditions, increasing their efficiency, and advancing their opportunities for profitable employment. The Women’s Bureau also focuses on the needs of vulnerable women in the workforce.

Agency Use of Guidance
1. Informational Fact Sheets: Used to support the Women’s Bureau’s role in disseminating its own research.
2. Technical Assistance: Women’s Bureau officials told us that 10 regional offices, as well as staff in the national office, provided technical assistance.
3. Frequently Asked Questions

Target Audiences
Guidance was distributed either to targeted groups or broadly to more than 50,000 people. The target audiences for the Bureau’s informational fact sheets were typically working women, their employers, and the organizations that represent them.

Initiation, Development, Review, and Dissemination
Initiation: When deciding to initiate guidance, officials told us that the relevant program office typically decided whether a fact sheet was needed.
It then either researched the subject itself or contracted out the research. Leadership sometimes initiated guidance if it concerned a “burning issue.” Ideas for potential Women’s Bureau initiatives and academic research came from all staff, both in the national and regional offices, and stemmed from internal research, including current news and events.

Written guidance review policy? No.
Review decisions documented? Yes.
Dissemination methods
Website, e-mail, social media, workshops, and listening sessions.

Development and Review: Women’s Bureau officials told us that after initiating guidance, the relevant program office drafted the guidance document. Managers and staff in the Women’s Bureau’s Office of Policy and Programs reviewed the draft guidance document before it was reviewed by the Deputy Directors and the Director. Officials used a routing slip or memo to guide the draft through the review process and to document the review. Women’s Bureau officials told us they worked closely with the Office of the Solicitor, the Office of the Assistant Secretary for Policy, the grant officer, and other affected Department of Labor components as they drafted and reviewed guidance. They also used the departmental “Secretary’s Information Management System” for internal tracking. Fact sheets were shared with external stakeholders when the guidance was issued.

Dissemination: Women’s Bureau officials told us that most documents and other resources were available on their website and that they used the “Gov Delivery” system to disseminate new information and documents. Officials also disseminated information through the Director’s blog and through workshops conducted by regional office staff, such as those that led up to the national White House Summit on Working Families. Listening sessions were another form of dissemination conducted by the regional offices.
Officials told us they received feedback from end users through e-mails, phone calls, and comments made by participants at conferences.

GAO Contact
Michelle Sager at (202) 512-6806 or sagerm@gao.gov.

Staff Acknowledgments
In addition to the contact named above, key contributors to this report were Tim Bober, Assistant Director; Alexandra Edwards; Sandra Baxter; Shirley Hwang; Shelby Kain; Andrea Levine; Sarah Sheehan; and Wesley Sholtes. In addition, Jacques Arsenault, James Bennett, Marcia Crosse, Rachel Frisk, Ricky Harrison, Anne K. Johnson, Sarah Kaczmarek, Jacqueline M. Nowicki, Cynthia Saunders, Stewart W. Small, and Betty Ward-Zukerman provided key assistance.
Agencies rely on guidance to clarify regulatory text or statutes, to respond to the questions of affected parties in a timely way, and to inform the public about complex policy implementation topics. Unlike regulations, guidance is not legally binding. GAO was asked to examine guidance processes at four departments. This report reviews (1) how agencies use guidance and decide to issue guidance rather than regulations; (2) the extent to which agencies follow applicable criteria and leading practices in their policies, procedures, and practices for producing guidance; and (3) how agencies disseminate guidance to ensure public access and feedback. GAO reviewed guidance processes at all 25 components in the four departments that (1) were within the requesting committee’s jurisdiction and (2) engaged in regulatory or grant activities. GAO reviewed relevant requirements, written procedures, guidance, and websites, and interviewed agency officials. The four departments—Agriculture (USDA), Education (Education), Health and Human Services (HHS), and Labor (DOL)—and their selected components used guidance for multiple purposes, such as clarifying or interpreting regulations and providing grant administration information. The terminology used for agency guidance varied, and agency components issued varying amounts of guidance, ranging from about 10 to over 100 guidance documents each year. The key criterion used when deciding whether to issue a regulation or guidance was whether the policy needed to be binding; in such cases, agencies proceeded with regulation. Officials reported that they routinely consulted with legal counsel when making these choices. Departments typically identified few of their guidance documents as “significant,” generally defined by the Office of Management and Budget (OMB) as guidance with a broad and substantial impact on regulated entities. All four departments identified standard practices to follow when developing guidance.
They addressed OMB's requirements for significant guidance to varying degrees and could strengthen internal controls for issuing guidance. Education and USDA had written departmental procedures for approval of significant guidance as required by OMB. DOL's procedures were not available to staff and required updating. HHS had no written procedures. Making these procedures available could help ensure that components consistently follow OMB's requirements. In the absence of specific government standards for non-significant guidance—the majority of issued guidance—the application of internal control standards is particularly important. The 25 components GAO reviewed addressed some control standards more regularly than others. For example, few components had written procedures to ensure consistent application of guidance processes. All components could describe standard review practices and most used tools to document management approval of draft guidance. Of the 25 components, 15 cited examples in which they conferred with external nonfederal stakeholders while developing guidance and nearly half did not regularly evaluate whether issued guidance remained current and effective. Components used different strategies to disseminate guidance and all relied primarily on posting the guidance on their websites. As such, components should follow applicable requirements for federal websites. One of these requirements—easy access to current and relevant guidance—could also facilitate opportunities for affected parties and stakeholders to provide feedback on those documents. USDA, DOL, and Education posted their significant guidance on a departmental website as directed by OMB; HHS did not. Components used several strategies—including organizing guidance by audience or topic and highlighting new or outdated guidance—to facilitate access.
However, GAO identified factors that hindered online access, including long lists of guidance and documents dispersed among multiple web pages. All components GAO studied collected web metrics and many used them to evaluate online guidance dissemination. However, many of these components did not use metrics to improve how they disseminated guidance through their websites. Beyond their websites, components found other ways to disseminate and obtain feedback on issued guidance, including focus groups, surveys, and direct feedback from the public at conferences, webinars, and from monitoring visits. GAO is recommending that HHS and DOL ensure consistent application of OMB requirements for significant guidance. GAO also recommends that USDA, Education, HHS, and DOL strengthen the use of internal controls in guidance production processes and improve online guidance dissemination. USDA, Education, HHS and DOL generally agreed with the recommendations.
The Department of Defense (DOD) operates a worldwide logistics system to buy, store, and distribute inventory items. Traditionally, the Defense Logistics Agency (DLA) buys and stores consumable items (such as food, clothing, and hardware supplies) in large quantities until they are needed by the military services. Faced with rising supply acquisition costs and increasing competitive pressures, some private sector companies have developed new inventory management practices to reduce inventories and operating costs. Over the past 5 years, we have compared DOD’s inventory practices for these items to the practices of a number of progressive private sector firms with similar operations. The Secretary of Defense created DLA in 1962 to be the wholesale manager of “consumable supplies” commonly used by the military services, other DOD components, and federal agencies. Consumable supplies are items discarded after use rather than repaired. Through its wholesale system, DLA manages 3.6 million of DOD’s 4.4 million consumable items. For about 30 years, DLA has generally bought items in large quantities, stored the items until the services requested them, and then shipped them to the services’ retail facilities. To receive, store, and issue these items and other inventories to the military services and other DOD organizations throughout the world, DLA maintains over 1,400 warehouses at 27 distribution depots, which are DOD facilities with several large warehouses that store a variety of supplies, and also uses other storage locations. The locations of the 27 distribution depots are shown in figure 1.1. Consumable items are classified as hardware (construction, electronics, general, and industrial) and personnel (clothing, food, and medical) items. As of June 1994, hardware items accounted for 77 percent of DLA’s wholesale inventory, and personnel items accounted for 23 percent (see fig. 1.2).
DLA reported that its June 1994 inventory of consumable items was about $10.2 billion. (Figure 1.2 breaks out the hardware categories: electronics, $2.2 billion; general, $2.2 billion; and industrial, $1.5 billion.) During fiscal year 1994, DLA sold $5.6 billion of consumable items to the services and other federal agencies. The services use large amounts of these items in their peacetime operations. For example, the services operate about 25 industrial centers where large amounts of maintenance and repair supplies are used for regularly scheduled maintenance of equipment and weapon systems. The services operate 14 recruit induction centers and over 300 military exchange stores that issue clothing items to military personnel. In addition, large quantities of medical supplies and food are consumed annually at U.S. military hospitals and troop dining halls. The services at the retail facilities usually store these items at different locations until they are needed by service personnel, who are the ultimate end-users. Like DOD, private sector companies use similar consumable supplies in their day-to-day operations. In addition, both DOD and the private sector have a common requirement to control costs while meeting customer needs. During the 1980s, competition and increasing inventory costs forced many private sector companies to assess inventory management practices and to adopt new methods that reduced inventories and associated operating costs. Recognizing that inventories can be reduced without affecting supply availability, private sector companies began to change the way they buy, store, and distribute large quantities of supplies. The private sector has tried new techniques on consumable items because these items are generally standard and low unit cost, are commonly stocked by several suppliers, and are used in large quantities. Many private sector facilities are analogous to the services’ facilities in DOD’s supply system.
To identify ways for DOD to reduce inventory costs, while maintaining quality service, over the past 5 years we have compared DOD’s logistics practices involving DLA-managed consumable items with those of companies that have adopted best practices for their logistics operations. Best practices are often defined as implementation of the most efficient means of inventory management to improve cost savings for the organization and enhance levels of service for the customer. On December 3, 1993, the former Chairman (now ranking minority member), Subcommittee on Oversight of Government Management and the District of Columbia, Senate Committee on Governmental Affairs, asked us to review DOD’s efforts to adopt best practices for consumable items managed and distributed by DLA. Specifically, the former Chairman asked us to address DOD’s progress in adopting the practices recommended in our prior five reports in which we compared DOD’s logistics practices with those of the private sector. This report summarizes our past reviews and addresses (1) the extent to which DOD has adopted the specific practices we recommended, (2) the savings and benefits being achieved through the use of these practices, and (3) DOD’s overall progress in improving consumable item management. To obtain information on DOD’s logistics practices, policies, and procedures, we contacted officials from the following organizations: Office of the Deputy Under Secretary of Defense, Logistics, Washington, Office of the Assistant Secretary of Defense, Health Affairs, Washington, Headquarters, Defense Logistics Agency, Alexandria, Virginia; Defense Construction Supply Center, Columbus, Ohio; Defense Industrial Supply Center, Philadelphia, Pennsylvania; Defense Personnel Supply Center, Philadelphia, Pennsylvania; Defense Electronics Supply Center, Dayton, Ohio; and Defense Distribution Depot Susquehanna, Pennsylvania. 
Our discussions focused on (1) the inventory management practices that DOD is using for consumable items; (2) commercial practices, programs, and tests underway or planned to improve DOD operations and reduce costs; and (3) DOD officials’ positions on the use of best practices as alternatives to traditional DOD inventory practices. We reviewed and analyzed detailed information on past, present, and future annual usage amounts; inventory levels; and other related inventory factors, such as calculations of days of supply on hand. Days of supply is a measure of how efficiently a business manages its inventory investment and is calculated by dividing the aggregate inventory by the sales per day at cost. To determine the nature and extent of DOD’s progress in adopting best practices, we visited the following military organizations: Walter Reed Army Medical Center, Washington, D.C.; Kenner Army Community Hospital, Fort Lee, Virginia; Blanchfield Army Community Hospital, Fort Campbell, Kentucky; Malcolm Grow Air Force Medical Center, Andrews Air Force Base, Langley Air Force Hospital, Langley Air Force Base, Virginia; Portsmouth Naval Hospital, Portsmouth, Virginia; Director of Logistics, Fort Lee, Virginia; and Air Force Services, Langley Air Force Base, Virginia. These organizations are involved in initiatives that are to improve DOD’s logistics operations. At these locations, we discussed with logistics personnel and medical and food service end-users the results of the initiatives and the impacts on supply operations and customer satisfaction. We also discussed with private sector companies that act as prime vendors to these organizations the initiatives, the impact on their operations, and the feasibility of adopting these programs to encompass a greater part of DOD’s operations. 
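The days-of-supply measure defined above can be restated as a short calculation. This is only an illustrative sketch: the function name and the 365-day year are our own choices, while the input figures come from the report (DLA's June 1994 consumable inventory of about $10.2 billion and fiscal year 1994 sales of $5.6 billion).

```python
def days_of_supply(inventory_value, annual_sales_at_cost):
    """Days of supply: aggregate inventory divided by sales per day at cost."""
    sales_per_day = annual_sales_at_cost / 365.0
    return inventory_value / sales_per_day

# Report figures: ~$10.2 billion of inventory against $5.6 billion in annual sales.
dos = days_of_supply(10.2e9, 5.6e9)
print(round(dos))  # 665 -- roughly 1.8 years of stock on hand
```

Applied to these aggregate figures, the wholesale system was holding well over a year and a half of stock, which is consistent with the multiyear supply levels described elsewhere in this report.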
These companies included (1) McKesson Drug Company, Landover, Maryland; (2) Owens & Minor Corporation, Savage, Maryland; (3) Bergen Brunswig Corporation, Norfolk, Virginia; and (4) Sandler Foods, Virginia Beach, Virginia. In addition, we visited the Vanderbilt University Medical Center, Nashville, Tennessee—highlighted in our 1991 report because of its leadership in hospital logistics management—to discuss the steps it has taken to improve its logistics operations since 1991 and any planned changes to adopt new and best inventory management practices. We conducted our review from January 1994 to December 1994 in accordance with generally accepted government auditing standards. We previously reported that DOD’s large inventory levels reflect DOD’s management practice of buying and storing supplies at both wholesale and retail locations to ensure they are available to customers—sometimes years in advance of when actually needed. Storing inventory at many different locations results in inventory that often turns over slowly, thereby producing large amounts of old, obsolete, and excess items. In the private sector, many companies avoid these types of problems by using inventory practices that shift responsibilities for storing and managing inventory to suppliers. In fact, companies that are using the most aggressive practices no longer store inventory in intermediate locations at all; their suppliers deliver inventory only when needed. As a result, we recommended that DOD test the applicability of using similar types of private sector practices as an alternative to its current multilayered system. DOD concurred with our recommendation, but implementation progress and results have varied with the different commodities. Specific details on DOD’s progress in adopting these best practices are described in chapter 3. 
At a time when many private sector companies were adopting practices that reduced intermediate storage locations while improving service levels, DOD continued to use a logistics system that stored duplicative inventories between suppliers and end-users. DOD often stored inventories in as many as four different layers. In 1992, at the wholesale level, DLA stored $11 billion worth of consumable items in distribution depots and warehouses located throughout the United States. Our analysis indicated that this inventory could last DOD an average of over 2 years based on past demand. At the retail level, the services hold inventory at three different layers near the locations where the items are used—base warehouses, central storerooms, and end-user locations. Service facilities we visited had retail stock on hand sufficient to last from 1 month to over 5 years. Several factors contribute to this multilayered system, including DOD’s philosophy of relying on large stock levels at wholesale and retail locations to readily meet customer needs and the long procurement lead times that require DOD to hold inventory to ensure items are available until ordered supplies are received from suppliers. The private sector provides a sharp contrast to DOD’s methods of managing and distributing consumable items. Private sector companies have modified their inventory practices to reflect an increasingly competitive business environment. Companies have reduced their number of suppliers by establishing a close partnership with only a few key ones that are often close to the end-users’ facilities. Typically, suppliers are contracted to manage and distribute a company’s supplies for a particular commodity or class of items. By using aggressive direct delivery arrangements with the suppliers, companies bypass the need for intermediate storage and handling locations. 
Once end-users order supplies, the suppliers deliver the items directly to the end-users’ facilities close to the time the items are needed, commonly called “just-in-time.” This technique allows companies to eliminate unnecessary inventory layers and reduce on-hand inventories and operating costs. Figure 2.1 compares DOD’s multilayered system with a just-in-time inventory system. Private sector companies that remain competitive in today’s marketplace are able to hold less inventory, fill orders more quickly, turn stock over more frequently, and obtain replenishment supplies significantly faster than DOD. For example, figure 2.2 illustrates that for clothing stocks, DOD’s multilayered system results in added procurement and distribution time and increased inventories at all levels. Since 1991, we have issued five reports that compared commercial logistics practices with similar DOD operations for consumable items. In these reports, we described various innovative private sector inventory management practices and highlighted companies that had successfully used the practices to streamline their operations. Some companies indicated that they had achieved large savings for personnel items by standardizing items, eliminating bulk storage locations, and most importantly, relying on prime vendors that procured items from many sources, and then warehoused and distributed these items to their clients when and where needed. Other companies adopted practices that located suppliers at a supplier park near industrial centers, which allowed hardware items to be delivered directly to end-users frequently and regularly. As a result, we recommended that DOD conduct tests to demonstrate the applicability of using concepts such as prime vendors and supplier parks to better manage inventories at DLA and military facilities. 
Summarized below are the key issues addressed in our reports on medical items, food, clothing and textiles, maintenance and repair items, and electronics items and the actions DOD has taken in response to our recommendations. We reported that DOD’s health care system could save millions of dollars by increased use of inventory management practices pioneered by leading civilian hospitals. Military medical facilities and warehouses we visited held multiple layers of supplies to satisfy peacetime requirements and initial supplies for wartime requirements. The hospital warehouses we visited held inventory that would last for approximately 1 to 3 months. In addition, DLA, through its depot system, stored another layer of supplies that would last approximately 8 months. Some items in this system were packed in the 1940s and 1950s. Figure 2.3 shows the medical inventory in one warehouse at the DLA depot near Mechanicsburg, Pennsylvania. In contrast, very progressive civilian hospitals maintained much smaller levels and fewer layers of supplies and had no depot system. These hospitals, through improved ordering systems, standardization of supplies, and better communication with vendors, greatly reduced inventories by having prime vendors deliver supplies where and when needed. One hospital, the Vanderbilt University Medical Center, reduced inventory levels from $4.5 million to $2.8 million (38 percent) between 1986 and 1991 by taking an aggressive approach to inventory management, including requiring its two prime vendors to deliver supplies within 4 hours of ordering. Due to the significant opportunities for DOD to save millions of dollars, we recommended in our December 1991 report on medical supplies that DOD test the use of inventory management practices pioneered by leading civilian hospitals such as using prime vendors to deliver supplies directly to medical facilities. 
DOD agreed with our recommendation and has established prime vendor programs at about 150 military hospitals nationwide. We reported that, while making some limited use of prime vendors, DOD’s food system was generally outmoded and inefficient. DOD routinely stored large stocks of food throughout the military supply system. As of the end of fiscal year 1992, for example, DLA had, on average, an 82-day supply of food for peacetime operations, valued at more than $150 million. Military base warehouses also held large food inventories, worth about $200 million. As a result, food items often sat on shelves for months or even years before reaching end-users. When this occurs, food items can spoil and become unfit for human consumption. During visits to military installations, we found numerous items that had exceeded their inspection date—the date food producers stamp on their products to indicate when the first signs of deteriorating food quality may be detected. The private sector avoided many of the problems experienced by the military food supply system by relying on prime vendors to move food from suppliers to end-users. Because of heavy competition within the food industry, prime vendors had a financial incentive to cut their costs, keep their prices low, and provide quality service. Prime vendors deal with end-users (such as hospitals, restaurants, and hotels) on a daily basis, including taking orders and making direct deliveries. By relying on the prime vendors, end-users do not incur the direct costs of holding, handling, and transporting food. As a result, we believed many of the costs DOD incurred managing food inventories were unnecessary because the commercial distribution network could supply food to DOD much more efficiently.
Because of the possible inventory reductions and cost savings that could be achieved by using private sector techniques, we recommended in our June 1993 report that DOD conduct a demonstration project of an expanded use of prime vendors delivering food directly to military dining facilities. As we recommended, DOD is currently testing the expanded use of food prime vendors to over 200 military dining facilities in the southeastern part of the United States. At a time when private sector companies were cutting costs by minimizing inventories, DOD continued to store redundant levels of clothing and textile inventories at both wholesale and retail locations and to hold such inventories for longer periods of time than the private sector firms we visited. As previously shown in figure 2.2, private sector distributors maintain low inventories because they depend on suppliers delivering goods when they are needed. These inventory differences reflect contrasting approaches to meeting customers’ needs. DOD’s system attempts to satisfy customer needs by having large clothing stocks readily available. Commercial firms, on the other hand, rely on prime vendors to manage their clothing inventories. Prime vendors use quick order and delivery systems to satisfy customer demands, relieving the need for large inventories and helping to avoid items deteriorating or becoming obsolete before they are used. One prime vendor we visited managed an agency’s employee uniform program through a central database, which enabled order information to be transmitted directly to a distribution point and issued to a customer in a few days. The agency estimated it had saved at least 15 percent of the amount it had allocated for clothing items over the previous year. 
Because DOD’s inventory practices for clothing items differed significantly from best practices used in the private sector, we recommended in our April 1994 report on clothing and textiles that DOD conduct a pilot project to demonstrate whether a prime vendor concept is beneficial in providing clothing items to military installations, particularly recruit induction centers. DOD agreed with our recommendation to test the prime vendor concept, and it expects to begin a demonstration project in January 1996. At both wholesale and retail locations, DOD stored duplicate maintenance and repair inventories that could be reduced using commercial practices. For example, about $2 billion of DLA’s $6.4 billion inventory of construction, general, and industrial supplies was invested in consumable maintenance and repair items. Some of the items in this wholesale level inventory were duplicated at the services’ retail inventories, which were at or near each of their industrial centers. For three industrial centers, one from each military service, we combined the number of days each center would take to use the available DLA wholesale and service retail inventories (see fig. 2.4). As figure 2.4 shows, the combined inventory could last as long as 7-1/2 years (2,696 days) at the Oklahoma Air Logistics Center. In contrast, two private sector companies we visited adopted unique inventory management practices to reduce maintenance and repair inventories and save operating expenses. These companies do not use a wholesale system to store and distribute items, as DOD does, but instead rely on suppliers to deliver items directly to the end-users’ facilities. One company we visited—PPG Industries—established a “supplier park,” where 10 of its suppliers provided maintenance and repair items as needed throughout each day. This park is located in a central location 600 yards from PPG’s industrial center where the items are used.
With the supplier park concept, PPG eliminated $4.5 million, or 80 percent, of its maintenance and repair inventory and saved $600,000 annually in operating costs. Another company, The Timken Company, reduced inventory levels at one location by $4 million, or 33 percent, by using direct delivery programs and customized agreements with suppliers. Timken has set a goal to reduce its inventory an additional 50 percent by establishing a supplier park facility. In our June 1993 report on maintenance and repair inventories, we recommended that DOD test best practices at its military industrial centers where large quantities of these same items are used. Specifically, we recommended that DOD test the use of supplier parks to reduce the need to store supplies in the DLA depot system and eliminate unnecessary retail inventories. DOD concurred with our recommendations and stated it planned to expand, where appropriate, the use of commercial practices. Also, DOD stated that it would investigate the applicability of using supplier parks and would determine whether a test was feasible by the second quarter of fiscal year 1994. As of July 1995, DLA had not completed its investigation into the feasibility of testing this concept. We reported that DLA stored over $2 billion of wholesale electronics supplies at distribution depots and other storage locations. This large inventory turns over slowly, about once every 4 years on average, whereas private sector suppliers often turn their stock over 4 times a year. The slow turnover of inventory costs DOD millions of dollars. Based on DLA’s September 1993 electronics inventory, we estimated DOD’s annual inventory cost to be as much as $330 million. In addition, a significant amount of DLA inventory exceeded DOD’s needs. As of September 1993, DLA categorized $231.4 million of its electronics, or 10.5 percent, as excess inventory.
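The turnover comparison above (DLA's electronics stock turning about once every 4 years versus commercial suppliers turning stock 4 times a year) rests on a simple ratio, and the $330 million annual cost estimate implies a holding-cost rate of roughly 15 percent of the $2.2 billion inventory value. The sketch below only restates that arithmetic; the 15-percent rate is inferred from the report's two figures, not stated in it, and the function name is our own.

```python
def turns_per_year(annual_sales, average_inventory):
    """Inventory turnover: how many times per year the stock is sold through."""
    return annual_sales / average_inventory

# Same units for sales and inventory; only the ratio matters here.
slow = turns_per_year(1.0, 4.0)   # 0.25 turns/year, i.e., once every 4 years
fast = turns_per_year(4.0, 1.0)   # 4 turns/year, the commercial benchmark cited

# Holding-cost rate implied by the report's electronics figures:
# $330 million of annual inventory cost on ~$2.2 billion of stock.
implied_rate = 330e6 / 2.2e9      # roughly 0.15, a 15-percent annual rate
print(slow, fast, implied_rate)
```

Slow turnover is costly precisely because the holding rate applies to the entire average inventory each year, so halving the inventory at a constant sales volume roughly halves the annual holding cost.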
During the past decade, many private sector companies adopted modern inventory management practices that significantly reduced electronics inventories, decreased procurement lead times, and saved millions in associated operating costs while improving the availability of stock. One company we visited, Bethlehem Steel Corporation, reduced its maintenance and repair inventory, including electronics, by about $16 million, or 71 percent (see fig. 2.5), and estimated that it had avoided over $47 million in inventory costs and related expenses since 1984. By establishing long-term agreements with 21 key suppliers and giving them the responsibility to manage, deliver, and stock items at over 60 end-user locations within its facility, Bethlehem Steel (1) eliminated the need to store and distribute supplies from a central warehouse, (2) increased its access to suppliers’ technical expertise, and (3) consolidated and standardized the types of items used. We believed that DOD could reduce electronics inventories by using private sector techniques, but because DOD stated that it planned to determine the feasibility of concepts like supplier parks for maintenance and repair items, we did not make recommendations in our 1994 report on electronics inventories. As discussed earlier, however, DLA has not tested or implemented this concept. DOD has made progress in improving its logistics system by adopting best practices, but further opportunities exist to achieve significant savings. For the personnel items it manages, which account for 23 percent of DLA’s inventory value, DLA has made the most progress to adopt best inventory practices by establishing prime vendor programs that have reduced inventories and improved service to the customer. As a result of its improvement efforts, DLA expects to reduce its inventories and days of supply of personnel items by over 50 percent between 1992 and 1997. 
For the hardware items it manages, which account for 77 percent of DLA’s inventory value, DLA has made the least amount of progress in using best practices. To date, DLA has not tested the most innovative just-in-time concepts we have seen used by private sector companies to reduce inventories and costs. DLA has focused its efforts on direct delivery programs at the wholesale level, which do not provide the same benefits as concepts like supplier parks. As a result, DLA estimates it will achieve about a 20-percent reduction in inventory levels and days of supply of hardware items between 1992 and 1997. Even then, hardware inventory in its depot system could last for more than 2 years. During the past 2 years, the Office of the Secretary of Defense (OSD) issued guidance to DLA and the services that emphasized the use of alternatives to DOD’s traditional logistics system to improve operations. In January 1993, OSD issued a policy stating that all DOD components are to employ direct delivery from vendors to end-users wherever it is cost-effective and responsive to the end-users’ requirements. The policy also stated that the use of existing commercial distribution systems shall be maximized, when possible. In 1994, OSD issued its logistics strategic plan, which focused on achieving improvements in logistics system performance while reducing associated infrastructure costs. According to the plan, an important aspect of achieving these improvements will involve the identification and adoption of successful government and commercial practices. The plan, for example, specifically states that DOD components should (1) implement commercial distribution of food to DOD dining facilities by September 1996 and (2) use direct delivery methods for supplying all routine medical and clothing needs by September 1997. 
The plan also states that DLA should determine the feasibility of establishing supplier parks or other commercial arrangements at one or more major defense installations by July 1994. To further its adoption of commercial practices, DLA established a program called “Buy Response Vice Inventory” in December 1992. This program, intended to minimize operating and inventory costs, encourages inventory managers to use commercial practices, such as long-term contracts, electronic data interchange systems, direct vendor delivery, and prime vendor programs. DLA also established the following goals for this program: (1) 50 percent of sales will use direct delivery and prime vendor programs by fiscal year 1997, (2) 80 percent of dollars obligated will be under long-term contracts by fiscal year 1997, and (3) 70 percent of orders with suppliers will be electronically transmitted by 1995. DLA has made the most progress in adopting best practices for personnel items. Since 1993, DLA has taken steps that use prime vendors to supply personnel items directly to military facilities. At present, DLA is establishing agreements with prime vendors to manage, store, and distribute pharmaceutical products, medical supplies, food, and clothing and textile items. DLA plans to have prime vendor programs encompass a large portion of its total logistics operations for personnel items by 1997. As a result, DLA expects the use of prime vendors and other inventory concepts that eliminate obsolete and unnecessary items to significantly decrease its wholesale personnel inventories. By 1997, DLA estimates its 1992 inventory level of these items will have decreased from $3 billion to $1.4 billion, a 53-percent reduction. As of June 1994, DLA’s personnel inventories were valued at $2.3 billion. Based on DLA’s projected dollar-value sales for personnel items, we believe this reduction in inventory will significantly decrease the number of days of supplies on hand in DLA depots (see fig. 3.1).
As figure 3.1 illustrates, medical inventory levels are expected to decrease from 176 days to 77 days of supply; food inventory levels are expected to decrease from 82 days to 20 days of supply; and clothing and textile inventories are expected to decrease from 725 to 365 days of supply. In January 1993, DLA implemented a prime vendor program for pharmaceutical products, followed in June 1993 by a program for medical and surgical supplies. By the end of 1995, about 150 DOD medical facilities will be using these prime vendor programs in 21 geographic regions across the United States. A prime vendor is a distributor that has been awarded a contract to store and distribute various medical products to individual military hospitals, which reduces the need for DOD wholesale and retail systems. Under this concept, DLA negotiates prices for medical products directly with the manufacturers or suppliers. DLA then contracts with the prime vendor to buy the products at these prices and distribute the products directly to the military hospital within 24 hours of receiving an order. In most cases, the prime vendor charges a distribution fee for these services. Once the products are delivered to the hospitals, DLA pays the prime vendor within 15 days. The use of this concept has allowed DOD to reduce stock levels at both wholesale and retail locations. During fiscal year 1994, DLA reduced wholesale pharmaceutical inventory $48.6 million (49 percent), and it estimates it will reduce its total medical inventory (which includes pharmaceutical, surgical, and dental supplies) by $97 million (37 percent) over the next 3 years. The prime vendor program also enables DOD hospitals to reduce retail inventory levels and save millions in operating costs. For example, we compared the pharmaceutical inventory levels of three military hospitals before and after the prime vendor program was started and found that these hospitals had reduced their total pharmaceutical inventory levels (see fig. 
3.2). Other DOD hospitals have achieved similar reductions. In addition to a 48-percent reduction in pharmaceutical inventories ($3.8 million), Walter Reed officials estimate that the hospital saves over $6 million a year in related inventory management expenses by using the prime vendor system. These savings include (1) $2.2 million through reduced paperwork and administrative costs, (2) over $2.9 million because its products generally cost less, (3) $504,000 through the reduction of 15 personnel positions previously required for inventory management, and (4) $397,000 by reusing material handling equipment and warehouse space previously needed for medical supplies. Figure 3.3 shows a medical warehouse that has been converted into a training facility for medical personnel. While DOD has achieved inventory reductions and cost savings, further opportunities exist to build on this progress by adopting the most aggressive practices being used in industry. Because each service uses this program differently, and some continue to retain unnecessary inventory layers, DOD has not realized the same benefits that civilian hospitals have achieved. Military hospitals we visited still held inventories that could last the hospitals an average of 42 to 82 days. In comparison, the Vanderbilt University Medical Center reduced or eliminated unnecessary inventory layers and decreased hospital inventory levels to between 2 and 15 days of supply by taking an aggressive approach to inventory management. Vanderbilt, by establishing a close partnership with its prime vendors, arranged for supplies to be delivered several times a day, in many cases, directly to the point of use within the medical center. If DOD hospitals were to use a similar system, they could reduce inventories, in some cases, by as much as 75 percent. 
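As a consistency check on the Walter Reed figures, the four savings components cited do sum to just over $6 million a year, and a $3.8 million reduction equal to 48 percent implies a pre-prime-vendor pharmaceutical inventory of roughly $7.9 million:

```python
# Walter Reed's reported annual savings components (figures from the report).
savings = {
    "reduced paperwork and administrative costs": 2.2e6,
    "lower product costs": 2.9e6,
    "15 eliminated inventory-management positions": 504e3,
    "reused handling equipment and warehouse space": 397e3,
}
total = sum(savings.values())
print(f"Total annual savings: ${total / 1e6:.3f} million")  # $6.001 million

# A $3.8 million reduction representing 48 percent implies the starting level:
reduction, pct = 3.8e6, 0.48
starting_inventory = reduction / pct
print(f"Implied pre-program inventory: ${starting_inventory / 1e6:.1f} million")  # $7.9 million
```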
During 1994, DLA initiated a prime vendor program that it estimated would reduce the amount of food supplies stored in DLA depots by $69 million, or 74 percent of peacetime stocks, by 1998. In March 1994, DLA awarded a contract to a full-line food distributor to serve as a prime vendor for delivering certain foods (semiperishable, chilled, and frozen) to dining facilities at four military bases in the Norfolk, Virginia, area. DLA established this pilot project to determine the feasibility of using prime vendors instead of the traditional military supply system. The project’s goals were to improve service and reduce costs and inventories. Four months after the project was started, two service facilities we visited—Fort Lee Army Base and Langley Air Force Base—had realized inventory reductions of over $165,520 (84 percent) and $66,005 (81 percent), respectively, at their retail warehouses. At one of these facilities, service officials were able to vacate two warehouses that previously were required to store food items. Officials we spoke with were more satisfied with the delivery service provided by the prime vendor than that provided by the military food supply system. For example, the prime vendor can deliver food to the dining facilities about 4 weeks faster than DOD can under its traditional system. In January 1995, DLA and the services began a test project to expand the food prime vendor program to over 200 military dining facilities located in the southeastern part of the United States. All four services are participating in the 1-year project, which includes military facilities located in metropolitan and rural areas as well as Navy ships. Through this project, military personnel electronically order food items from a distributor who is required to deliver the items to the dining facility within 48 hours. 
As a part of this test, DOD will measure the monetary cost and benefits, customer satisfaction, and distributor performance by comparing the traditional military food supply system to the prime vendor method. If this project is successful, DOD plans to expand the program to all dining facilities in the continental United States and ships by the end of fiscal year 1997. Under a demonstration project that is expected to start in January 1996, a military service recruit induction center will test the prime vendor concept for clothing items. The project is DOD’s first effort to use a prime vendor to manage clothing and textile items. DLA plans to award a contract to a prime vendor that will be responsible for the manufacture, quality, storage, and delivery of all clothing items provided to the Air Force’s recruit induction center at Lackland Air Force Base, San Antonio, Texas. Items are expected to be delivered within 10 days after the orders are placed, with a delivery goal of 3 days by the 10th month of the program. If the test proves successful, DOD plans to apply it to other service locations. DLA’s adoption of best practices is least advanced for hardware items. Although DLA is examining the potential application of some commercial practices for hardware items, DLA’s overall progress is slow and results are limited. The central focus of DLA’s efforts to improve management of hardware items has been to expand the use of direct delivery programs at the wholesale level. By using long-term contracting agreements and electronic data interchange systems, DLA makes arrangements with manufacturers or suppliers to deliver items directly to the services’ retail facilities. DLA expects the use of direct vendor delivery to result in better service and reduced inventory levels and to eliminate the cost of receiving, storing, and issuing these items from wholesale depots. 
In fiscal year 1994, DLA reported that 13 percent of its hardware dollar-value sales resulted from direct delivery programs at this level. According to DLA, it expects to increase the use of direct delivery programs so that they account for 30 percent of all hardware dollar-value sales by 1997. These direct delivery programs, however, neither eliminate costs to manage, store, and distribute items at a service’s retail location nor provide the same quick response achieved through best inventory practices such as the supplier park concept. With direct delivery programs, requisitions are sent from the services to DLA, where the orders are then electronically relayed to the supplier or manufacturer. The length of time from requisition to delivery can be 30 days or longer. In comparison, private sector companies using just-in-time techniques often receive supplies from their key suppliers within hours after ordering. The use of such techniques allows these companies to reduce or eliminate the need to manage, store, and distribute these items from warehouse locations at their facilities. DLA’s direct delivery programs provide only incremental reductions from hardware inventory levels held in 1992. DLA estimates that hardware inventories of $8 billion in 1992 will decrease to $6.4 billion in 1997, or approximately 20 percent. As of June 1994, DLA’s hardware inventories were valued at $7.8 billion. Much of this overall reduction will result from disposing of excess and obsolete items. Even then, based on DLA’s projected inventory and demand for the four groups of hardware items (construction, electronics, general, and industrial), it could have enough inventory on hand to last for an average of between 795 and 1,381 days (see fig. 3.4). In comparison, private sector suppliers of similar types of items often hold only 90 days of supply to meet customer needs. 
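The 20-percent figure follows from the reported inventory levels, and the days-of-supply gap can be put in dollar terms. The illustration below assumes, roughly, that the low end of the report's range (795 days) applies to the entire $6.4 billion projected inventory; the commodity groups actually differ in demand, so this is only an order-of-magnitude sketch.

```python
# DLA's projected hardware inventory reduction (figures from the report).
inv_1992, inv_1997 = 8.0e9, 6.4e9
reduction_pct = (inv_1992 - inv_1997) / inv_1992 * 100
print(f"Projected reduction: {reduction_pct:.0f}%")  # 20%

# Rough illustration: if $6.4 billion represents 795 days of supply (the low
# end of the report's range), the implied average daily demand is:
daily_demand = inv_1997 / 795
# and a private-sector-style 90-day supply would require only:
ninety_day_inventory = daily_demand * 90
print(f"90-day supply: ${ninety_day_inventory / 1e9:.2f} billion")  # $0.72 billion
```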
Companies have applied innovative practices at industrial centers where they use large quantities of consumable hardware items, such as bearings, valves, and fasteners, to maintain and repair their equipment. According to private sector officials, these items offer the greatest opportunity because they are generally standard, used in large quantities, and commonly stocked by several suppliers. Through concepts like supplier parks, companies have reduced similar types of inventories and the costs to manage and store them. For example, PPG Industries and Bethlehem Steel Corporation have used supplier parks and similar just-in-time concepts to eliminate as much as 80 percent of their consumable item inventories. In addition, these companies estimate that they have saved millions in related operating costs. In September 1994, DLA contracted for a study of the feasibility of using supplier parks at three maintenance and repair facilities. In December 1994, the contractor reported that the concept of a supplier park was feasible and strongly recommended that DLA give serious consideration to using this concept at these facilities. The study concluded that a supplier park concept would provide enhanced customer support in terms of quicker response times and improved material availability while reducing both operating and inventory costs at DLA depots and maintenance and repair facilities. The contractor estimated first-year savings to DOD of between $9.9 million and $16.8 million. According to DLA, it plans to continue studying the feasibility of this concept. The success of DOD’s initiatives to improve inventory management practices will depend on DOD’s ability to change its traditional inventory management philosophy. Private sector companies have moved away from this philosophy in an attempt to lower the cost of doing business, provide better service, and remain competitive. 
Private sector companies have changed their philosophy by relying on established commercial distribution networks where suppliers have a financial incentive to make their operations efficient and achieve savings. In response to competition, these suppliers must be extremely cost conscious and responsive to their customers in order to stay in business. DOD can achieve similar results by adopting a philosophy in which it relies more on the existing distribution network that supports the private sector. Another factor is that DOD will be continuously challenged to identify and adopt the best practices being used to improve logistics operations. Private sector companies that have achieved the greatest success continually seek new and better practices to further improve operations. For example, Vanderbilt University Medical Center officials told us that the center plans to take the next step with its inventory system by controlling inventory consumption through closer supply arrangements with its prime vendor. According to these officials, the latest concept involves a risk-sharing strategy designed around the development of an agreed-upon cost per medical treatment. Under this arrangement, the doctors, nurses, and the prime vendor agree on a standard medical supplies listing for each type of medical treatment. When a patient is admitted to the hospital for treatment, the prime vendor will be notified and a standard set of supplies will be delivered to the patient’s floor. This concept is expected to minimize the amount of supplies used by the hospital staff to that amount necessary to perform the required treatment. The monetary risks and benefits of this arrangement are shared by both the prime vendor and the hospital. DOD has made progress in adopting best inventory practices for consumable items, but it could do more to further reduce inventories and operating costs. 
Private sector companies that have successfully adopted innovative just-in-time concepts for consumable items have taken several key steps. First, they selected high usage, low unit cost items to test these practices. Second, they arranged for suppliers to make small, frequent deliveries of these items to a point close to where the items are used. As they gained confidence in this new system and as suppliers refined the distribution process, the companies expanded this service to include a wider range of items. Finally, they established a close partnership between the supplier and end-user, thereby enabling the supplier to use its expertise and the pre-existing distribution network to maximize savings for its customers. As discussed in chapter 2, these savings can result in as much as an 80-percent reduction in inventory levels and in millions of dollars in lower annual operating costs. For personnel items, which represent 23 percent of DLA’s inventory, DLA is beginning to take similar steps. The most successful program to date is the pharmaceutical prime vendor program, which is now in place at DOD hospitals nationwide. Thus far, DLA’s implementation of the prime vendor concept has reduced wholesale pharmaceutical inventories by 49 percent. At the retail level, one of the most aggressive DOD medical facilities applying this concept is the Walter Reed Army Medical Center, which has reported an inventory reduction of $3.8 million and an estimated savings of over $6 million annually in related inventory management expenses. To accomplish these savings, Walter Reed has turned over much of its inventory management responsibilities to the prime vendor. However, the aggressive steps taken by Walter Reed are not typical of the DOD medical facilities we visited. 
Because each service has used the prime vendor’s services differently, and some even continue to retain unnecessary inventory layers, DOD has not realized the same improvements that progressive civilian hospitals have achieved. In fact, military hospitals we visited still held inventories that could last for many weeks compared to just days for some leading civilian hospitals. DOD’s approach neither eliminates unnecessary inventory storage locations nor allows hospitals to fully use the inventory management expertise of the prime vendor. Under this approach, a partnership environment may not fully develop. For hardware items, which represent 77 percent of its inventory and a $7.8-billion investment, DLA has not taken the steps necessary to adopt just-in-time concepts. DOD operates industrial facilities that use large quantities of hardware items to repair and maintain aircraft, land vehicles, and ships. At these facilities, DOD uses the same types of hardware items for which private sector companies have established supplier parks and other innovative concepts that transfer management responsibilities to their suppliers and reduce inventory levels. Although we recommended a DOD test of the supplier park concept in our June 1993 report, DOD is still studying the feasibility of the concept 2 years later. To ensure that the medical prime vendor programs are more consistently and aggressively applied, we recommend that the Secretary of Defense direct the secretaries of each of the military services to use the prime vendor program in enhancing the partnership between the DOD medical facilities and the prime vendors. Actions that would enhance such an arrangement include delivering supplies directly to the point of use in the hospital, integrating the vendor into the day-to-day supply operations, delivering supplies on a frequent basis (several times a day), and using the vendor’s expertise to improve inventory management operations. 
These actions could further reduce inventory layers in the DOD system and bring military medical facilities closer to the levels of success achieved by progressive private sector hospitals. To encourage DLA and the services to establish supplier parks for hardware items, we recommend that the Secretary of Defense direct the secretaries of each of the military services and the Director of DLA to initiate several actions that have been taken by private sector companies when developing these concepts. Specifically, each service and DLA should work together to

- identify an Army, Navy, and Air Force site that will test the supplier park concept;
- identify specific items with high usage rates and low unit costs to use in a test program;
- negotiate with prospective suppliers to perform these inventory management functions during the test program;
- establish partnerships with the suppliers allowing them access to DOD inventory information and facilities, which will enable the suppliers to deliver items directly to the end-user;
- develop evaluation criteria that will measure the total costs of inventory delivered under the test program in order to compare costs and benefits to the total cost being incurred under the current system; and
- establish aggressive milestones for the initial phases of this test program to achieve early results.

In commenting on a draft of this report, DOD generally agreed with the findings, conclusions, and recommendations and stated that, while significant gains have occurred, further progress can be made in adopting best commercial practices in providing both personnel and hardware items to the military services. DOD stated that DLA, in its role as commodity manager for medical supplies, is taking the lead in enhancing prime vendor arrangements for medical items. DOD also stated that DLA is aggressively pursuing the establishment of a supplier park at two military service facilities in Texas. 
According to DOD, DLA has contracted with a consultant to complete an analysis of existing purchasing systems, compile an economic analysis, and develop an implementation plan. The consultant is expected to complete a report by August 31, 1995. DLA has not yet established specific milestones to test the operation of the concept. An actual operational test of the concept can help DLA (1) determine whether the concept can be applied to its logistics operations and (2) evaluate the feasibility of expanding the concept at other locations having similar inventory requirements.
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) efforts to adopt best inventory management practices, focusing on: (1) whether DOD has adopted the specific practices recommended for consumable items; (2) the savings and benefits being achieved through the use of these practices; and (3) DOD overall progress in improving consumable item management. GAO found that: (1) the Defense Logistics Agency (DLA) has taken steps to improve its logistics practices and reduce consumable inventories, although it could make further improvements with items such as bolts, valves, and fuses that cost millions of dollars to manage and store; (2) DLA inventories are expected to decrease only 20 percent by 1997, but these inventories could last over two years; (3) DLA has not tested the most innovative commercial practices of using supplier parks and other techniques that give established distribution networks the responsibility to manage, store, and distribute inventory on a frequent basis directly to end users; (4) DLA use of best inventory practices is exemplified for personnel items where prime vendors are used to supply personnel items directly to military facilities; (5) DLA expects to reduce the 1992 personnel item inventory by 53 percent in 1997; and (6) DOD hospitals still hold larger inventories than those civilian hospitals that have reduced inventories through effective partnering arrangements with prime vendors.
In the absence of international cash donation management policies, procedures, and plans, DOS developed an ad hoc process to manage the cash donations flowing to the U.S. government from other countries for Hurricane Katrina relief efforts. By September 21, about $115 million had been received and as of December 31, 2005, DOS reported that $126 million had been donated by 36 countries. Our review noted that DOS’s ad hoc procedures did ensure the proper recording of international cash donations and we were able to reconcile the funds received with those held in the designated DOS account at Treasury. Also, an NSC-led interagency working group was established to determine uses for the international cash donations for domestic disaster relief. In October 2005, $66 million of the $126 million donated had been accepted by FEMA under the Stafford Act and used for a Hurricane Katrina relief grant. As of March 16, 2006, the other $60 million from international donations remained undistributed. Once accepted by FEMA under the Stafford Act, funds would be limited to use on activities in furtherance of the act. We were told that the NSC-led interagency working group did not transfer the funds to FEMA because it wanted to retain the flexibility to spend the donated funds on a wider range of assistance than is permitted under the Stafford Act. During this period and while deliberations were ongoing, the funds were kept in an account that did not pay interest, thereby diminishing the purchasing power of the donated funds and losing an opportunity to maximize the resources available for relief. Under the Stafford Act, FEMA could have held the funds in an account that can pay interest, but Treasury lacks the statutory authority to credit DOS-held funds with interest. A number of options could be considered to address this situation if there are dual goals of flexibility and maintaining purchasing power. 
Table 1 below shows the dates of key events in the receipt and distribution of the international cash donations according to documentation received and interviews with DOS and FEMA officials. In early September 2005, FEMA officials identified an account at the U.S. Treasury for recording international cash donations and a number of potential uses for the donations that would help meet relief needs of the disaster. By September 21, 2005, about $115 million in foreign cash donations had been received. In a paper submitted to the NSC-led interagency working group, dated September 22, 2005, DOS recognized that every effort should be made to disburse the funds to provide swift and meaningful relief to Hurricane Katrina victims without compromising needed internal controls to ensure proper management and effective use of the cash donations and transparency. FEMA officials told us that on September 23, 2005, they had identified and proposed to the NSC-led interagency working group that the international cash donations could be spent on the following items for individuals and families affected by Hurricane Katrina: social services assistance, medical transportation, adapting homes for medical and handicap needs, job training and education, living expenses, building materials, furniture, and transportation. At NSC’s request, on October 7, 2005 FEMA presented more detailed proposals for using the foreign donations. On October 20, 2005, with the NSC-led interagency working group consensus, DOS transferred to FEMA $66 million of the international donations to finance case management services to help up to 100,000 households affected by Hurricane Katrina define what their needs are and to obtain available assistance. As of February 2006, the remaining $60 million had not been released, pending the NSC-led interagency working group determination about the acceptance and use of the remaining funds. 
DOS and FEMA officials told us that for the remaining $60 million in donated funds, the NSC-led interagency working group had considered a series of proposals received from a number of both public and private entities. At the time of our review, we were told that the NSC-led interagency working group decided that the vital needs of schools in the Gulf Coast area would be an appropriate place to apply the donations, and that they were working with the Department of Education to finalize arrangements to provide funding to meet those needs. FEMA officials told us that under the Stafford Act, they could use donated funds for projects such as rebuilding schools, but projects for new school buildings are not consistent with Stafford Act purposes unless they replace damaged ones. Also, according to DHS officials, the Act would have required that receiving entities match FEMA funds for these purposes. However, because of the devastation, the entities would have difficulty matching FEMA funds, which in essence prevented FEMA from undertaking these types of projects. According to DHS, FEMA considered whether it would be useful for donated funds to contribute to the non-federal share for applicants having trouble meeting the non-federal share, but FEMA would need legislative authority to use donated funds to match federal funds. We contacted NSC to further discuss these matters; however, NSC did not respond to our requests for a meeting. On March 16, 2006, DOS and the Department of Education signed a Memorandum of Agreement regarding the use of $60 million of the international cash donations. Advance planning is very important given the outstanding pledges of $400 million or more that DOS officials indicated may still be received. While acknowledging that the U.S. 
government has never previously had occasion to accept such large amounts of international donations for disaster relief, going forward, advance planning is a useful tool to identify potential programs and projects prior to the occurrence of an event of such magnitude. In the case of Hurricane Katrina, while the NSC-led interagency working group reviewed various proposals on the use of the remaining $60 million, DOS held the funds in an account at the U.S. Treasury that did not earn interest. Treasury lacks the statutory authority to credit those DOS-held funds with interest. For the time the funds were not used, their purchasing power diminished due to inflation. If these funds had been placed in an account that could have been credited with interest to offset the erosion of purchasing power, the amount of funds available for relief and recovery efforts would have increased while decision makers determined how to use them. The U.S. government would be responsible for paying the interest if these funds were held in an account at the Treasury that can pay interest. Although the Stafford Act does not apply to the donated funds maintained in the DOS account at Treasury, the Stafford Act does provide that excess funds accepted under the Act may be placed in Treasury securities, and the related interest paid on such investments would be credited to the account. Had the foreign monetary donations been placed in Treasury securities, we estimate that by February 23, 2006, the remaining funds for relief efforts would have increased by nearly $1 million. The Administration’s report, The Federal Response To Hurricane Katrina: Lessons Learned, released on February 23, 2006, recognized that there was no pre-established plan for handling international donations and that implementation of the procedures developed was a slow and often frustrating process. 
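GAO's "nearly $1 million" estimate is consistent with a simple-interest approximation. The interest rate and holding period in the sketch below are assumptions made for illustration (short-term Treasury yields were roughly 4 percent in late 2005, and the funds sat idle for several months), not figures from the report.

```python
# Rough simple-interest check of the foregone-interest estimate.
principal = 60e6        # remaining donated funds (from the report)
annual_rate = 0.04      # assumed short-term Treasury yield, late 2005 (assumption)
days_held = 150         # assumed holding period through February 23, 2006 (assumption)
foregone_interest = principal * annual_rate * days_held / 365
print(f"Foregone interest: ${foregone_interest:,.0f}")  # $986,301, i.e., nearly $1 million
```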
Among other recommendations, the report states that DOS should establish, before June 1, 2006, an interagency process to determine appropriate uses of international cash donations and ensure timely use of these funds in a transparent and accountable manner. DOS officials recognized that the ad hoc process needed to be formalized and planned to develop such procedures by June 1, 2006. When developing policies and procedures, it is important that consideration also be given to strategies that can help maintain the purchasing power of the international donations. If the goal is to maintain both purchasing power and flexibility, then among the options to consider, pending decisions on how the funds would be used, are seeking statutory authority for DOS to record funds in a Treasury account that can pay interest, similar to donations accepted under the Stafford Act, or allowing DOS to deposit the funds in an existing Treasury account of another agency that can pay interest. In the absence of guidance, we found a lack of accountability in the management of the in-kind assistance. Specifically, FEMA did not have a process in place that confirmed that the in-kind assistance sent to distribution sites was received. The lack of guidance, inadequate information about the nature and content of foreign offers of in-kind assistance, and insufficient advance coordination also resulted in the arrival of food and medical assistance that could not be used in the United States. Also, the ad hoc procedures created to manage foreign military donations allowed for confusion about which agency—FEMA or DOD—should accept and be responsible for oversight of such donations. Because of the lack of guidance to track assistance, USAID/OFDA created a database to track the assistance as it arrived. 
We found that USAID/OFDA reasonably accounted for the assistance given the lack of information on the manifests and the amount of assistance that was arriving within a short time. On September 14, 2005, FEMA did request that USAID/OFDA track the assistance from receipt to final disposition. However, the system USAID/OFDA created did not confirm that the assistance was received at the FEMA distribution sites. USAID/OFDA did not set up these procedures on its own, in part because its mission is to deliver assistance in foreign countries and it had never distributed assistance within the United States. FEMA officials told us that they assumed USAID/OFDA had these controls in place. FEMA and USAID/OFDA officials could not provide us with evidence that confirmed that the assistance sent to distribution sites was received. Without these controls in place to ensure accountability for the assistance, FEMA does not know if all or part of these donations were received at FEMA distribution sites. Internal controls, such as a system to confirm that shipments are received at intended destinations, provide an agency with oversight; in this case, they would help ensure that international donations are received at FEMA distribution sites. We noted that the guidance the agencies created did not include policies and procedures to help ensure that food and medical supplies that the U.S. government agreed to receive and came into the United States met U.S. standards. The lack of guidance, inadequate information up-front about the nature and content of foreign offers of in-kind assistance, and insufficient advance coordination with regulatory agencies before agreeing to receive them, resulted in food and medical items, such as MREs and medical supplies, that came into the United States even though they did not meet USDA or FDA standards and thus could not be distributed. 
We noted that FEMA’s list of items that could be used for disaster relief, which was provided to DOS, was very general and did not note any exceptions, for example, about the contents of MREs. In commenting on our report, DHS stated that FEMA repeatedly requested additional information from DOS about the foreign items being offered but that DOS did not respond. Both instances represent lost opportunities to have prevented the arrival of items that could not be distributed in the United States. The food items included MREs from five countries. Because of the magnitude of the disaster, some normal operating procedures governing the import of goods were waived. According to USDA and FDA officials, under normal procedures, entry documents containing specific information, which are filed with U.S. Customs and Border Protection, are transmitted to USDA and FDA for those agencies’ use in determining if the commodities are appropriately admissible into the United States. Without consultation or prior notification to USDA or FDA, the Commissioner of U.S. Customs and Border Protection authorized suspension of some normal operating procedures for the import of regulated items like food and medical supplies. Consequently, USDA and FDA had no involvement in the decision making or process of agreeing to receive regulated product donations, including MREs and medical supplies, and no opportunity to ensure that they would all be acceptable for distribution before the donated goods arrived. Both USDA and FDA, based on regulations intended to protect public health, prevented distribution of some international donations, which resulted in the assistance being stored at a cost of about $80,000. In the absence of policies and procedures, DOS, FEMA, and DOD created ad hoc policies and procedures to manage the receipt and distribution of foreign military goods and services. 
However, this guidance left open which agency (FEMA or DOD) was to formally accept the foreign military assistance, and each agency apparently assumed the other had done so under its respective gift authority. As a result, it is unclear whether FEMA or DOD accepted or maintained oversight of the foreign military donations that were vetted through the DOS task force. The offers of foreign military assistance included, for example, the use of amphibious ships and diver salvage teams. FEMA did not maintain oversight of the foreign military donations that it accepted through the DOS task force. FEMA officials told us that they were unable to determine how the foreign military donations were used because FEMA could not match the use of the donations with the mission assignments it gave Northern Command. Moreover, FEMA and Northern Command officials told us of instances in which foreign military donations arrived in the United States without being vetted through the DOS task force. For example, we were told of military MREs that were shipped to a military base and distributed directly to hurricane victims. For the shipments that were not vetted through the task force, DOS, FEMA, and DOD officials could not provide us information on the type, amount, or use of the items. As a result, the agencies cannot determine whether these items of assistance were safeguarded and used as intended. In closing, because the U.S. government had never before received such substantial amounts of international disaster assistance, we recognize that DOS, FEMA, USAID/OFDA, and DOD created ad hoc procedures to manage the receipt, acceptance, and distribution of the assistance as best they could. Going forward, it will be important to have in place clear policies, procedures, and plans for managing and using both cash and in-kind donations in a manner that provides accountability and transparency. 
If properly implemented, the six recommendations included in our report issued today will help to ensure that the cognizant agencies fulfill their responsibilities to effectively manage and maintain appropriate and adequate internal control over foreign donations. Mr. Chairman, this concludes GAO’s prepared statement. We would be happy to respond to any questions that you or Members of the Committee may have. For further information on this testimony, please contact either Davi M. D’Agostino at (202) 512-5431 or dagostinod@gao.gov or McCoy Williams at (202) 512-9095 or williamsm1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this testimony included Kay Daly, Lorelei St. James, Jay Spaan, Pamela Valentine, and Leonard Zapata. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In response to Hurricane Katrina, countries and organizations donated cash and in-kind assistance, including foreign military assistance, to the U.S. government. The National Response Plan establishes the Department of State (DOS) as the coordinator of all offers of international assistance. The Federal Emergency Management Agency (FEMA) within the Department of Homeland Security (DHS) is responsible for accepting the assistance and coordinating its distribution. GAO's testimony covers (1) the amount and use of internationally donated cash and (2) the extent to which federal agencies with responsibilities for international in-kind assistance offered to the United States had policies and procedures to ensure appropriate accountability for the acceptance and distribution of that assistance. Because the U.S. government had not received such substantial amounts of international disaster assistance before, ad hoc procedures were developed to accept, receive, and distribute the cash and in-kind assistance. Understandably, not all procedures that would provide a higher level of accountability were in place at the outset. The Administration recognized the need for improvement in its recent report on lessons learned from Hurricane Katrina. GAO was able to track the cash donations received to designated U.S. Treasury accounts or to their disbursement. In the absence of policies, procedures, and plans, DOS developed an ad hoc process to manage $126 million in foreign cash donations to the U.S. government for Hurricane Katrina relief efforts. As cash donations arrived, a National Security Council (NSC)-led interagency working group was convened to make policy decisions about the use of the funds. FEMA officials told GAO they had identified and presented to the working group a number of items on which the donated funds could be spent. 
The NSC-led interagency working group determined that use of those donated funds, once accepted by FEMA under the Stafford Act, would be more limited than the wider range of possible uses available if the funds were held and then accepted under the gift authorities of other agencies. In October 2005, $66 million of the donated funds was spent on a FEMA case management grant, and as of March 16, 2006, $60 million remained undistributed in a DOS-designated Treasury account that did not pay interest. Treasury may pay interest on funds accepted by FEMA under the Stafford Act. According to DOS, an additional $400 million in international cash donations is likely to arrive. It is important that cash management policies and spending plan options be considered and put in place to deal with the forthcoming donations so that the purchasing power of the donated cash is maintained for relief and reconstruction. FEMA and other agencies did not have policies and procedures in place to ensure the proper acceptance and distribution of in-kind assistance donated by foreign countries and militaries. In-kind donations included food and clothing. FEMA and other agencies established ad hoc procedures. However, in the distribution of the assistance to FEMA sites, GAO found that no agency tracked and confirmed that the assistance arrived at its destination. Also, the lack of procedures, inadequate information up front about the donations, and insufficient coordination resulted in the U.S. government agreeing to receive food and medical items that were unsuitable for use in the United States and in storage costs of about $80,000. The procedures also allowed confusion about which agency was to accept and provide oversight of foreign military donations. DOD's lack of internal guidance regarding the DOS coordinating process resulted in some foreign military donations arriving without DOS, FEMA, or DOD oversight.
OSHA is responsible for protecting the safety and health of the nation’s workers under the Occupational Safety and Health Act of 1970 (OSH Act). OSHA sets and directly enforces occupational safety and health standards for the private sector in about half the states. Occupational safety and health standards are a type of regulation and are defined as standards that require “conditions, or the adoption or use of one or more practices, means, methods, operations, or processes, reasonably necessary or appropriate to provide safe or healthful employment and places of employment.” OSHA carries out its enforcement activities through its 10 regional offices and 90 area offices. The remaining states set and enforce their own workplace safety and health standards for employers under a state plan approved by OSHA. In these states, the state agency typically responsible for enforcing workplace safety and health standards is the state department of labor. OSHA conducts two types of inspections to enforce the OSH Act and its standards: unprogrammed and programmed inspections. Unprogrammed inspections are unplanned and conducted in response to certain events, such as employee complaints (including claims of imminent danger) and serious accidents involving fatalities, amputations, and in-patient hospitalizations. Programmed inspections are planned and target industries or individual workplaces based on predetermined criteria, such as relatively high rates of workplace injuries and illnesses. Among states with OSHA-approved state plans, enforcement practices may vary, but states generally are expected to use a similar approach to performing planned and unplanned inspections. The states with OSHA-approved state plans cover different types of employers in their states. 
Twenty-one of the states with OSHA-approved state plans are responsible for enforcing workplace safety and health laws and standards at private-sector and state and local government workplaces. Five of the states with OSHA-approved state plans cover state and local government workplaces only, with OSHA providing enforcement for the private sector (see fig. 1). Four of the nine states we reviewed (California, Maryland, Oregon, and Washington) are responsible for enforcement for the private sector and the state and local public sector under an OSHA-approved state plan. In the remaining five states (Connecticut, Illinois, Maine, New Jersey, and New York), OSHA provides enforcement for the private sector, while the state is responsible for the state and local public sector. In addition to workplace safety and health regulation by OSHA and state departments of labor, other federal and state government agencies regulate health care employers in various ways and may have requirements related to workplace violence prevention. For example, states may impose certain licensing requirements on hospitals or other health care facilities. In addition, the Joint Commission on Accreditation of Healthcare Organizations (Joint Commission), a nonprofit corporation that accredits and certifies health care organizations and programs, has its own requirements for accreditation purposes. OSHA does not require employers to have workplace violence prevention programs; however, the agency issued voluntary guidelines in 1996 to help employers establish them. Although there is no federal occupational safety and health standard for workplace violence prevention, OSHA may issue citations to employers for violating a certain provision of the OSH Act, referred to as the general duty clause, which requires employers to provide a workplace free from recognized hazards likely to cause death or serious physical harm. 
To cite an employer under the general duty clause, OSHA must have evidence that (1) a condition or activity in the workplace presents a hazard to an employee, (2) the condition or activity is recognized as a hazard by the employer or within the industry, (3) the hazard is causing or is likely to cause death or serious physical harm, and (4) a feasible means exists to eliminate or materially reduce the hazard. When OSHA does not have enough evidence to support a citation, it can issue hazard alert letters that warn employers about the dangers of specific industry hazards and provide information on how to protect workers. OSHA has recordkeeping regulations that require employers to record certain workplace injuries and illnesses. For each work-related injury and illness that results in death, days away from work, restricted work or transfer to another job, loss of consciousness, or medical treatment beyond first aid, the employer is required to record the worker’s name; the date; a brief description of the injury or illness; and, when relevant, the number of days the worker was away from work, assigned to restricted duties, or transferred to another job as a result of the injury or illness. Employers with 10 or fewer employees at all times during the previous calendar year and employers in certain low-hazard industries are partially exempt from routinely keeping OSHA injury and illness records. Three federal agencies collect national data on nonfatal workplace violence in health care facilities: BLS, within DOL; NIOSH, within the Department of Health and Human Services (HHS); and BJS, within the Department of Justice (DOJ). The three agencies collect data on different types of workplace violence cases from different sources (see table 1). Workers in health care facilities experience substantially higher estimated rates of nonfatal injury due to workplace violence compared to workers overall, according to data from three federal data sets we reviewed (see fig. 2). 
BLS’s Survey of Occupational Injuries and Illnesses (SOII) data for 2013 show that the estimated rates of nonfatal workplace violence against health care workers in private-sector and state in-patient facilities (including hospitals and nursing and residential care facilities) are 5 to 12 times higher than the estimated rates for workers overall, depending on the type of health care facility. More specifically, in 2013 the estimated rate of injuries from such violence that resulted in days away from work was 2.8 per 10,000 workers for all private-sector workers. In contrast, the estimated rate for private-sector hospital workers was 14.7 per 10,000 workers, and for nursing and residential care workers the rate was 35.3 per 10,000 workers. The estimated rates of nonfatal injury due to workplace violence were highest in state hospitals and nursing and residential care facilities, according to BLS’s SOII data. Workers in these state facilities may have higher rates of workplace violence because they work with patient populations that are more likely to become violent, such as patients with severe mental illness who are involuntarily committed to state psychiatric hospitals, according to BLS research. Data from HHS’s National Electronic Injury Surveillance System-Work Supplement (NEISS-Work) show that in 2011 the estimated rate of nonfatal workplace violence injuries for workers in health care facilities was statistically greater than the estimated rate for all workers. Data from the National Crime Victimization Survey (NCVS) show that from 2009 through 2013 health care workers experienced workplace violence at more than twice the estimated rate for all workers (after accounting for sampling error). Research also suggests that nonfatal workplace violence is prevalent in in-patient health care facilities. 
Although their results are not generalizable, three studies that surveyed hospital workers found that 19 to 30 percent of workers in a general hospital setting who completed the surveys reported being physically assaulted at work within the year prior to each study (see app. II for more information on these studies). In addition, a study that surveyed staff in a psychiatric hospital found that 70 percent of staff reported being physically assaulted within the last year. Moreover, BLS data indicate that reported nonfatal workplace violence against health care workers has increased in recent years. Such cases reported by employers in BLS’s SOII increased by about 12 percent over 2 years, from an estimated 22,250 reported cases in 2011 to an estimated 24,880 in 2013. We also examined the estimated rates of workplace violence reported by employers in BLS’s SOII by type of facility and found relatively little change from 2011 through 2013, with the exception of an increase of 70 incidents per 10,000 workers in the rate for state nursing and residential care facilities. The estimated number of health care workers reporting at least one workplace violence-related assault in BJS’s NCVS survey from 2009 through 2013 varied from year to year with no clear statistical trend (see fig. 3). Nonfatal and fatal workplace violence against health care workers involves different types of perpetrators and violence. For nonfatal violence, patients are the primary perpetrators, according to federal data and studies we reviewed. More specifically, patients were the perpetrators in an estimated 63 percent of the NEISS-Work cases in which workers in health care facilities came to the emergency department for treatment after experiencing workplace violence-related injuries in 2011. 
Several of the studies we reviewed also found that patients were the primary perpetrators of nonfatal violence against health care workers, followed by patients’ relatives and visitors (see app. II for more information on these studies). According to NEISS-Work data from 2011, hitting, kicking, and beating were the most common types of nonfatal physical violence reported by workers in health care facilities. As for fatal violence, the BLS Census of Fatal Occupational Injuries reported that 38 workers in health care facilities died as a result of workplace violence assaults from 2011 through 2013, representing about 3 percent of all worker deaths due to workplace violence across all industries during those years. Many of the deaths in a health care setting involved a shooting, often perpetrated by someone the worker knew, such as a domestic partner or co-worker. Health care workers we interviewed described a range of violent encounters with patients that resulted in injuries ranging from broken limbs to concussions (see table 2). Research suggests that patient-related factors can increase the risk of workplace violence. A study that surveyed over 5,000 workers in six hospitals in two states found that patient mental health or behavioral issues were contributing factors in about 64 percent of the patient-perpetrated violent events reported by health care workers who completed the survey, followed by medication withdrawal, pain, illicit drug or alcohol use, and being unhappy with care. In three of our discussion groups, health care workers said that working with patients who have severe mental illness or who are under the influence of drugs or alcohol contributed to workplace violence in health care facilities. Certain types of health care workers are more often the victims of workplace violence. 
According to BLS data from 2013, health care occupations such as psychiatric aides, psychiatric technicians, and nursing assistants experienced high rates of workplace violence compared to other health care occupations and workers overall (see fig. 4). Furthermore, one study that surveyed over 5,000 workers in six hospitals in two states found that workers in jobs typically involving direct patient care had a higher percentage of physical assaults compared with other types of workers. For example, a higher percentage of nurse’s aides reported being physically assaulted within the last year (14 percent) than nurse managers (4.7 percent). Another study that surveyed over 300 staff in a psychiatric hospital found that ward staff, who had the highest levels of patient contact, were more likely than clinical care and supervisory workers to report being physically assaulted by patients. While the three national data sets we analyzed shed some light on the level of workplace violence committed against health care workers, the full extent of the problem is unknown for three main reasons: (1) differences in the criteria used to record workplace violence cases in the data sets, (2) health care workers not reporting all cases of workplace violence, and (3) employer inaccuracies in reporting cases of workplace violence. Not all workplace violence cases are included in the three national data sets we reviewed because of the criteria used by each data set. With regard to the first two data sets (SOII and NEISS-Work), workplace violence that does not result in injuries severe enough to require days off from work or an emergency room visit is not included. For the NCVS data, cases that are not considered to be crimes are not included. Table 3 describes the number and types of workplace violence cases recorded in each of these data sets in 2011, the most recent year for which data were available from all three sources. 
Health care workers do not formally report all incidents of workplace violence, for various reasons. Although the results are not generalizable, estimates of the percentage of cases that are formally reported ranged from 7 to 42 percent in the studies we reviewed (see app. II for more information on these studies). The health care workers surveyed in four of the five studies we reviewed most often reported the violence informally to their supervisors or co-workers. A study that surveyed 762 nurses from one hospital system found that the reasons health care workers gave for not formally reporting the violence included (1) not sustaining serious injuries, (2) inconvenience, and (3) the perception that violence comes with the job. Health care workers in all five of our discussion groups said that they do not report cases of workplace violence unless the cases result in a severe injury. Health care workers in four discussion groups also said that they do not report all cases of workplace violence because the reporting process is too burdensome and because management discouraged reporting. Health care workers in two of our discussion groups cited fear of being blamed for causing the attack, fear of losing their jobs, and financial hardships associated with being unable to work due to injury as reasons for not formally reporting all cases of workplace violence. OSHA and BLS research indicate that employers do not always record, or do not accurately record, workplace injuries in general. Specifically, in a 2012 report OSHA found that for calendar years 2007 and 2008, approximately 20 percent of injury cases reconstructed by inspectors during a review of employee records were either not recorded or incorrectly recorded by the employer. OSHA is working to improve reporting by conducting additional outreach and training for employers on their reporting obligations. 
BLS research has also found that employers do not report all workplace injury cases in the SOII, and BLS is working to improve reporting by conducting additional research on the extent to which cases are undercounted in the SOII and exploring whether computer-assisted coding can improve reporting. There is limited information available on the costs associated with injuries due to workplace violence in health care. While DOL and HHS collect information on occupational injuries and illnesses due to violence in health care, they do not collect data on the costs. The BJS NCVS survey asks individuals about the medical expenses they incurred as a result of workplace violence; however, our analysis of the data did not identify enough cases to produce a national estimate of the costs. One of the states we reviewed, Washington, provided us with a report about the costs the state incurred due to workplace violence over a 5-year period. The state estimated that it incurred between $4 million and $8 million each year from 2010 through 2014 in workers’ compensation costs for health care workers who were injured by workplace violence and received medical treatment for their injuries. Another state we reviewed, California, analyzed workers’ compensation injury data for one of its hospitals from 2003 to 2013. According to state officials, 1,169 of the 4,449 injuries were due to patient assaults and amounted to $16.6 million in workers’ compensation costs over this period. In another study, researchers surveyed nurses from a hospital system in the mid-Atlantic region regarding medical expenses related to work-related assaults against them. They found that of the 106 nurses who reported injuries, the collective costs of treatment and lost wages for the 30 nurses requiring treatment were $94,156. OSHA increased its inspections of health care employers for workplace violence from 11 in 2010 to 86 in 2014 (see fig. 5). 
OSHA officials attribute this increase to a rise in employee complaints and programmed inspections following implementation of a 3-year National Emphasis Program (NEP) targeting nursing and residential care facilities, which began in April 2012. Workplace violence was one of the hazards included as part of the NEP, which required each OSHA region to inspect a minimum number of facilities, from a list developed by OSHA’s national office, that met or exceeded certain injury and illness rates. OSHA conducted a total of 344 inspections involving workplace violence in the health care sector from 1991 through April 2015. More than two-thirds of the 344 inspections since 1991 were unprogrammed, and over 70 percent of the unprogrammed inspections were conducted in response to complaints (see fig. 6). Sixty percent (205 inspections) of the 344 inspections were conducted by 3 of OSHA’s 10 regions. OSHA officials said that the higher number of inspections in certain regions could have been due to those regions receiving a higher number of workplace violence complaints than other regions. OSHA officials also said that the higher number of inspections in certain regions could have been due to the regions having more experienced workplace violence coordinators and inspectors, which increased their comfort in pursuing workplace violence cases. In April 2015, OSHA announced the expiration of the nursing and residential care facilities NEP. However, OSHA determined that the results of the NEP indicated a need for continued focus on efforts to reduce the identified hazards in those sectors, including workplace violence. Consequently, in June 2015, OSHA issued new inspection guidance stating that all programmed and unprogrammed inspections of in-patient health care facilities (including hospitals and nursing and residential care facilities) are to cover the hazards included in the recently concluded NEP. 
This new inspection guidance applies to a broader group of health care facilities by including hospitals, in addition to the nursing and residential care facilities that were covered by the NEP. Unlike the NEP, the guidance does not require OSHA area offices to inspect a minimum number of facilities each year. To determine whether workplace violence is a potential hazard in a facility, OSHA inspectors are directed in an OSHA enforcement directive to take certain steps during inspections, including reviewing an employer’s workplace injury and illness logs, interviewing employees, and personally observing potential workplace violence hazards. If there are potential hazards, inspectors are expected to physically inspect the workplace and identify any conditions that increase exposure to potential violence, such as a lack of appropriate lighting or the absence of security systems. In addition, inspectors are instructed to interview all employees who have observed or experienced any violent acts and to review other records, such as police and security reports and workers’ compensation records. Inspectors are also instructed to determine what violence prevention measures an employer has in place and whether it has provided any related training to its employees. If inspectors determine that a general duty clause or other citation is warranted, they consult with their regional office management, OSHA’s national office, and the Department of Labor’s solicitor’s office to develop the citation, according to OSHA officials. OSHA has established various policies and procedures to support its inspectors in conducting workplace violence inspections, including the following: Uniform inspection procedures. OSHA issued an enforcement directive in 2011 to provide its inspectors with uniform procedures for addressing workplace violence. 
This directive defines workplace violence, describes the steps for conducting inspections, and outlines the criteria for a general duty clause citation, along with descriptions of the types of evidence needed to support each criterion. The directive also requires OSHA regional and area offices to ensure that OSHA inspectors are trained in workplace violence prevention to assist them in understanding specific workplace violence incidents, identifying hazard exposure, and assisting the employer in abating the hazard. Regional workplace violence coordinators. Every regional office has a designated workplace violence coordinator who functions as an in-house expert on workplace violence and provides advice and consultation to inspection teams, according to OSHA officials. In addition, according to OSHA officials, the coordinators hold bimonthly teleconferences with OSHA national office managers to exchange information and discuss strategies for developing workplace violence cases. Inspector training. According to OSHA officials, all inspectors are required to complete web-based training as part of their initial training that includes four lessons related to workplace violence: (1) defining workplace violence, (2) identifying solutions to the violence, (3) conducting workplace violence inspections, and (4) protecting oneself during an inspection. Three other optional webinars are offered. The first is a 1.5-hour webinar on the 2011 workplace violence enforcement directive that includes discussion of its purpose, procedures for conducting inspections and issuing citations for workplace violence, and resources available for workplace violence inspections. The second is a 1.5-hour webinar that focuses on identifying risks for violence and prevention strategies in health care and social services settings. The third is a 2-hour webinar that includes information on how to conduct inspections as part of the NEP targeting nursing and residential care facilities. 
Of the 1,026 OSHA staff invited to take the optional webinars, OSHA reports that, as of June 2015, 652 had completed the webinar on the 2011 directive, 1,023 had completed the one on identifying risks and prevention strategies, and 713 had completed the webinar on the NEP. OSHA has developed and disseminated voluntary guidelines and a variety of other informational materials to help educate health care and other employers on preventing workplace violence. As previously discussed, in 2015 OSHA issued an update of its written guidelines for health care and social service employers on preventing and responding to workplace violence. The guidelines identify the components that should be incorporated in a workplace violence prevention program and include checklists for employers to use in evaluating those programs. OSHA has a workplace violence web page with links to the 2015 guidelines, other publications, and resources and materials for employee training related to workplace violence, along with links for obtaining consultation services from OSHA and for filing complaints. In addition, OSHA launched a new webpage in December 2015 with resources that employers and workers can use to address workplace violence in health care facilities. For example, the webpage links to a new OSHA publication that presents examples of health care facilities’ practices related to the five components recommended in OSHA’s voluntary guidelines. OSHA also formed an alliance with the Joint Commission to provide employers with information, guidance, and access to training resources to protect their employees’ health and safety, including resources that address workplace violence. As part of this alliance, OSHA has disseminated information on preventing workplace violence in health care through publication of three articles in a Joint Commission newsletter, with a fourth article planned. 
OSHA officials told us they obtained feedback from stakeholders on the workplace violence prevention guidelines and incorporated stakeholder comments into the final publication of the 2015 guidelines. These stakeholders confirmed the usefulness of OSHA’s revised guidelines, according to OSHA officials. OSHA officials also told us the agency has not conducted and does not plan to conduct any type of formal evaluation of the usefulness of these materials due to insufficient resources. OSHA also funds training on workplace violence prevention for employers and workers. OSHA provided training grants in 2012 and 2013 totaling $254,000 to three organizations that developed workplace violence prevention curricula and trained 1,900 health care workers. Additional training grants totaling over $514,000 were awarded to five organizations in 2014 to be used for programs that include training health care workers and employers in preventing and addressing workplace violence. While the number of inspections involving workplace violence in health care facilities has increased, a relatively small percentage of these inspections resulted in general duty clause citations related to workplace violence. From 1991 through October 2014, OSHA issued 18 general duty clause citations to health care employers for failing to address workplace violence. Seventeen of these citations were issued from 2010 through 2014 (see fig. 7). These citations were issued in about 5 percent of the 344 workplace violence inspections of health care employers that were conducted from 1991 to April 2015. All 18 citations arose from unprogrammed inspections. Fourteen of the citations arose from complaints—the most common type of unprogrammed inspection among these cases. 
For example, in one case, OSHA cited an employer for exposing employees working in a residential habilitation home to the hazard of violent behavior and physical assault by patients with known histories of violence or the potential for violence. OSHA determined that the company failed to identify and abate existing and developing hazards associated with workplace violence. In all 18 of these cases, health care workers had been injured or killed by patients, clients, or residents. We found that the three regions that conducted the highest number of workplace violence inspections also issued the majority of workplace violence-related general duty citations to health care employers. Collectively, the three regions issued 12 of the 18 general duty citations issued since 1991. Staff from all 10 OSHA regional offices said it was challenging to cite employers for violating the general duty clause when workplace violence is identified as a hazard, and staff from 4 OSHA regional offices said it was challenging to develop these cases within the 6-month statutory time frame for issuing a citation. As described in OSHA's enforcement directive, to cite an employer for violating the general duty clause for a workplace violence hazard, OSHA inspectors must demonstrate that (1) a serious workplace violence hazard exists and the employer failed to keep its workplace free of hazards to which employees were exposed, (2) the hazard is recognized by the employer or within the industry, (3) the hazard caused or is likely to cause death or serious physical harm, and (4) there are feasible abatement methods to address the hazard. Some inspectors and other regional officials from 5 OSHA regional offices said it is difficult to collect sufficient evidence to meet all four criteria during an inspection. 
For example, two regional officials noted that while injuries may have occurred as a result of workplace violence at facilities they have inspected, the assaults may involve a single employee or a very small number of employees, or the assaults may not be frequent or serious enough to demonstrate a hazard that can cause serious physical harm or death. Another inspector noted that an employer may have a minimal workplace violence prevention program and that it is sometimes difficult to prove that the employer has not done enough to address the hazard. Staff, including officials and inspectors, from 5 of OSHA's 10 regional offices said it would be helpful to have additional assistance to implement the 2011 workplace violence enforcement directive. They said it would be helpful to have additional information on how to collect evidence and write up a workplace violence citation, examples of workplace violence issues that have been cited, examples of previously documented workplace violence case files, and examples of citations that have been upheld in court. According to federal internal control standards, agency management should share quality information throughout the agency to enable personnel to perform key roles in achieving agency objectives. While OSHA's webinar on the 2011 workplace violence enforcement directive provides general guidance on the types of evidence needed to develop a general duty clause citation, it does not provide the types of detailed information proposed by staff. For example, officials from one region said that although the training they received was helpful, assessing workplace violence hazards is new to many inspectors, and additional information would help inspectors fully understand how to inspect, collect evidence, and write up a workplace violence citation. 
Inspectors from another region suggested the national office provide an updated webinar with lessons learned and examples of what has been cited so inspectors can be consistent in how they develop these cases. Officials from OSHA’s national office told us they have considered developing additional training for inspectors on conducting workplace violence inspections and are planning to revise the 2011 enforcement directive. For example, they said that they would like to provide inspectors more specific guidance on developing a workplace violence case in different environments and additional information about the hazards and abatement measures applicable to different health care facilities. OSHA officials said the training would be developed and the directive would be revised by the end of 2016. Without this additional information, inspectors may continue to face challenges in conducting workplace violence inspections and developing citations. When inspectors identify workplace violence hazards during an inspection, but all the criteria for issuing a general duty clause citation are not met and a specific standard does not apply, inspectors have the option of issuing warning letters to employers, known as Hazard Alert Letters (HAL). These letters recommend that the employer voluntarily take steps to eliminate or reduce workers’ exposure to the hazard. The letters describe the specific hazardous conditions identified in an inspection, list corrective actions that can be taken to address them, and provide contact information to seek advice and consultation on addressing the hazards. From 2012 through May 2015, OSHA issued 48 HALs to health care employers recommending actions to address factors contributing to workplace violence. 
Several of the HALs we reviewed stated that workers had been assaulted, notified the employers that they failed to implement adequate measures to protect their workers from assaults, and recommended the employers take specific steps to better protect their workers. Agency officials informed us that OSHA inspectors are not required to routinely conduct follow-up inspections after issuing HALs, and the uniform inspection procedures from the 2011 enforcement directive do not specify a process for contacting employers to determine whether hazards and deficiencies have been addressed. They explained, however, that a follow-up inspection would not normally be conducted if the employer or employer representative provides evidence that the hazard has been addressed. According to OSHA officials, if OSHA decides to conduct a follow-up inspection, OSHA's recommended time period for a follow-up with employers is 12 months following employer receipt of the HAL, although this is not required in the inspection procedures from the 2011 enforcement directive. OSHA established a policy in 2007 to follow up on HALs related to ergonomics issues, but this policy does not apply to HALs related to workplace violence issues. OSHA established the ergonomics HAL policy after its ergonomics standard was invalidated under the Congressional Review Act in 2001. The ergonomics HAL follow-up policy outlines a process for contacting employers to determine whether ergonomic hazards and deficiencies identified in the letters have been addressed. OSHA inspectors are directed to schedule a follow-up inspection to determine if the hazards are being addressed if the employer does not respond or responds inadequately. In addition, OSHA was not able to tell us how many of the 48 health care employers who received HALs for workplace violence issues had follow-up inspections because the follow-up status of HALs is not centrally maintained. 
Each regional office workplace violence coordinator would have to be contacted to find out the status of each HAL. OSHA has a centralized information system that is capable of tracking the status of HALs, but it has not systematically used the system for this purpose, and OSHA officials are not sure whether regional offices are consistently entering updated information. According to federal internal control standards, agency management should perform ongoing monitoring as part of the normal course of operations. Without a uniform process to follow up on these HALs, OSHA will not know whether the hazards that placed employees at risk for workplace violence at these facilities continue to exist. In addition, without routine follow up on these cases, OSHA may not obtain the information needed to determine whether a follow-up inspection or other enforcement actions are needed. OSHA officials acknowledged that it can be challenging to develop a general duty clause citation for workplace violence and cited some potential benefits of having a workplace violence prevention standard. However, officials stated that OSHA is not planning at this time to develop a workplace violence prevention standard because it has identified other workplace hazards that are higher priorities for regulatory action. According to OSHA officials, the potential benefits of having a specific standard include setting clearer expectations for employers, increasing employer implementation of workplace violence prevention programs, and simplifying the process for determining when citations could be issued. Rather than pursuing a standard on workplace violence, the officials stated that OSHA has focused its efforts on increased enforcement using the general duty clause, issuing new guidance, and developing a new webpage for employers and workers with resources for addressing workplace violence in health care facilities. 
OSHA officials also highlighted other efforts the agency has taken to reduce workplace violence in health care facilities. These efforts included obtaining feedback from stakeholders on the employer guidelines, establishing a task force to develop a long-term agency plan for workplace violence prevention and resources for OSHA staff and the public, and issuing publications on workplace violence prevention strategies. In addition, OSHA officials reported conducting a qualitative and quantitative review of data from its NEP for Nursing and Residential Care Facilities. However, OSHA's review of the NEP entailed summarizing data, collected from the regions 6 months after the program began, on inspections that resulted in the issuance of ergonomics hazard alert letters. OSHA officials said they did not complete an overall evaluation of the program even though the NEP procedures provided that the agency do so. The NEP procedures stated that the national office was to collect data relevant to the effectiveness of the program from the regions and complete an evaluation. Additionally, the procedures specified that the evaluation should address the program's role in meeting OSHA's goals, such as the reduction in the number of injuries and abatement measures implemented. An OSHA official we spoke with could not provide a reason why OSHA did not conduct an evaluation of the NEP and was not aware of any plans for the agency to conduct such an evaluation. According to information provided by agency officials, they have not assessed how well OSHA's approach to helping prevent workplace violence is working. According to federal internal control standards, agency management should assess the quality of agency performance over time and correct identified deficiencies. Such assessments involve analyzing data to determine whether the intended outcomes were achieved and identifying any changes that may improve results. 
Because OSHA has not assessed the results of its education and enforcement efforts, it is not in a position to know whether they have helped, for example, to increase employer awareness and implementation of abatement measures. Assessing how well OSHA’s approach is working could inform future efforts to address workplace violence in health care facilities. For example, completing the evaluation of the NEP results could provide OSHA with information to decide whether further action may be needed to address workplace violence hazards. OSHA could also consider cost-effective ways to conduct such assessments, such as reviewing a sample of workplace violence inspections that resulted in hazard alert letters to determine the extent to which employers implemented recommended abatement measures. All of the nine states we reviewed have enacted laws that require health care employers to establish a plan or program to protect workers from workplace violence. According to our review of information provided by state officials, these states have requirements, either in law or regulation, similar to the components of an effective workplace violence prevention program identified in OSHA’s voluntary guidelines (see table 4). Specifically, seven of the nine states require management and worker participation in workplace violence prevention efforts, such as through a committee or other means. Eight of the nine states require health care employers to analyze or assess worksites to identify hazards that may lead to violent incidents. All nine states require health care employers to take steps to prevent or control the hazards, such as changing policies, security features, or the physical layout of the facility. Eight of the nine states also require health care employers to train workers on workplace violence prevention, such as how workers can protect themselves and report incidents. 
All nine states require health care employers to record incidents of violence against workers, and eight of the states require health care employers to periodically evaluate or review their workplace violence prevention plan or program. According to state officials in the nine states we reviewed, the department of labor is responsible for ensuring compliance with these workplace violence prevention requirements, although in some states the department of health also has oversight responsibilities. In addition, under their OSHA-approved state plans, the state departments of labor in our selected states may issue citations to employers under their jurisdiction for violations of an applicable state standard or the state’s equivalent to the general duty clause. Similar to OSHA, state agency oversight activities included investigating complaints and reports of violent incidents, as well as conducting planned inspections. The departments of labor in the states we reviewed conducted varying numbers of inspections of health care employers involving workplace violence issues and in some cases cited employers for violations of their requirements. From 2010 through 2014, state officials from eight of the nine states reported conducting from 2 to 75 inspections of health care employers related to workplace violence. One state did not conduct inspections of health care employers regarding workplace violence. The completed inspections resulted in 0 to 74 reported citations. In addition to their workplace violence prevention laws, officials in some of the states we reviewed described other efforts to further address workplace violence against health care workers. For example, California, New York, and Oregon have a NIOSH-funded program for tracking and investigating work-related fatalities called the Fatality Assessment and Control Evaluation Program. The purpose of this program is to identify risk factors for work-related fatalities and disseminate prevention recommendations. 
Also, the state of Washington has an independent research program called the Safety and Health Assessment and Research for Prevention Program that conducts research projects on occupational health and safety. In addition, California department of labor officials stated that they are developing a workplace violence prevention standard that will be adopted by July 2016, which officials said would make it easier for inspectors to cite employers for workplace violence issues. Relatively few studies have been conducted on the effectiveness of workplace violence prevention programs, limiting what is known about the extent to which such programs or their components reduce workplace violence. After conducting a literature review, we identified five studies that evaluated the effectiveness of workplace violence prevention programs and met our criteria, such as having original data collection and quantitative evidence. Four of the five studies we reviewed suggest that workplace violence prevention programs can contribute to reduced rates of assault. Three Studies of the Veterans Health Administration system. In one study, researchers surveyed workers from 142 Department of Veterans Affairs (VA) hospitals in 2002 and identified facility-level characteristics associated with higher and lower rates of assaults. The researchers found that facility-wide implementation of alternate dispute resolution training was associated with reduced assault rates. In a separate study of the VA system, researchers examined the relationship between the implementation of a comprehensive workplace violence prevention program at 138 VA health care facilities and changes in assault rates from 2004 through 2009. The workplace violence prevention program included training, workplace practices, environmental controls, and security. The researchers found that facilities that fully implemented a number of training practices experienced a modest decline in assault rates. 
The training practices included assessing staff needs for training, having trainers present in the facility and actively training, and providing staff training on prevention and management of disruptive behavior and reporting disruptive behavior, among other things. In a third study, researchers described the processes that VA’s Veterans Health Administration (VHA) uses to evaluate and manage the risk of assaultive patients. The study stated that VHA’s approach included the use of committees made up of various stakeholders to assess threatening patients, and recommendations flagged in veterans’ electronic medical records to notify staff of individuals who may pose a threat to the safety of others. Researchers surveyed Chiefs of Staff at 140 VHA hospitals and found that committee processes and perceptions of effectiveness were associated with a reduction in assault rates. For example, facilities that rated their committees as “very effective” were the only facilities that experienced a significant decrease in assault rates from 2009 to 2010. Emergency departments study. In a fourth study, researchers found mixed results regarding the effect that a workplace violence prevention program had on the rate of assaults. The study was conducted with three emergency departments that implemented the program (intervention sites) and three emergency departments that did not implement the program (comparison sites). Implementation of the program took place in 2010 and included environmental changes, changes in policies and procedures, and staff training. Researchers measured assault rates in the intervention and comparison sites before and after the workplace violence program was implemented by surveying on a monthly basis over an 18-month period 209 health care workers who volunteered to participate in the study. The researchers found that workers at the intervention sites and the comparison sites reported significantly fewer assaults over the study period. 
Therefore, the researchers could not conclude that workers at the intervention sites experienced a significantly greater decrease in violence compared with workers at the comparison sites. However, at the facility level, the researchers found that two of the intervention sites experienced a significant decrease in violence, and no individual comparison site had any significant change in assaults. In-patient mental health facilities study. A fifth study we reviewed found that implementation of a workplace violence prevention program improved staff perceptions of the safety climate in the facility but did not result in an overall change in assault rates. This study evaluated a comprehensive workplace violence prevention program that New York implemented in three state-run, in-patient mental health facilities from 2000 through 2004. The study compared these facilities that implemented the program (intervention sites) with three state-run, in-patient mental health facilities that did not implement the program (comparison sites). Researchers surveyed 319 staff at the intervention sites and found that staff perceptions of management's commitment to violence prevention and employee involvement in the program were significantly improved after the program was implemented. However, an analysis of the change in staff-reported physical assaults did not indicate a statistically significant reduction in assaults at the facility level in either the intervention or comparison sites. Research also suggests that workplace violence prevention legislation may increase employer adoption of workplace violence prevention programs. Two studies compared the workplace violence prevention programs reported by hospitals and psychiatric facilities in California (which enacted a workplace violence prevention law for hospitals in 1993) to facilities in New Jersey, where a similar law did not exist at the time of the study, according to the authors. 
Information was collected through interviews; facility walk-throughs; and a review of written policies, procedures, and training material. In the first study, researchers compared 116 California hospital emergency departments to 50 New Jersey hospital emergency departments and found that a significantly higher percentage of the California hospitals had written policies and procedures on workplace violence prevention compared to hospitals in New Jersey. In the second study, researchers compared 53 psychiatric units and facilities in California to 30 psychiatric units and facilities in New Jersey and found a higher percentage of California facilities that participated in the study had written workplace violence prevention policies compared to facilities in New Jersey. While New Jersey had a smaller percentage of facilities with written workplace violence prevention policies compared to California, the study found that a higher proportion of the New Jersey facilities (17 of 30, or 57 percent) than of the California facilities (25 of 53, or 47 percent) had workplace violence policies that address violence against personnel, patients, and visitors. In a third study, researchers found that rates of assault against employees in selected California hospital emergency departments decreased after enactment of the California law (from 1996 to 2001), whereas the assault rates in selected New Jersey hospital emergency departments increased over this same time period. However, the researchers could not conclude that these differences were attributable to the California law. Compared to workers overall, health care workers face an increased risk of being assaulted at work, often by the patients in their care. Given the high rate of violence committed against health care workers, particularly in in-patient facilities, there is an increasing need to help ensure that health care workers are safe as they perform their work duties. 
OSHA may issue general duty clause citations to employers who fail to protect their workers from hazardous conditions. While OSHA has increased the number of inspections of workplace violence in health care facilities in recent years, relatively few general duty clause citations resulted from these inspections. Inspectors reported facing challenges in developing the evidence needed to issue these citations, and officials and inspectors from 5 of OSHA's 10 regions said it would be helpful to have additional information to assist them in implementing the 2011 enforcement directive. Without this additional information, inspectors may continue to experience difficulties in addressing challenges they reported facing in developing these citations. When they do not have enough evidence to issue a general duty clause citation, OSHA inspectors can issue nonbinding hazard alert letters warning employers of a serious safety concern. However, without a policy requiring inspectors to follow up on hazard alert letters, OSHA will not know whether employers have taken steps to address the safety hazards identified in these letters or whether a follow-up inspection is needed. If the situations identified in the letters are left unchecked, health care workers may continue to be exposed to unsafe working conditions that could place them at an increased risk of workplace violence. OSHA has increased its education and enforcement efforts in recent years to raise awareness of the hazard of workplace violence and to help employers make changes that could reduce the risk of violence at their worksites. However, OSHA has done little to assess the results of its efforts. Without assessing the results of these efforts, OSHA is not in a position to know whether the efforts are effective or if additional action, such as development of a specific workplace violence prevention standard, may be needed. 
To help reduce the risk of violence against health care workers, we recommend that the Secretary of Labor direct the Assistant Secretary for Occupational Safety and Health to take the following actions: Provide additional information to assist inspectors in developing general duty clause citations in cases involving workplace violence. Establish a policy that outlines a process for following up on health care workplace violence-related hazard alert letters. To help determine whether current efforts are effective or if additional action may be needed, such as development of a workplace violence prevention standard for health care employers, the Secretary of Labor should direct the Assistant Secretary for Occupational Safety and Health to: Develop and implement cost-effective ways to assess the results of the agency’s efforts to address workplace violence. We provided a draft of this report to the Departments of Labor (DOL), Health and Human Services (HHS), Justice (DOJ), and Veterans Affairs (VA) for review and comment. We received formal written comments from the DOL and VA, which are reproduced in appendices III and IV. In addition, DOL’s Bureau of Labor Statistics, HHS, and DOJ provided technical comments, which we incorporated as appropriate. In its written comments, DOL’s Occupational Safety and Health Administration (OSHA) said it agreed with all three of our recommendations. With regard to our first recommendation, OSHA stated that the agency is in the process of revising its enforcement directive and developing a training course to further assist inspectors. With regard to our second recommendation, OSHA stated that the agency plans to include standardized procedures for following up on hazard alert letters in its revised enforcement directive. 
With regard to our third recommendation, OSHA stated that it intends to find a cost-effective way to gauge its enforcement efforts to determine whether additional measures, such as developing a workplace violence prevention standard for health care workers, are necessary. In addition, OSHA stated that the agency is reviewing past inspections that resulted in citations or hazard alert letters to evaluate how these cases were developed and what measures may improve the process. In its written comments, VA said it agreed with our findings and three recommendations to OSHA, but suggested the recommendations could be more specific regarding the tools and processes necessary to support OSHA inspectors. For example, VA suggested that OSHA should develop measurable and performance-based criteria for workplace violence prevention programs in the unique health care environment. We believe that our recommendations appropriately address our findings. VA also stated that our report did not fully describe the specific processes that the Veterans Health Administration uses to protect employees and patients from dangerous patient behaviors and provided a reference to a study about these processes. In response, we reviewed the study and incorporated its findings in the section of our report on research on the effectiveness of workplace violence prevention programs. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Labor, Health and Human Services, Justice, and Veterans Affairs, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. Please contact me on (202) 512-7215 or at sherrilla@gao.gov if you or your staff have any questions about this report. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. This report examines: (1) what is known about the degree to which workplace violence occurs in health care facilities and its associated costs, (2) steps OSHA has taken to protect health care workers from workplace violence and assess the usefulness of its efforts, (3) how selected states have addressed workplace violence in health care facilities, and (4) research on the effectiveness of workplace violence prevention programs in health care facilities. For the purposes of this report, we focused on workplace violence against health care workers. We used the National Institute for Occupational Safety and Health’s (NIOSH) definition of workplace violence, which is “violent acts (including physical assaults and threats of assaults) directed toward persons at work or on duty.” We did not focus on other types of violence, such as self-inflicted violence, bullying, or incivility among health care workers. To address these objectives, we: analyzed federal data used by three federal agencies to estimate workplace violence-related injuries and deaths in health care facilities; reviewed related studies identified in a literature review; interviewed federal officials, analyzed enforcement data, and reviewed relevant federal laws, regulations, inspection procedures, and guidelines; reviewed selected state workplace violence prevention laws from nine selected states and visited four of the states where we interviewed state officials, health care employers, and workers; and interviewed researchers and others knowledgeable about workplace violence prevention in health care facilities. We conducted this performance audit from August 2014 to March 2016 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To identify what is known about the degree to which workplace violence occurs in health care facilities and its associated costs, we reviewed federal data sources used by three federal agencies to estimate workplace violence-related injuries and deaths. The four national datasets we analyzed collect data on different types of workplace violence incidents from different sources (see table 5). The years of data we analyzed varied by data source, depending on the availability of data and the number of cases needed to develop national estimates, but the dates generally were from 2009 through 2013. We reported the estimated rates of nonfatal workplace violence against workers in health care facilities compared to workers overall (all industries combined) for each relevant data source. The rates of nonfatal workplace violence were calculated so that the base (denominator) was the same across all three data sources (the rate per 10,000 workers). We also reported information related to the health care occupations with high nonfatal workplace violence-related injury rates, the type of violence, and the perpetrator of the violence. For consistency purposes, we used 2011 as the common year of data from the three datasets with nonfatal injury data (BLS's SOII, NIOSH's NEISS-Work, and BJS's NCVS) to report the number of nonfatal workplace violence cases in health care settings recorded in each source. The number of cases and rates of nonfatal workplace violence-related injury we report includes violence perpetrated against health care workers by other people. 
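The common-denominator rate calculation described above can be illustrated with a short sketch. The injury counts and employment figures below are hypothetical, not the agencies' actual estimates; the point is only that dividing each source's injury count by its own employment base and scaling to 10,000 workers puts all sources on the same denominator:

```python
# Sketch of the rate normalization described above: each source's injury
# count is divided by its own employment base and scaled to a common
# denominator of 10,000 workers. All figures below are hypothetical.

def rate_per_10k(injury_count: float, workers: float) -> float:
    """Nonfatal injury rate per 10,000 workers."""
    return injury_count / workers * 10_000

# Two hypothetical data sources with different employment bases.
sources = {
    "Source A": (25_000, 16_000_000),  # (injuries, workers)
    "Source B": (1_300, 900_000),
}

for name, (injuries, workers) in sources.items():
    print(f"{name}: {rate_per_10k(injuries, workers):.1f} per 10,000 workers")
```

Because each rate is expressed per 10,000 workers, the sources can be compared directly even though their raw counts and employment bases differ.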
For the BLS SOII data, we reported cases where the workplace violence was caused by another person–intentional, unintentional, or unknown–and excluded cases where the violence was self-inflicted or caused by animals or insects. We focused on the health care industry and reported the BLS data for the three health care industry categories BLS uses: ambulatory health care services, hospitals, and nursing and residential care facilities. The estimated rates and number of workplace violence cases we report from the NCVS represent a subset of the workplace violence cases BJS typically reports. BJS defines assaults as both simple and aggravated, including threats. In addition, BJS defines violence to include all types of physical harm, including sexual assault, robbery, and aggravated and simple assault. We reported assaults, including rape and sexual assault, aggravated assault, and simple assault. We focused on actual assaults because these types of cases are more comparable to the cases we reported from the other federal data sources. We did not include cases of verbal threats of assault or robberies. Health care workers included survey respondents who described their job as working in the medical profession or mental health services field. We did not report data on the costs of workplace violence or the perpetrators of the violence from BJS’s NCVS because of data limitations. The survey asks individuals about the medical expenses they incurred as a result of workplace violence, but our analysis of the data identified 22 cases from 2009 through 2013 where dollar amounts were reported, which was too few cases to produce a national estimate. We decided not to report the perpetrator information from the survey data because BJS officials said that due to a limitation of the survey, it underestimates the number of workplace violence cases in which patients assault workers. 
Specifically, the variables that describe the relationship of the victim to the perpetrator in the survey are dependent on whether the victim knows the perpetrator. Survey respondents who answer that the perpetrator is a stranger are not subsequently asked if the perpetrator was a patient. Therefore, it is possible that many perpetrators who are patients are coded as strangers. To assess the reliability of the federal data, we reviewed relevant agency documentation, conducted electronic data testing, compared our results to related information reported by the federal agencies, and interviewed agency officials. Based on these reviews, we determined that the data were sufficiently reliable for the purposes of providing information about the number of cases and rates of workplace violence in the health care industry. All national estimates produced from our analysis of the federal data are subject to sampling errors. We express our confidence in the precision of our results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples the respective agency could have drawn. For estimates derived from BLS’s SOII data, we used the agency-provided relative standard errors to estimate the associated confidence intervals. For estimates derived from the NIOSH NEISS-Work supplement, we used the multi-stage cluster sample variance estimation methodology detailed in the agency technical documentation to estimate the associated confidence intervals. For estimates derived from NCVS data, BJS provided us with generalized variance function parameters for the 5 calendar years’ worth of survey data, both individually and for all 5 calendar years combined. We used these parameters with formulas for deriving the sampling error of estimated totals and estimated ratios available in the NCVS technical documentation to estimate the associated confidence intervals. 
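The relative-standard-error computation referenced above can be sketched with a standard normal approximation; the formula below is illustrative only (the agencies’ technical documentation provides the authoritative variance methods), and the estimate and RSE values in the example are hypothetical:

```latex
% Illustrative only: 95 percent confidence interval from a point
% estimate \hat{x} and its relative standard error (RSE, as a proportion).
\widehat{\mathrm{SE}} = \mathrm{RSE} \times \hat{x},
\qquad
\mathrm{CI}_{95\%} = \hat{x} \pm 1.96\,\widehat{\mathrm{SE}}
```

For instance, a point estimate of 22,250 cases with a hypothetical RSE of 5 percent yields a standard error of about 1,113 and a 95 percent confidence interval of roughly 20,070 to 24,430.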
The tables below provide the estimates and 95 percent confidence intervals for the data we present in the body of this report. We conducted a literature review to identify research related to the prevalence of workplace violence and associated costs (objective 1) and the effectiveness of workplace violence prevention programs (objective 4). We searched relevant platforms, such as ProQuest Research Library and Social Services Abstracts, to identify studies published in government reports and peer-reviewed journals from January 2004 to June 2015. We also consulted with federal officials and researchers we interviewed to identify related research. See appendix VIII for a bibliography of the studies cited in this report. We screened more than 170 articles and focused our review on U.S. studies identified among this group that met the following additional criteria. First, they were studies based on original data collection rather than reviews of existing literature. Second, they provided quantitative evidence directly related to our research objectives. Lastly, they provided information related to physical violence against health care workers. For example, we eliminated studies that focused solely on verbal abuse, such as bullying or incivility among health care workers. We conducted detailed reviews of the 32 studies that met these initial screening criteria. Our reviews entailed an assessment of each study’s research methodology, including its data quality, research design, and analytic techniques, as well as a summary of each study’s major findings and conclusions. We also assessed the extent to which each study’s data and methods support its findings and conclusions. We eliminated studies that were not sufficiently reliable and methodologically rigorous for inclusion in our review. For example, we eliminated studies with low survey response rates and studies whose findings were based on information collected from a small number of health care workers. 
We assessed the methodological sufficiency of each study using internal guidance documents. We determined that 17 of the studies were sufficiently reliable and methodologically rigorous for inclusion in our review. To examine the steps OSHA has taken to protect health care workers from workplace violence, we reviewed relevant federal laws and regulations; analyzed OSHA’s guidance, inspection procedures, and enforcement data; and interviewed OSHA officials. We also collected information from all 10 OSHA regional offices on inspector training and how inspectors investigate workplace violence during inspections of health care employers. We analyzed enforcement data from two OSHA databases: the Integrated Management Information System (IMIS) database and the Occupational Safety and Health Information System (OIS) database, which replaced the IMIS system. We analyzed enforcement data from 1991 through April 2015 on federal OSHA inspections, including data on the type of inspection, inspection findings, citations, and penalties. To assess the reliability of the OSHA enforcement data, we reviewed relevant agency documentation, conducted electronic data testing, and interviewed agency officials. Based on these reviews, we determined that the data were sufficiently reliable for our purposes. To examine how selected states have addressed workplace violence in health care settings, we analyzed selected state laws and other information collected from state officials in nine states: California, Connecticut, Illinois, Maine, Maryland, New Jersey, New York, Oregon, and Washington. We focused our review on these nine states because they were the ones we identified from our search of legal databases; related studies; and interviews with federal officials, researchers, and national labor organizations. We did not conduct a nationwide review of state laws or collect information from all 50 states; therefore, other states may have these types of requirements. 
For the nine states we identified, we reviewed information provided by state officials on state requirements, including laws and regulations, for workplace violence prevention programs in health care settings. We confirmed our descriptions of the selected state requirements with state officials as of December 2015. We did not evaluate the quality or effectiveness of state requirements. We visited four of these states–California, Maryland, New York, and Washington–selected for variation in the length of time their state workplace violence prevention laws have been in place. During our visits, we interviewed state officials from the state’s department of labor and department of health, visited one health care facility in each state, and held discussion groups with health care workers. We visited four health care facilities, including two state psychiatric hospitals, a nursing home, and a hospital with an emergency department. We selected these types of facilities because BLS data indicate that most workplace violence incidents occur in hospitals and nursing and residential care facilities. During each of the health care facility visits, we met with security, management, and health care workers. We also participated in a guided tour of the facility. The information we obtained from the states and our site visits is not generalizable. We conducted five nongeneralizable discussion groups with health care workers to learn about their experience with workplace violence. These discussion groups were organized by labor organization officials that represent health care workers. The discussions occurred in Baltimore, Maryland; Los Angeles, California; New York, New York; Seattle, Washington; and Washington, D.C. These locations were selected to align with our selected site visit states. The labor organization officials invited health care workers who had been verbally and/or physically assaulted while performing their duties at work. 
A total of 54 health care workers participated in the discussion groups. The participants worked in various health care practice areas, including home health, acute care, mental health, and residential care. We asked the health care workers about their experience with workplace violence, whether they received workplace violence prevention training, the factors they consider when deciding whether to report an incident to their employer, the factors that contribute to workplace violence, and what could be done to reduce these incidents. We used their responses to identify themes and illustrative examples. Methodologically, discussion groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, they are intended to generate in-depth information about the reasons for the discussion group participants’ attitudes on specific topics and to offer insights into their experiences. Because of these limitations, we did not rely entirely on the information collected from the discussion groups, but rather used several different methodologies to corroborate and support our findings. In addition to the contact named above, Mary Crenshaw (Assistant Director), Cathy Roark (Analyst-in-Charge), Hiwotte Amare, Carl Barden, James Bennett, Rachael Chamberlin, David Chrisinger, Sarah Cornetto, Lauren Gilbertson, LaToya King, Linda Kohn, Joel Marus, Ashley McCall, Jean McSween, Kathy Leslie, Terry Richardson, Stacy Spence, Walter Vance, and Kate van Gelder made significant contributions to this report. Arnetz, J. E., L. Hamblin, J. Ager, M. Luborsky, M.J. Upfal, J. Russell, and L. Essenmacher. “Underreporting of Workplace Violence: Comparison of Self-Report and Actual Documentation of Hospital Incidents,” Workplace Health & Safety, vol. 
63, no. 5 (2015): 200-210. Arnetz, J. E., L. Hamblin, L. Essenmacher, M. J. Upfal, J. Ager, and M. Luborsky. “Understanding patient-to-worker violence in hospitals: A qualitative analysis of documented incident reports,” Journal of Advanced Nursing, vol. 71, no. 2 (2015): 338-348. Bride, B. E., Y.J. Choi, I.W. Olin, and P.M. Roman. “Patient Violence Towards Counselors in Substance Use Disorder Treatment Programs: Prevalence, Predictors, and Responses,” Journal of Substance Abuse Treatment (2015). Campbell, J.C., J.T. Messing, J. Kub, J. Agnew, S. Fitzgerald, B. Fowler, D. Sheridan, C. Lindauer, J. Deaton, and R. Bolyard. “Workplace Violence: Prevalence and Risk Factors in the Safe at Work Study,” Journal of Occupational and Environmental Medicine, vol. 53, no. 1 (2011): 82-89. Casteel, C., C. Peek-Asa, M. Nocera, J.B. Smith, J. Blando, S. Goldmacher, E. O’Hagan, D. Valiante, and R. Harrison. “Hospital Employee Assault Rates Before and After Enactment of the California Hospital Safety and Security Act,” Annals of Epidemiology, vol. 19, no. 2 (2009): 125-133. Gillespie, G. L., D.M. Gates, T. Kowalenko, S. Bresler, and P. Succop. “Implementation of a Comprehensive Intervention to Reduce Physical Assaults and Threats in the Emergency Department,” Journal of Emergency Nursing, vol. 40, no. 6 (2014): 586-591. Hanson, G. C., N.A. Perrin, H. Moss, N. Laharnar, and N. Glass. “Workplace violence against homecare workers and its relationship with workers health outcomes: a cross-sectional study,” BMC Public Health, vol. 15, no. 11 (2015): 1-13. Hodgson, M. J., R. Reed, T. Craig, F. Murphy, L. Lehmann, L. Belton, and N. Warren. “Violence in Healthcare Facilities: Lessons from the Veterans Health Administration,” Journal of Occupational and Environmental Medicine, vol. 46, no. 11 (2004): 1158-1165. Hodgson, M. J., D.C. Mohr, D.J. Drummond, M. Bell, and L. Van Male. 
“Managing Disruptive Patients in Health Care: Necessary Solutions to a Difficult Problem,” American Journal of Industrial Medicine, vol. 55 (2012): 1009-1017. Kelly, E. L., A.M. Subica, A. Fulginiti, J.S. Brekke, and R.W. Novaco. “A cross-sectional survey of factors related to inpatient assault of staff in a forensic psychiatric hospital,” Journal of Advanced Nursing, vol. 71, no. 5 (2015): 1110-1122. Kowalenko, T., D. Gates, G.L. Gillespie, P. Succop, and T.K. Mentzel. “Prospective study of violence against ED workers,” American Journal of Emergency Medicine, vol. 31 (2013): 197-205. Lipscomb, J., K. McPhaul, J. Rosen, J. Geiger Brown, M. Choi, K. Soeken, V. Vignola, D. Wagoner, J. Foley, and P. Porter. “Violence Prevention in the Mental Health Setting: The New York State Experience,” Canadian Journal of Nursing Research, vol. 38, no. 4 (2006): 96-117. Mohr, D. C., N. Warren, M.J. Hodgson, and D.J. Drummond. “Assault Rates and Implementation of a Workplace Violence Prevention Program in the Veterans Health Care Administration,” Journal of Occupational and Environmental Medicine, vol. 53, no. 5 (2011): 511-516. Peek-Asa, C., C. Casteel, V. Allareddy, M. Nocera, S. Goldmacher, E. O’Hagan, J. Blando, D. Valiante, M. Gillen, and R. Harrison. “Workplace Violence Prevention Programs in Hospital Emergency Departments,” Journal of Occupational and Environmental Medicine, vol. 49, no. 7 (2007): 756-763. Peek-Asa, C., C. Casteel, V. Allareddy, M. Nocera, S. Goldmacher, E. O’Hagan, J. Blando, D. Valiante, M. Gillen, and R. Harrison. “Workplace Violence Prevention Programs in Psychiatric Units and Facilities,” Archives of Psychiatric Nursing, vol. 23, no. 2 (2009): 166-176. Pompeii, L. A., A.L. Schoenfisch, H.J. Lipscomb, J.M. Dement, C.D. Smith, and M. Upadhyaya. “Physical Assault, Physical Threat, and Verbal Abuse Perpetrated Against Hospital Workers by Patients or Visitors in Six U.S. Hospitals,” American Journal of Industrial Medicine (2015): 1-11. Speroni, K.G., T. 
Fitch, E. Dawson, L. Dugan, and M. Atherton. “Incidence and Cost of Nurse Workplace Violence Perpetrated by Hospital Patients or Patient Visitors,” Journal of Emergency Nursing, vol. 40, no. 3 (2014): 218-228.
Workplace violence is a serious concern for the approximately 15 million health care workers in the United States. OSHA is the federal agency responsible for protecting the safety and health of the nation's workers, although states may assume responsibility under an OSHA-approved plan. OSHA does not require employers to implement workplace violence prevention programs, but it provides voluntary guidelines and may cite employers for failing to provide a workplace free from recognized serious hazards. GAO was asked to review efforts by OSHA to address workplace violence in health care. GAO examined the degree to which workplace violence occurs in health care facilities and OSHA's efforts to address such violence. GAO analyzed federal data on workplace violence incidents, reviewed information from the nine states GAO identified with workplace violence prevention requirements for health care employers, conducted a literature review, and interviewed OSHA and state officials. According to data from three federal datasets GAO reviewed, workers in health care facilities experience substantially higher estimated rates of nonfatal injury due to workplace violence compared to workers overall. However, the full extent of the problem and its associated costs are unknown. For example, in 2013, the most recent year that data were available, private-sector health care workers in in-patient facilities, such as hospitals, experienced workplace violence-related injuries requiring days off from work at an estimated rate at least five times higher than the rate for private-sector workers overall, according to data from the Department of Labor (DOL). The number of nonfatal workplace violence cases in health care facilities ranged from an estimated 22,250 to 80,710 cases for 2011, the most recent year that data were available from all three federal datasets that GAO reviewed. The most common types of reported assaults were hitting, kicking, and beating. 
The full extent of the problem and associated costs is unknown, however, because according to related studies GAO reviewed, health care workers may not always report such incidents, and there is limited research on the issue, among other reasons. DOL's Occupational Safety and Health Administration (OSHA) increased its education and enforcement efforts to help employers address workplace violence in health care facilities, but GAO identified three areas for improvement in accordance with federal internal control standards. Provide inspectors additional information on developing citations. OSHA has not issued a standard that requires employers to implement workplace violence prevention programs, but the agency issued voluntary guidelines and may cite employers for hazards identified during inspections—including violence in health care facilities—under the general duty clause of the Occupational Safety and Health Act of 1970. OSHA increased its yearly workplace violence inspections of health care employers from 11 in 2010 to 86 in 2014. OSHA issued general duty clause citations in about 5 percent of workplace violence inspections of health care employers. However, OSHA regional office staff said developing support to address the criteria for these citations is challenging and staff from 5 of OSHA's 10 regions said additional information, such as specific examples of issues that have been cited, is needed. Without such additional information, inspectors may continue to experience difficulties in addressing the challenges they reported facing. Follow up on hazard alert letters. When the criteria for a citation are not met, inspectors may issue warnings, known as hazard alert letters. However, employers are not required to take corrective action in response to them, and OSHA does not require inspectors to follow up to see if employers have taken corrective actions. As a result, OSHA does not know whether identified hazards have been addressed and hazards may persist. 
Assess the results of its efforts to determine whether additional action, such as development of a standard, may be needed. OSHA has not fully assessed the results of its efforts to address workplace violence in health care facilities. Without assessing these results, OSHA will not be in a position to know whether its efforts are effective or if additional action may be needed to address this hazard. GAO recommends that OSHA provide additional information to assist inspectors in developing citations, develop a policy for following up on hazard alert letters concerning workplace violence hazards in health care facilities, and assess its current efforts. OSHA agreed with GAO's recommendations and stated that it would take action to address them.
TSA has various processes for receiving and addressing air passenger complaints about the screening systems, procedures, and personnel at airport security checkpoints. Specifically, several TSA headquarters units and local TSA airport staff have responsibility and processes for receiving and addressing these complaints, and, if necessary, referring these complaints to other TSA offices for resolution. Figure 1 depicts the four primary TSA headquarters units and the local TSA airport staff who are responsible for receiving and addressing air passenger screening complaints. As highlighted in figure 1, the TSA Contact Center (TCC) receives, documents, and helps resolve screening complaints that air passengers make by telephone and e-mail. The TCC is TSA’s primary point of contact for collecting, documenting, and responding to public questions, concerns, or complaints regarding TSA security policies, procedures, and programs; reports and claims of lost, stolen, or damaged items; and employment issues. At the airport level, the local TSA staff who receive and address screening complaints include Lead and Supervisory Transportation Security Officers as well as Transportation Security Managers; at Screening Partnership Program (SPP) airports, they include supervisory contractor officials. Federal Security Directors (FSD) and Assistant Federal Security Directors are responsible for overseeing TSA security programs at all commercial airports. According to the Assistant Administrator of TSA’s Office of Civil Rights & Liberties, Ombudsman and Traveler Engagement, the agency is working on an initiative under which an unspecified number of TSA staff will receive training as “passenger advocates” and begin working in this capacity to address air passenger complaints at security checkpoints by January 2013. Customer Support Managers work in conjunction with other FSD staff to resolve customer complaints and communicate the status and resolution of complaints to air passengers. 
They are also responsible for ensuring security procedures and practices are consistently and effectively communicated to air passengers, to the extent permitted by law and regulation. TSA has an operations directive that specifies roles, responsibilities, and time frames for resolving and responding to screening complaints that air passengers submit to the TCC and FSD staff. This directive does not apply to complaints received through other mechanisms, as we discuss later in this report. The agency has also given TSA headquarters units and FSDs discretion in addressing these complaints at airports under their jurisdiction, according to TSA officials. This operations directive provides instructions for processing public inquiries, including air passenger screening complaints, received by the TCC and FSD staff. The directive indicates that inquiries received by the TCC will be answered by the TCC or will be forwarded to the appropriate FSD staff for response, and that inquiries received by FSD staff will be answered by FSD staff or will be forwarded to the TCC for response. In addition, the operations directive provides several time frames for responding to complaints. For example, TSA should respond within 48 hours for e-mail inquiries addressed by the TCC, and within 72 hours for telephone inquiries addressed by the TCC. Overall, upon receiving a complaint, TSA headquarters units and local TSA airport staff may address the complaint directly or refer it to other offices for review and resolution after determining which one has the necessary expertise and knowledge to address the alleged incident. For example, according to TSA officials, if an air passenger submits the complaint through the TCC, TCC staff attempt to resolve it by providing a response to the air passenger using pertinent template language that explains TSA policy and screening procedures. 
Alternatively, the TCC may refer screening complaints for resolution to other TSA headquarters offices, depending on the specific allegation. For example, complaints alleging discrimination on the basis of a disability or medical condition are referred to the Disability Branch. Also, the TCC may forward complaints about customer service to the customer service representative at the airport identified in the complaint for investigation and resolution. Alternatively, if an air passenger submits a complaint directly to TSA staff at the airport, it is the responsibility of these staff members to investigate and resolve the complaint or, if necessary, refer it to TSA units at headquarters, such as the Disability Branch. For example, according to TSA officials, if an air passenger makes a complaint in person at the checkpoint, TSA supervisors and managers are to attempt to resolve the complaint at the checkpoint before the situation escalates. Regardless of whether a complaint is initially received by a TSA headquarters unit or at the airport at which the incident took place, according to TSA officials, local TSA airport officials generally conduct most follow-up investigations since they are well placed to collect additional airport-specific information and interview local staff. However, specific actions taken to investigate and resolve complaints vary by airport. For example, customer service representatives may be involved in reviewing screening complaints, obtaining additional information from the air passengers to determine when and where the incident took place, and reviewing video footage to help identify additional details, such as the identity of the screener(s) who may have been involved and what actually happened during the incident. 
If the situation warrants it, the customer service representative may forward the complaint as well as the video footage to the TSA screening supervisor or manager for additional review and action. The supervisor or manager may review the video footage and obtain a statement from the screener to determine what happened during the incident and the extent to which the screener may have been at fault—for example, whether the screener violated TSA standard operating procedures, or behaved unprofessionally or inappropriately toward the air passenger. Depending on the nature and severity of the allegation, TSA airport staff may also elevate the complaint and evidence to the airport’s Assistant Federal Security Director (AFSD) for Screening or to TSA headquarters units, such as the Disability Branch or the Office of Inspections, for formal investigation. If the investigation finds fault with the screener, the screener’s supervisor or manager is to determine the corrective action to be taken. Corrective actions specified in TSA’s guidance range from requiring the screener to take additional training to correct the behavior, to terminating the screener’s employment for multiple repeat offenses or single egregious actions, such as theft of air passenger property. Following the outcome of the investigation and any resulting corrective actions, the TSA headquarters unit or the FSD or his/her staff, such as a customer service representative, is to communicate the status of the resolution to the air passenger—such as by reiterating that TSA procedures were followed or by issuing an apology and informing the air passenger that corrective actions were taken. TSA’s five centralized mechanisms for receiving air passenger screening complaints provide the agency with a significant amount of information it can use to monitor or enhance screening operations. 
However, TSA does not have agencywide policy, consistent processes, or an agency focal point to guide the receipt of these complaints or to use complaint information to inform management about the nature and extent of the screening complaints to help improve screening operations and customer service. TSA receives and documents screening complaints that air passengers submit through four headquarters units—the TCC, the Executive Secretariat, the Multicultural Branch, and the Disability Branch—as well as the Talk to TSA web-based feedback mechanism, which e-mails the screening complaint information directly to designated TSA airport staff. As shown in figure 3, the number of complaints submitted through these mechanisms fluctuated somewhat from October 2009 through June 2012. The major exception was a very large increase in the number of complaints submitted to three mechanisms in November and December 2010, which may be attributed to several factors, including a November 2010 public opt-out campaign reported by the media to protest the use of Advanced Imaging Technology and enhanced pat-down procedures for screening air passengers. The volume of complaints that TSA received through each of its five main mechanisms varied from October 2009 through June 2012. Also, because these mechanisms use different categories for screening complaints and have different capabilities for data analysis, we were not able to combine the data from these mechanisms to discuss overall patterns and trends in volume or categories of complaints. A discussion of complaint information in each mechanism follows. The TCC received the bulk of the air passenger screening complaints that the agency documented during this time period. Using TCC data, TSA has reported that it receives about 750,000 public inquiries annually through the TCC and that 8 percent of these inquiries involve air passenger complaints (including complaints about screening). 
As noted below, however, this information does not include complaint data from other TSA complaint mechanisms. Specifically, the TCC received a total of 39,616 screening complaints that air passengers submitted by e-mail and telephone from October 2009 through June 2012. The TCC divides screening complaints into seven main categories, five having multiple subcategories. Figure 4 shows the total numbers of screening complaints by the seven main TCC categories, such as 17,153 complaints about pat-down procedures. Figure 5 depicts the numbers of screening complaints that the TCC received from October 2009 through June 2012 by the four main TCC categories having the most complaints. As shown in figure 5, the numbers of screening complaints in these four categories remained relatively stable over this period. The major exception was a very large increase in the number of complaints about pat-down procedures in November and December 2010 and continuing periods of a relatively higher level of pat-down complaints through September 2011. As mentioned before, this increase in complaints may be attributed to several factors, including the November 2010 public opt-out campaign reported by the media to protest the use of Advanced Imaging Technology and enhanced pat-down procedures for screening air passengers. The Office of the Executive Secretariat received 4,011 complaints that air passengers submitted by mail. These complaints include, among other issues, screening complaints related to Advanced Imaging Technology and enhanced pat-down procedures. The Multicultural Branch received 2,899 written screening complaints alleging violations of civil rights and civil liberties, 469 of which it processed as cases. Figure 6 shows the number of cases, by 11 categories, that the branch processed, such as 141 cases related to allegations of discrimination based on race or ethnicity. 
The Disability Branch received 920 written screening complaints alleging discrimination on the basis of disability and medical condition. From these, the branch processed 1,233 cases. Figure 7 shows the number of cases, by 27 categories, that the branch processed, such as 201 cases related to inappropriate screening. The Talk to TSA web-based mechanism received 4,506 air passenger screening complaints from April 2011 through June 2012. When submitting complaints through this mechanism, air passengers can select up to five complaint categories from a list of 20 possible categories. Figure 8 shows the number of screening complaints, by 20 categories, received through this mechanism, such as 1,512 complaints about the professionalism of TSA staff during the screening process. TSA has established five centralized mechanisms for receiving air passenger complaints, but it has not established an agencywide policy, consistent processes, or a focal point to guide receipt and use of this information to inform management about the nature and extent of the screening complaints to help improve screening operations and customer service. With regard to agencywide policy, TSA has not established a policy to guide airports’ efforts to receive air passenger complaints. In the absence of such a policy, TSA officials at airports have wide discretion in how they implement TSA’s air passenger complaint process, including how they receive and document the complaints. For example, at the six airports that we contacted, the use of customer comment cards, which the U.S. General Services Administration (GSA) considers a relatively inexpensive means for government agencies to receive customer feedback, varied by airport. Specifically, customer comment cards were not used at two of the six airports we contacted, according to TSA officials at those airports, while at the other four airports customer comment cards were used to obtain air passenger input in varying ways. 
At two of these four airports, customer comment cards were on display at counters in the security checkpoints. At the other two airports, neither customer comment cards nor information about the cards was on display, but the cards were available to air passengers upon request, according to TSA airport officials. Passengers who are concerned about being late for their flight or about appearing uncooperative may be reluctant to ask for such cards, however. In addition, when TSA receives a customer comment card, either through air passengers mailing the cards, giving them to TSA screening supervisors or managers, or depositing the cards in a box at the security checkpoint, the card is to go to a customer service representative at the airport. However, TSA does not have a policy requiring that customer service representatives track these comment card submissions or report them to one of TSA’s five centralized mechanisms for receiving complaints if the card includes a complaint. As a result, TSA does not know the full nature and extent of the complaints that air passengers make through customer comment cards. Also, TSA officials reported that the agency does not require TSA airport staff to collect and document information on the screening complaints that air passengers submit in person at the airport level because the agency has given these officials broad discretion in addressing these screening complaints. However, without an agencywide policy to guide the receipt and tracking of screening complaints at the airport level, TSA does not have reasonable assurance that headquarters and airport entities involved in the processes of receiving, tracking, and reporting these complaints are conducting these activities consistently. 
Further, TSA does not have a process to use all the information it currently collects in its efforts to inform the public of the nature and extent of air passenger screening complaints, monitor air passenger satisfaction with screening operations, and identify patterns and trends in screening complaints to help improve screening operations and customer service. For example, TSA has five centralized mechanisms through which it receives air passenger complaints, but the agency does not combine information from all of these sources to analyze the full nature and extent of air passenger screening complaints. TSA officials have noted that the agency receives about 750,000 contacts per year from the public by e-mail and telephone through the TCC, and that about 8 percent of these contacts are related to complaints. However, this information does not include data on complaints received through other headquarters units or the Talk to TSA web-based form. We recognize that differences in complaint categories among the various databases could hinder any efforts by TSA to combine the complaint data, which we discuss further below. TSA informs the public of the nature and extent of air passenger screening complaints through the U.S. Department of Transportation’s monthly Air Travel Consumer Report, but the number TSA reports in this publication only includes complaints received through the TCC and does not include the complaints TSA received through its other four mechanisms. The July 2012 report, for example, noted that TSA had received about 900 air passenger screening complaints in May 2012, with screening complaints about courtesy and personal property constituting the bulk of the complaints and screening complaints about processing time and screening procedures constituting the rest of the complaints. 
Further, TSA is using only the complaints received through the TCC to calculate an air passenger satisfaction indicator in its Office of Security Operations’ Executive Scorecard. According to TSA, the purpose of this scorecard is for FSD management and staff to monitor operational effectiveness of airport security checkpoints and make changes as needed, such as to improve screening operations and customer service. TSA officials further stated that the agency has primarily been using the TCC because the TCC information on air passenger screening complaints is readily available. According to the Assistant Administrator of TSA’s Office of Civil Rights & Liberties, Ombudsman and Traveler Engagement, partly as a result of our review, the agency began channeling information from the Talk to TSA database to the TCC in early October 2012. However, it is unclear whether the agency will compile and analyze data from the Talk to TSA database and its other centralized mechanisms in its efforts to inform the public about the nature and extent of screening complaints. It is also unclear whether these efforts will include data on screening complaints submitted locally through customer comment cards or in person at airport security checkpoints. In addition, as discussed earlier, because TSA does not have a consistent process for categorizing air passenger complaints data, including standardized categories of complaints, it is unable to compile and analyze all of the data to identify patterns and trends. Specifically, each of the five centralized mechanisms has different screening complaint categories and different capabilities to analyze the data. As a result, TSA cannot compile information from all five mechanisms to identify patterns and trends in air passenger complaints and monitor its efforts to resolve complaints on a systemic basis. 
For example, while the TCC database and the Talk to TSA database each may have categories with identical or similar names, such as Advanced Imaging Technology and pat-downs, other categories are unique to certain databases. For instance, the TCC database does not have categories or subcategories corresponding to the Talk to TSA categories of carry-on property out of view, permitted/prohibited items, expert traveler and family lanes, or liquids, among others. As a result, TSA cannot combine the data from different databases to identify whether particular aspects of the screening experience may warrant additional attention or whether TSA’s efforts to improve customer service are having any effect on the number of complaints. Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1) calls for agencies to develop control activities, such as policies, procedures, techniques, and mechanisms that enforce management’s directives. A consistent policy to guide local TSA officials’ efforts to receive, track, and report complaints would help provide TSA reasonable assurance that these activities are being conducted in a consistent manner throughout commercial airports and provide the agency with improved ability to oversee these local efforts. Moreover, a process to systematically collect information on air passenger complaints from all mechanisms, including standardization of the categories of air passenger complaints to provide a basis for comparison, would give TSA a more comprehensive picture of the volume, nature, and extent of air passenger screening complaints and better enable the agency to improve screening operations and customer service. TSA officials we interviewed stated that the five mechanisms were designed at different times and for different purposes, and they agreed that the agency could benefit from a consistent complaints policy, a process to collect information from all mechanisms, and a focal point to coordinate these efforts. 
TSA has several methods to inform air passengers about its processes for making screening complaints; however, as with receipt and use of screening complaint data, it does not have an agencywide policy, guidance, and a focal point to guide these efforts, or mechanisms to share information on guidance and best practices among TSA airport staff to ensure consistency in making air passengers aware of TSA processes for submitting complaints about the screening process. At the headquarters level, TSA’s primary method for providing information to air passengers about TSA screening policies and processes is through the agency’s website. During fiscal year 2012, TSA made improvements to its website to make it easier for air passengers to find information about how to provide feedback to TSA, including compliments and complaints, according to TSA officials. For example, the home page of TSA’s website currently displays an icon that allows air passengers to ask questions or submit feedback directly to TSA staff via an electronic form. The home page also displays an icon that provides information for air passengers to contact the TCC, which receives the majority of documented air-passenger-screening-related complaints, and other TSA units involved in addressing screening complaints. At the airport level, TSA has developed several methods that local TSA staff can use to provide information at the checkpoints for air passengers to submit feedback to TSA, such as displaying signs and stickers and providing customer comment cards that contain information for contacting TSA and that allow air passengers to submit compliments and complaints. Figure 9 shows examples of TSA’s methods for informing air passengers on how to submit feedback to the agency. 
TSA has developed standard signs, stickers, and customer comment cards that can be used at airport checkpoints to inform air passengers about how to submit feedback to the agency; however, in the absence of agencywide policy and guidance to inform air passengers, FSDs have discretion in how and whether to use these methods. As a result, there was inconsistent implementation of these methods at the six airports we contacted. For example, at one airport we visited, all four checkpoints had visible signs and stickers advertising TSA’s contact information, while at another airport, we did not observe any signs or visible materials at the checkpoints advertising how to contact TSA, and at a third airport, we observed signs that were partially obscured from air passengers’ view. Specifically, at one checkpoint at that third airport, we observed a sign with a quick response code for providing feedback to TSA about passengers’ screening experience. However, this sign was placed in a corner away from direct air passenger traffic. Also, as previously discussed, at two of six airports we contacted, customer comment cards were displayed at the checkpoint, while at two other airports customer comment cards were provided only to air passengers who specifically ask for the cards or TSA contact information or who request to speak with a screening supervisor or manager, according to TSA airport officials. As mentioned earlier, passengers who are concerned about being late for their flight or about appearing uncooperative may be reluctant to ask for such cards. At the remaining two airports, customer comment cards were not used, according to TSA officials at those airports. Representatives from four of the eight aviation industry groups that we interviewed also stated that the type and amount of information provided to air passengers about feedback mechanisms, such as how to submit complaints, vary among airports. 
TSA airport officials we interviewed at three of the six airports we contacted stated that the agency could take additional actions to enhance air passenger awareness of TSA’s complaint processes, such as posting information on shuttle buses or providing fact sheets or brochures to air passengers earlier in the screening process or during airport check-in. For example, an official at one airport suggested that TSA display audio or video materials describing TSA’s complaint process, rather than posting more signs. Also, as we previously discussed, TSA’s screening complaint processes entail taking corrective actions to improve screening systems, procedures, and staff. However, if air passengers wish to submit screening complaints but are not aware of the processes for doing so, air passengers may be less likely to submit complaints to the agency, thus potentially limiting the agency’s efforts to identify systemic issues and take corrective actions or make any needed improvements to the screening process. The Conference Report accompanying the Consolidated Appropriations Act, 2012, directed TSA to make every effort to ensure members of the traveling public are aware of the procedures and process for making complaints about passenger screening. Moreover, Standards for Internal Control in the Federal Government states that in order to ensure effective communication to achieve agency goals, management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency’s achieving its goals. The standards also call for agencies to develop control activities, such as policies, procedures, techniques, and mechanisms that enforce management’s directives. 
TSA has methods in place and has made efforts to inform air passengers about complaint processes, but opportunities exist to increase air passenger awareness, such as through greater use of the TSA website and brochures or other materials displayed or provided at airport checkpoints, as well as through more consistent implementation of these efforts at airports. TSA officials at four of the six airports we contacted also said that the agency could do more to share best practices among customer service representatives for addressing passenger complaints, including for informing air passengers about complaint processes. For example, TSA holds periodic conference calls for Customer Support Managers to discuss customer service. However, Customer Support Managers have not used this mechanism to discuss best practices for informing air passengers about processes for submitting complaints, according to the officials we interviewed. Also, TSA has not sponsored other information-sharing mechanisms, such as training or conferences, for Customer Support Managers to learn about best practices for informing air passengers, among other things. TSA officials also recognize that passengers may intentionally choose not to submit their complaints to TSA at the airport checkpoint because of the perception that raising a complaint could result in being unfairly subjected to additional screening or being treated rudely by screening officials. In addition, TSA does not have a focal point to coordinate agencywide policy for informing air passengers about complaint processes, or to suggest additional refinements to the overall process for increasing air passenger awareness of the complaints mechanisms. Accordingly, greater awareness of TSA complaint processes could help alleviate passengers’ potential reluctance to submit complaints at the checkpoint. 
An agencywide policy to inform the public about the processes for making complaints, a focal point for developing this policy and guiding TSA’s efforts to implement it, and mechanisms for sharing best practices among local TSA officials could help provide TSA reasonable assurance that these activities are being conducted in a consistent manner across commercial airports and help local TSA officials better inform the public by learning from one another about what practices work well. TSA’s complaint resolution processes do not fully conform to standards of independence established to help ensure that these types of processes are fair, impartial, and credible. Specifically, at the airport level, TSA officials who are responsible for resolving air passenger complaints (referred to in this report as complaint investigators) are not independent of the TSA airport staff who are the subjects of the complaints. Instead, complaint investigators are generally located in the same airport and report to the same chain of command as the individuals who are cited in the complaints. As previously discussed, TSA receives the bulk of the documented screening complaints via the TCC, and, if necessary, these complaints are ultimately investigated and resolved at the local airport level. Under TSA’s process, complaints may be referred to other TSA offices, such as TSA’s Disability and Multicultural Branches. These TSA branches address complaints from all air passengers with disabilities or medical conditions or from air passengers alleging violations of other civil rights or civil liberties. However, all screener-related complaints are investigated at the airport level, even for complaints that are initially referred to the Disability or Multicultural Branch. 
The American Bar Association Revised Standards for the Establishment and Operation of Ombuds Offices, which can be used to guide federal complaint processes, states that a key indicator of independence is whether anyone subject to the ombudsman’s jurisdiction can control or limit the ombudsman’s performance of assigned duties. Further, the ombudsman is to conduct inquiries and investigations in an impartial manner, free from initial bias and conflicts of interest. Similarly, the U.S. Ombudsman Association advocates that to maintain independence, the ombudsman should have the discretion to prescribe how complaints are to be made, received, and acted upon, including the scope and manner of investigations. Moreover, to ensure impartiality, the ombudsman should absent himself or herself from involvement in complaints where a conflict of interest or the appearance of conflict of interest may exist. These standards maintain that independence and impartiality are important when addressing complaints because they establish confidence that the process is fair and credible. While TSA is not required to comply with ombudsman standards, these voluntary standards can serve as a useful guideline for implementing the core principles of an effective complaint-handling process. In addition, Standards for Internal Control in the Federal Government states that key duties and responsibilities need to be divided or segregated among different people. At all six airports that we contacted, TSA Customer Support Managers stated that they receive air passenger complaints, review video footage of the incident, and communicate with complainants about the status and resolution of their complaints. Customer Support Managers also stated that they do not conduct formal investigations to determine the cause of a complaint or whether the screener involved in the complaint was at fault or the air passenger was misinformed. 
Rather, at the five airports that we contacted at which TSA has direct responsibility for screening operations, the Customer Support Managers collect information about the facts and circumstances related to the complaint and forward this information to the screener’s supervisory chain. At these five airports, the TSA screener supervisor or manager is responsible for obtaining the screener’s statement and determining fault as well as any corrective actions that may be taken against the screener. However, TSA Customer Support Managers as well as all TSA screening personnel, including TSA screening supervisors and managers, report to FSDs, and are therefore in the same chain of command as the subjects of air passenger complaints. Because FSDs may be concerned that complaints could reflect negatively on their management of TSA screening operations, this reporting structure raises questions about the independence and impartiality of complaint investigations and about investigators’ ability to conduct credible, unbiased inquiries. Figure 10 depicts a simplified example of the typical reporting structure at airports at which TSA has direct responsibility for screening operations. TSA officials stated that the desire to resolve complaints locally led to TSA’s decision to allow complaint investigators to be located in the same airport with those whom they are investigating. Also, TSA officials noted that resource constraints may limit the agency’s ability to send TSA officials from headquarters offices to conduct independent investigations of complaints at each airport. However, the lack of independence of the complaint investigators creates the potential for a conflict of interest to arise between the investigator and the individual under investigation. 
For this reason, in accordance with ombudsman standards, it is important that the structure of the complaint process ensure both the independence of complaint investigators and the appearance of impartiality, not only so that investigations are fair but also to uphold the credibility of the complaint process. Having a more independent complaint resolution process would better position TSA to make informed and unbiased decisions about complaints and ensure that corrective actions are taken, as needed, against screeners who are reported to have exhibited unprofessional or inappropriate behavior with air passengers. While TSA has an Ombudsman Division that could help ensure greater independence in the complaint processes, it primarily focuses on handling internal personnel matters and is not yet fully equipped to address external complaints from air passengers, according to the head of that division. However, recognizing the importance of independence in the complaint processes, TSA is developing a new process for referring air passenger complaints directly to this office from airports and for providing air passengers an independent avenue to make complaints about airport checkpoint screening. In August 2012, during the course of our review, TSA’s Ombudsman Division began addressing a small number of air passenger complaints forwarded from the TCC, according to the head of that division. TSA also began advertising the division’s new role in addressing passenger screening complaints via the TSA website in October 2012. The Assistant Administrator of TSA’s Office of Civil Rights & Liberties, Ombudsman and Traveler Engagement stated that she expected the Ombudsman Division to begin addressing a greater number of air passenger complaints as a result. 
According to the Assistant Administrator, the division will not handle complaints for which there exists an established process that includes an appeal function, such as disability complaints or other civil rights or civil liberties complaints, in order to avoid duplication of currently established processes. Since the external function of the Ombudsman Division has not yet been fully implemented, it is too early to assess the extent to which this new function of the complaints resolution process will conform to professional standards for organizational independence, and help mitigate possible concerns about impartiality and objectivity. TSA is also in the process of developing a Passenger Advocate Program, which the agency plans to begin implementing by January 2013, according to the Assistant Administrator of TSA’s Office of Civil Rights & Liberties, Ombudsman and Traveler Engagement. This program will entail training selected TSA airport staff to take on a collateral passenger advocate role, according to that official. Passenger advocates will respond in real time to identify and resolve traveler-related screening complaints quickly, consistent with TSA policies and screening procedures, according to the Assistant Administrator. Advocates will also respond to air passenger requests, assist air passengers with medical conditions or disabilities, and be prepared to assist air passengers who provide advance notification to TSA via the national TSA Cares helpline. According to the Assistant Administrator, the Passenger Advocate Program will work in conjunction with the new external complaint function of the Ombudsman Division and provide air passenger advocates with the option to refer air passengers directly to the Ombudsman Division. Because passenger advocates are to serve under the FSD chain of command, this arrangement also raises questions about whether there is a lack of independence between passenger advocates and the subjects of air passenger complaints. 
The Assistant Administrator explained that any perception of lack of independence would be addressed by training passenger advocates to explain to air passengers that they may submit complaints directly to the Ombudsman, who is outside of the airport chain of command. Because this program has not yet been approved by the TSA Administrator or implemented, it is too early to assess the extent to which passenger advocates will help mitigate possible concerns about impartiality and objectivity in the complaint processes. According to available data, TSA receives a relatively small number of complaints considering the millions of air passengers the agency screens each month. However, the agency’s ability to understand the full nature and extent of those complaints is limited because TSA does not systematically collect some of the screening complaint data at the airport level, uses only some of the data it has available to it in its reports and analysis, and collects the data in a manner that makes it difficult for the agency to aggregate and analyze the data for trends. Further, the inconsistent nature of implementation of the screening complaint processes at commercial airports limits TSA’s ability to oversee these efforts. Thus, a policy to consistently guide agencywide efforts to receive, track, and report air passenger screening complaints would help provide TSA reasonable assurance that TSA headquarters and airport entities are conducting these activities consistently. Moreover, a consistent process to systematically analyze information on air passenger screening complaints from all mechanisms for receiving complaints, including standardized screening complaint categories and capabilities for data analysis, would give TSA a more comprehensive picture of the volume, nature, and extent of air passenger screening complaints and better enable the agency to improve screening operations and customer service. 
In addition, designating a focal point for developing and coordinating agencywide policy on air passenger screening complaint processes, guiding the analysis and use of the agency’s screening complaint data, and informing the public about the nature and extent of screening complaints would help ensure that these efforts are implemented consistently throughout the agency. Finally, TSA has a number of methods to inform the public about its processes for submitting screening complaints, but does not have an agencywide policy to guide these efforts or mechanisms for sharing best practices for informing air passengers about screening complaint processes, which could help TSA staff—particularly at the airport level—better inform the public by learning from one another about what is working well. To improve TSA’s oversight of air passenger screening complaint processes, we recommend that the Administrator of TSA, consistent with standards for internal control, take the following four actions: establish a consistent policy to guide agencywide efforts for receiving, tracking, and reporting air passenger screening complaints; establish a process to systematically compile and analyze information on air passenger screening complaints from all complaint mechanisms; designate a focal point to develop and coordinate agencywide policy on screening complaint processes, guide the analysis and use of the agency’s screening complaint data, and inform the public about the nature and extent of screening complaints; and establish agencywide policy to guide TSA’s efforts to inform air passengers about the screening complaint processes and establish mechanisms, particularly at the airport level, to share information on best practices for informing air passengers about the screening complaint processes. We provided a draft of this report to the Department of Homeland Security (DHS) for comment. 
DHS, in written comments received October 16, 2012, concurred with the recommendations and identified actions taken, under way, or planned to implement the recommendations. Written comments are summarized below, and official DHS comments are reproduced in appendix I. In addition, DHS provided written technical comments, which we incorporated, as appropriate. In response to our recommendation that TSA establish a consistent policy to guide agencywide efforts for receiving, tracking, and reporting air passenger screening complaints, DHS concurred with the recommendation and stated that TSA would review current intake and processing procedures at headquarters and in the field and develop policy, as appropriate, to better guide the efforts of headquarters and field locations in receiving, tracking, and reporting air passenger screening complaints. We believe that these are beneficial steps that would address our recommendation, provided that the resulting policy refinements improve the existing processes for receiving, tracking, and reporting all air passenger screening complaints, including the screening complaints that air passengers submit locally at airports through comment cards or in person at security checkpoints. In response to our recommendation that TSA establish a process to systematically compile and analyze information on air passenger screening complaints from all complaint mechanisms, DHS concurred with the recommendation and stated that TSA, through the TCC, is taking steps to increase its analysis of passenger complaint information and will build on this effort to further compile and analyze information on air passenger screening complaints. However, DHS did not provide additional details on the steps TSA is taking, so we cannot comment on the extent to which these steps will fully address our recommendation. 
In its technical comments, TSA stated that the agency began channeling information from the Talk to TSA database to the TCC on October 3, 2012, and we updated our report accordingly. However, it is still unclear whether TSA will compile and analyze data from the Talk to TSA database and its other centralized mechanisms in its efforts to inform the public about the nature and extent of screening complaints and whether these efforts will include data on screening complaints submitted locally at airports through customer comment cards or in person at airport security checkpoints. It is also unclear how TSA will address the difficulties we identified in collecting standardized screening data across different complaint categories and mechanisms. As highlighted in our report, establishing a consistent process to systematically compile and analyze information on air passenger screening complaints will help provide TSA with a more comprehensive picture of the volume, nature, and extent of air passenger screening complaints and better enable the agency to improve screening operations and customer service for the traveling public. In response to our recommendation that TSA designate a focal point for the complaints identification, analysis, and public outreach process, DHS concurred with the recommendation and stated that the Assistant Administrator for the Office of Civil Rights & Liberties, Ombudsman and Traveler Engagement is the focal point for overseeing the key TSA entities involved with processing passenger screening complaints. We are encouraged that the agency has identified a focal point for these efforts but note that the Assistant Administrator only oversees the TSA’s complaint-related processes in the Office of Civil Rights & Liberties, Ombudsman and Traveler Engagement. Thus, it will be important for the Assistant Administrator to coordinate with other TSA offices when acting as the TSA focal point to address the weaknesses we identified in our report. 
For example, as mentioned in DHS’s comment letter, it will be important for the Assistant Administrator to work closely with the office of the Assistant Administrator of Security Operations because this office oversees screening operations at commercial airports and security operations staff in the field who receive screening complaints submitted through customer comment cards or in person at airport security checkpoints. The Assistant Administrator for the Office of Civil Rights & Liberties, Ombudsman and Traveler Engagement will also need to coordinate with the Office of the Executive Secretariat, which is not mentioned in DHS’s comment letter, given the thousands of air passenger complaints that this office receives, as well as with other DHS and TSA offices that have a role in the air passenger complaint processes, including, but not limited to, the TSA Office of Inspections, TSA Office of Legislative Affairs, and the DHS Office of the Inspector General. In response to our recommendation that TSA establish agencywide policy to guide TSA’s efforts to inform air passengers about the screening complaint processes and establish mechanisms, particularly at the airport level, to share information on best practices for informing air passengers about the screening complaint processes, DHS concurred with the recommendation. DHS stated that TSA would develop a policy to better inform air passengers about the screening complaint processes, to include mechanisms for identifying and sharing best practices for implementing these processes at the airport level. We will continue to monitor TSA’s progress in implementing this recommendation. We are sending copies of this report to the Secretary of Homeland Security, the TSA Administrator, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions concerning this report, please contact me at (202) 512-4379 or at lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Jessica Lucas-Judy (Assistant Director), Carissa Bryant, and Juan Tapia-Videla made significant contributions to the work. Also contributing to this report were David Alexander, Lydia Araya, Tom Lombardi, Lara Miklozek, and Linda Miller.
TSA, which screens or oversees the screening of over 650 million air passengers per year, has processes for addressing complaints about air passengers’ screening experience at checkpoints, but concerns have been raised about these processes. The Conference Report accompanying the Consolidated Appropriations Act, 2012, directed TSA to ensure the traveling public is aware of these processes and GAO to review TSA’s policies and procedures for resolving passenger complaints. This report addresses the extent to which TSA has (1) policies and processes to guide the receipt of air passenger screening complaints and use of this information to monitor or enhance screening operations, (2) a consistent process for informing passengers about how to make complaints, and (3) complaint resolution processes that conform to independence standards. To address these objectives, GAO reviewed TSA documentation, analyzed complaint data from October 2009 through June 2012, and interviewed TSA officials from headquarters offices and six airports selected for type of security, among other things. The airport interviews are not generalizable but provide insights. The Transportation Security Administration (TSA) receives thousands of air passenger screening complaints through five mechanisms, but does not have an agencywide policy or consistent processes to guide receipt and use of such information. For example, from October 2009 through June 2012, TSA received more than 39,000 screening complaints through its TSA Contact Center (TCC). However, the data from the five mechanisms do not reflect the full nature and extent of complaints because local TSA staff have discretion in implementing TSA's complaint processes, including how they receive and document complaints. For example, comment cards are used at four of the six airports GAO contacted, but TSA does not have a policy requiring that complaints submitted using the cards be tracked or reported centrally. 
A consistent policy to guide all TSA efforts to receive and document complaints would improve TSA's oversight of these activities and help ensure consistent implementation. TSA also uses TCC data to inform the public about air passenger screening complaints, monitor operational effectiveness of airport security checkpoints, and make changes as needed. However, TSA does not use data from its other four mechanisms, in part because the complaint categories differ, making data consolidation difficult. A process to systematically collect information from all mechanisms, including standard complaint categories, would better enable TSA to improve operations and customer service. TSA has several methods to inform passengers about its complaint processes, but does not have an agencywide policy or mechanism to ensure consistent use of these methods among commercial airports. For example, TSA has developed standard signs, stickers, and customer comment cards that can be used at airport checkpoints to inform passengers about how to submit feedback to TSA; however, GAO found inconsistent use at the six airports it contacted. Specifically, two airports displayed customer comment cards at the checkpoint, while at two others the cards were provided upon request. Passengers may be reluctant to ask for such cards, however, according to TSA. TSA officials at four of the six airports also said that the agency could do more to share best practices for informing passengers about complaint processes. Policies for informing the public about complaint processes and mechanisms for sharing best practices among local TSA officials could help provide TSA reasonable assurance that these activities are being conducted consistently and help local TSA officials learn from one another about what practices work well.
TSA's complaint resolution processes do not fully conform to standards of independence to ensure that these processes are fair, impartial, and credible, but the agency is taking steps to improve independence. Specifically, TSA airport officials responsible for resolving air passenger complaints are generally in the same chain of command as TSA airport staff who are the subjects of the complaints. TSA is developing a new process that could help ensure greater independence by having TSA units refer air passenger complaints directly to its Ombudsman Division and by providing passengers an independent avenue for making complaints to that division. TSA also plans to initiate a program by January 2013 in which selected TSA airport staff are to be trained as passenger advocates as a collateral duty. It is too early to assess the extent to which these initiatives will help mitigate possible concerns about independence. GAO recommends that TSA, among other actions, establish (1) a consistent policy for receiving complaints, (2) a process to systematically analyze information on complaints from all mechanisms, and (3) a policy for informing passengers about the screening complaint processes and mechanisms to share best practices among airports. TSA concurred and is taking actions in response.
For audit purposes, IRS splits large corporations (those reporting $10 million or more in gross assets) into two groups. Of the 46,700 large corporations in 1994, IRS placed about 1,700 corporations, usually exceeding $250 million in assets, into its Coordinated Examination Program (CEP). IRS audits the large corporations not in CEP (hereafter referred to as “large corporations”) under the Examination Division’s general program. The Examination Division audits tax returns to determine whether taxpayers paid the correct amount of tax. As discussed later in detail, IRS audit staff are to take various steps before auditing a return. First, the staff must classify and select a return for audit. IRS classifies returns to highlight tax issues (e.g., income, deductions, credits) that should be audited. Then, an IRS revenue agent is to plan how to audit such issues and collect information from the large corporation, as needed. If the corporation does not provide all requested information in a reasonable period without a valid excuse, IRS may issue a legal summons to compel the taxpayer to comply. The Department of Justice works with IRS to enforce the summons in court. For each audit issue, if the revenue agent views this information as insufficient support for the position taken on the return, the agent is to recommend adjustments to the return and compute a corrected tax liability. On the other hand, if the information supports the return filed by the large corporation, the agent is to recommend no tax change. The revenue agent presents the audit results to the large corporation officials, who may either agree or disagree. If the large corporation agrees, any additional tax that the revenue agent recommended is assessed. If the large corporation disagrees, it may file a protest with IRS’ Office of Appeals, which is tasked with settling tax disputes without litigation on the basis of what is fair to the government and the taxpayer.
An appeals officer is to evaluate the relative strengths of the government’s and taxpayer’s positions by reviewing the facts, including additional information provided by the taxpayer, pertinent court decisions, and the results of informal conferences with the taxpayer. To settle a tax dispute, an appeals officer can consider the hazards of litigation. The officer is then to negotiate mutual concessions in an attempt to arrive at a settlement. If a case is settled, any additional tax is assessed and the appeals officer is to prepare an Appeals case memorandum, or written summary, of how the case was handled. The summary is to include the issues raised; pertinent facts; applicable regulations, rulings, and court decisions; and the merits and hazards of litigation of each side. If a case is not settled, Appeals is required to issue a notice of deficiency and the taxpayer has 90 days to file a petition with the Tax Court. Even after a case is docketed in court, IRS District Counsel, by itself or by reengaging Appeals, may attempt to settle the case prior to trial. IRS data showed that in fiscal year 1992, Examination sent 2,235 large corporate cases to Appeals. As of late fiscal year 1995, Appeals had settled about 1,800 of those cases. Of those not settled by Appeals, three were settled by District Counsel, two were decided at trial, and the remainder were still open in Appeals. Our objective was to determine what factors affected the results of auditing large corporations as well as the amount of additional taxes recommended in these audits that are ultimately assessed. To accomplish our objective, we used two methodologies. First, we sent questionnaires to IRS revenue agents, IRS appeals officers, and corporate taxpayers associated with a nationally representative sample of audits in which a large corporation agreed with the additional recommended taxes at the end of either the audit or appeals processes during fiscal year 1994.
To focus on larger audits, we restricted this questionnaire study to the universe of large corporate audits with $75,000 or more in recommended additional taxes and concentrated about one-third of the sample in a stratum with recommended taxes of $1 million or more. Our sample of 500 included about $2.3 billion of the $2.6 billion in recommended additional taxes and $648 million of the $810 million in taxes assessed from the 1,266 large corporate audits in our universe. Appendix I provides a detailed description of our sample selection methodology. We also sent a more general questionnaire to IRS group managers because, being responsible for many types of audits, they were not as likely as the above respondents to recall information about specific audits of large corporations. We randomly sampled group managers nationwide who had large corporate audits in their inventories as of August 1995. Questionnaire results for revenue agents, group managers, and appeals officers are presented in appendices II through IV, respectively. Because the questionnaires were sent to a sample rather than all members of their respective universes, all of the sample results are subject to sampling error. Unless otherwise noted, all estimates presented in this report have a 95 percent confidence interval of less than plus or minus 10 percent. Questionnaire results for the large corporations are not included because of a low response rate that did not allow us to develop estimates. Second, we obtained input from various IRS staff. We visited IRS’ National Office, its 4 regional offices, and 7 of its 33 district and appeals offices to interview key officials. During the design phase of our review, we visited three additional districts and two additional appeals offices. In the National Office, we contacted officials in the Examination Division, National Appeals Office, and the Strategic Planning Division. 
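The sampling-error statement above can be illustrated with a short computation. This is a sketch only: the report's estimates came from a stratified design whose stratum structure is not reproduced here, and the 38 percent figure, the sample and universe sizes, and the simple-random-sample formula below are illustrative stand-ins.

```python
import math

Z_95 = 1.96  # two-sided z-value for a 95 percent confidence level

def ci_half_width(p_hat, n, population):
    """Half-width of a 95% confidence interval for an estimated proportion,
    using a normal approximation with a finite-population correction."""
    fpc = (population - n) / (population - 1)  # finite-population correction
    se = math.sqrt(p_hat * (1 - p_hat) / n * fpc)
    return Z_95 * se

# Hypothetical inputs: 500 sampled audits from a universe of 1,266,
# with 38 percent of respondents reporting a given condition.
half = ci_half_width(0.38, 500, 1266)
print(f"95% confidence interval: 38% +/- {half * 100:.1f} percentage points")
```

Under these assumed inputs the half-width is well under the plus or minus 10 percent bound that the report states for its estimates.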
In conjunction with our site visits, we interviewed selected Appeals, District Counsel, and Examination officials to obtain their views. Appendix V lists all locations visited and the officials interviewed at each location. In addition, we asked the Examination Chiefs in all 33 IRS district offices nationwide and Appeals Chiefs in all 33 appeals offices nationwide to give us their written comments on certain factors related to these large corporate audits. We received responses from 31 (94 percent) of the Examination Chiefs and 30 (91 percent) of the Appeals Chiefs. Their views are incorporated throughout this report. We obtained oral comments on a draft of our report from IRS and the Tax Executives Institute (TEI). We discuss such comments and our evaluation of them at the end of this letter. Overall, we conducted our work at IRS’ National Office, 4 regional offices, 10 of the 33 district offices, and 9 of the 33 appeals offices. In addition, we used questionnaires received from all of IRS’ 33 district offices. We also asked Examination Chiefs and Appeals Chiefs nationwide to give us their comments on factors related to these large corporate audits. We did our work from May 1995 to November 1996 in accordance with generally accepted government auditing standards. Our 1995 report on large corporate audit trends provided statistics on IRS audits of large corporations between fiscal years 1988 and 1994. These statistics covered the audit results and assessment rate over this 7-year period. Neither we nor IRS knows what the assessment rate should be, but these statistics indicate that IRS has been investing a lot of time and money recommending additional taxes that do not get assessed. For example, in a comparison of data for 1988 and 1994, we found that IRS invested more resources in large corporate audits but recommended less additional tax per hour. IRS spent 25 percent more hours and audited only 3 percent more returns. 
Even so, the amount of taxes recommended (in constant dollars) dropped 23 percent per audit hour and 7 percent per audited return. In addition, IRS’ no-change rate doubled from 8 percent to 16 percent. Further, for the 7-year period, we computed that IRS assessed, on average, 27 percent of the additional taxes that IRS revenue agents recommended in these audits. The assessment rate includes the amount of recommended taxes that the large corporations agreed to pay at the end of the audit as well as those amounts sustained after any appeal. Over the 7 years, large corporations appealed between 66 and 85 percent of the additional taxes recommended and agreed to pay the rest. Since 1990, corporate taxpayers have been appealing a lower percentage of the recommended taxes and agreeing to a higher percentage. We identified four factors that affected the assessment rate and/or audit results, such as the lower recommended taxes per audit hour in 1994 compared to 1988. Although the exact impact is unknown, each factor can affect both the rate and results. For example, three of the four factors—complex tax laws, conflicting performance measures between Examination and Appeals, and limited coordination between these two IRS functions—can produce a lower assessment rate. As for lower audit results, the three factors can each have a different impact. Complex laws cause IRS’ audits to be very time consuming, which can lower the amount of recommended taxes per hour. Although Examination’s performance measures would encourage higher amounts of recommended tax, Appeals measures would not be as likely to affect the audit results. Limited coordination between Appeals and Examination was unlikely to affect the audit results being disputed by corporations because the audits had already been done. 
On the other hand, future audit results on similar tax issues were likely to be reduced when revenue agents did not receive feedback on which disputed issues were conceded and why; such knowledge could enhance future audits. The fourth factor entailed a number of aspects of an audit that could reduce the taxes recommended per audit hour or the assessment rate. Because revenue agents generally worked alone without much assistance from counsel or their management, they needed more time to develop enough support to recommend taxes that could be assessed after an appeal. These agents also did not have a sufficient basis for selecting corporate tax returns with potential for significant tax changes. Generally, audits of returns with low potential were more likely to result in recommendations for little or no tax change and were less likely to be appealed. Thus, these audits would generally have little effect on the assessment rate. However, when a revenue agent tried to recommend taxes without sufficient support, such recommended taxes would not likely be sustained in Appeals, and the assessment rate would be lower. The following sections discuss each of these four factors in more detail. IRS and large corporate taxpayers can have legitimate differences over how tax laws should be interpreted. We found that complex, ambiguous laws have created opportunities for both large corporations and IRS to interpret the tax laws differently. This discretion, in turn, increased the likelihood of tax disputes. Without clear tax laws, resolution of these disputes can get complicated and can ultimately depend on the negotiating skills of the IRS and corporate representatives. Because the corporate representatives have usually prevailed in Appeals or the courts, recommended additional taxes have tended not to be assessed. We have previously reported that the federal tax laws are complex, difficult to understand, and in some cases indecipherable. 
Some of the large corporate officials who responded to our survey indicated that a major reason for disputing recommended taxes was revenue agents’ interpretation of tax laws. We estimate that revenue agents judged that about 86 percent of the corporate tax disputes were due to different interpretations of the tax laws. Appeals officers in our universe cited the hazards of litigation as the primary reason for resolving these interpretive differences in favor of the corporations for an estimated 56 percent of the additional taxes being appealed. The National Director of Appeals told us in a letter that these audits often raise issues involving substantial doubt or variances of opinion because these issues are complex and not definitively answered by litigation. The complex tax laws also affected IRS’ ability to conduct audits, according to 21 of the 33 Examination Chiefs and 27 of the 33 Appeals Chiefs nationwide. Such complexity, in combination with the broad scope of the tax laws, made it difficult for IRS to ensure that its revenue agents stayed current in their tax law knowledge and for large corporations to comply with the tax laws. “Taxpayers are aware of the difficulty of determining with exactness the liability that they have. They are also aware that the courts cannot resolve all disputes arising out of the audit process. Therefore, the Service must pursue the administrative resolution of these cases whenever possible. The fact that the Service is highly motivated to resolve cases without litigation means that compromises on difficult and controversial issues will take place. Knowing this, taxpayers naturally take advantage of the process to dispute those issues on which some doubt exists.” To help resolve problems with tax law complexity and recurring issues in CEP audits, our 1994 report recommended that IRS more strongly propose changes to the tax laws. IRS agreed and has established a work group to evaluate ways to implement this recommendation.
To the extent that IRS is successful in getting Congress to simplify the various complex tax issues, large corporations are likely to benefit as well as IRS. IRS’ overall mission is to collect the proper amount of taxes in a manner that is efficient and fair and promotes public confidence. The Examination and Appeals functions also have important missions that should contribute to IRS’ overall mission. Revenue agents are charged with protecting the government’s interest in receiving the proper amount of tax. They are instructed to make their audit recommendations without deviating from IRS’ legal positions or considering the hazards of litigation (i.e., the chance of losing in court). On the other hand, appeals officers are charged with resolving tax controversies without litigation to the extent possible while being fair and impartial to both the government and the taxpayer. They are instructed to consider the hazards of litigation and may concede the recommended taxes in part or in whole on that basis. Performance measures typically move a function toward desired ends within a mission. In doing so, the performance measures within the two functions reflect their respective missions and may not encourage the functions to work together effectively to accomplish IRS’ overall mission. For example, Examination has traditionally focused on measuring the amount of additional taxes recommended per audit and per audit hour. On the other hand, Appeals has focused on measuring the number of tax disputes settled as quickly as possible without litigation. These different measures have the potential to lead to a lower assessment rate. The audit measures may encourage revenue agents to propose tax adjustments regardless of whether they can be sustained on appeal and discourage agents from fully developing issues because of time pressures to close the audits. 
Appeals’ measures may encourage appeals officers to settle more cases in less time even when some of the recommended taxes have a justifiable basis under vague or complex tax laws. As a result, a high proportion of recommended taxes may not be assessed, but Examination could claim success for recommending high amounts of taxes and Appeals could claim success for settling the case without litigation. In our 1994 CEP report, we reported a similar situation for CEP audits and recommended that IRS add an IRS-wide measure, such as the collection rate, to the functional measures. Although IRS disagreed with this recommendation when commenting on a draft of the CEP report, IRS officials subsequently told us they plan to implement such an IRS-wide measure in some form during fiscal year 1998. Such a measure could similarly be applied to various types of audits, including audits of other large corporations. An IRS-wide measure such as the collection and/or assessment rate could encourage IRS functions to work together to accomplish IRS’ overall mission of collecting the proper amount of tax. National Office Examination and Appeals officials expressed concerns about possible unintended effects from creating such a measure. For example, they said an overall IRS measure such as the assessment rate could encourage revenue agents to avoid raising difficult audit issues or appeals officers to settle disputes just to drive up the assessment rate. However, this measure of the tax outcomes also would be likely to encourage revenue agents to more fully develop audit issues that could be sustained if appealed. As discussed later, such a measure also could encourage appeals officers to coordinate with Examination while still remaining impartial and independent in settling tax disputes. As measures are emphasized over time, they become ingrained, making changes very difficult. 
At every location we visited, we heard about the driving force of existing measures from Examination or Appeals officials and the difficulty of changing or adding to them. These officials noted that as new measures are introduced, the culture of the organization will resist change and cling to the past. Many Examination and Appeals managers we contacted also expressed concerns over using an assessment rate as a measure for the large corporate program. In part, they pointed to impurities in IRS’ databases that do not allow them to separate audit actions from nonaudit actions, such as claims or net operating losses. IRS has been developing a new database to help identify these problems and their impacts on the revenue collected due to audits and other enforcement efforts. One case in our sample epitomizes the concerns about the assessment rate being skewed by nonaudit actions. In this case, the revenue agent recommended several hundred million dollars in additional taxes. Appeals sustained 100 percent of the issues and the taxes recommended by the revenue agent. However, the large corporation submitted additional information as well as a net operating loss and other claims during the Appeals process. Appeals accepted and approved these losses and claims. The losses and claims almost completely offset the additional taxes recommended by the revenue agent. As a result, about 1 percent of the recommended taxes was assessed. Until the databases account for them, nonaudit actions that are considered during the Appeals process will continue to overstate or understate the rate at which taxes recommended in audits get assessed. On the other hand, of the 40 regional and district officials we interviewed, 14 told us they supported using the assessment rate. 
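The arithmetic of the offset case described above can be sketched as follows. The dollar amounts are hypothetical stand-ins (the report says only that several hundred million dollars were recommended and about 1 percent assessed); the point is how fully sustained audit recommendations can still yield a near-zero assessment rate once nonaudit offsets accepted in Appeals are applied.

```python
# Hypothetical figures patterned on the case above: Appeals sustained
# 100 percent of the recommended taxes, but net operating losses and
# other claims accepted during Appeals offset nearly all of them.
recommended = 300e6        # hypothetical: "several hundred million" recommended
sustained_share = 1.00     # Appeals sustained all of the audit issues
nonaudit_offsets = 297e6   # hypothetical NOL and other claims approved in Appeals

assessed = recommended * sustained_share - nonaudit_offsets
rate = assessed / recommended
print(f"Assessed ${assessed / 1e6:.0f} million, or {rate:.0%} of the recommended taxes")
```

Under these assumed numbers, an audit in which every issue was sustained still registers an assessment rate of about 1 percent, which is why unadjusted databases can overstate or understate how well audit recommendations hold up.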
One Appeals Chief told us, “The measurement standards would be more appropriately based on dollars ultimately assessed and collected.” In addition, at least one official from each function—Appeals, Counsel, and Examination—in the four regions told us that both Examination and Appeals should be accountable for the assessment rate. Further, 7 of the 33 Examination Chiefs said they already used a cross-functional measure, such as the amount of additional taxes recommended that gets assessed, as an additional way to evaluate audit effectiveness. Audits of these large corporations can be complex and technical but are generally done by a single revenue agent. Although they worked alone, these revenue agents received little assistance from district counsel or their group managers. Also, IRS’ approach for classifying and selecting these large corporate returns did not help ensure that revenue agents spent their audit time on the most noncompliant returns. Finally, the agents had difficulty obtaining information from the large corporations. In combination, these circumstances made it difficult for revenue agents to recommend taxes that had enough support to be assessed without investing a lot of time. IRS officials said the level of large corporate auditing experience for revenue agents was not as high as they would like it to be. For the large corporations in our study, the average return was audited by a single revenue agent with about 8.5 years of corporate auditing experience. IRS has lost about 1,800 experienced revenue agents over the past 3 years. IRS National Office Examination officials as well as regional and district officials interviewed noted that if IRS continues to lose its senior revenue agents without being able to replace them, corporate audits will become less productive. Furthermore, these agents could not easily develop corporate expertise because they generally conducted many other types of audits, such as those of partnerships and individuals.
Given the level of experience of these revenue agents and the complexity of the tax law, training in corporate income tax practices and the tax laws is important. In this regard, the revenue agents in an estimated 38 percent of the audits in our study population believed that they needed, but had not received, training that would have improved their ability to conduct their audits. A common need cited was for more industry-related training. Further, 25 of the 33 Examination Chiefs nationwide indicated that additional training in specific industries would enhance audits of complex, technical issues. A regional task force cited a need for additional training so revenue agents could become more proficient in recognizing and developing corporate issues. In February 1997, National Office Examination officials told us they were developing a specific course that will be used to train all revenue agents assigned to large corporate audits. To help guide revenue agents doing large corporate audits, they also planned to have audit criteria and procedures in place by the end of calendar year 1998. However, six Examination Chiefs pointed out the difficulty in providing additional training when training funds have been diverted to other areas because of budget limitations. For example, one Examination Chief told us that for fiscal year 1996 the training budget was cut so severely that Examination could not conduct continuing professional education for revenue agents. National Office Examination officials told us during November 1996 that IRS added $10 million to fiscal year 1997 training funds across IRS, of which Examination received $1.4 million. According to one of the officials, these funds should help Examination provide most, but not all, of the basic continuing professional education training to its revenue agents. Moreover, this official said funding for training is unlikely to improve for fiscal year 1998 under the current budget environment. 
Working alone on these corporate audits, revenue agents may need assistance in planning and developing their audits. However, we found that revenue agents usually did not request assistance from district counsel or their group managers on planning and doing the audits. Revenue agents for most of the 1,266 audits in our study population said they did not request any legal assistance on matters of tax law or overall issue development. We estimate that revenue agents reported requesting assistance from the Office of District Counsel for about 14 percent of the audits, and from the Office of Chief Counsel for about 8 percent of the audits. However, for an estimated 55 percent of those audits in which revenue agents requested assistance, they judged that such assistance had a positive or very positive effect on their ability to obtain the taxpayers’ agreement. Appeals officers consulted with district counsel during resolution of an estimated 20 percent of the most significant issues raised by revenue agents. For about half of these consultations, the appeals officers indicated that District Counsel helped them to resolve the disputes to a great or very great extent. Our interviews with district office officials identified a major reason for infrequent requests for legal assistance. These officials were concerned about revenue agents and appeals officers not receiving the assistance in a timely manner. Counsel officials in the four districts we visited acknowledged that responding to requests for formal legal assistance can be time-consuming. However, these officials told us they could help improve the effectiveness of the large corporation audits by assisting the revenue agent in developing audit issues and obtaining requested information. They believed that such involvement could be justified and helpful. 
In February 1997, National Office Examination officials told us that Counsel involvement in CEP cases was working well and that they supported looking for ways to increase Counsel’s involvement in the large corporate cases. However, Counsel officials cautioned that increased involvement would have to be on a selective and informal basis due to staffing constraints. Less than half of the revenue agents in our universe indicated their group managers were involved in identifying audit issues, discussing complex audit issues, obtaining information from the taxpayer, or resolving disputed issues. In well over half of the audits in which revenue agents reported that their managers were involved, the agents said that such involvement helped them. For example, we estimated that in 207 of the audits in our population, revenue agents indicated that their group managers were involved in obtaining requested information from taxpayers; in an estimated 83 percent of those audits, the revenue agents viewed such involvement as either very positive or somewhat positive. Examination officials and the regional task force report provided insights on why managers were not more frequently involved in agents’ audits. For example, they said most group managers did not have sufficient experience or time to substantially assist revenue agents. Examination officials from the districts we visited told us that group managers were responsible for many revenue agents and other auditors who audit a range of tax returns, from individual through complex corporate returns, involving different tax rules and issues. Officials said that group managers tended to focus their attention on newer staff and administrative duties. They said that as a result, revenue agents were left to conduct these corporate audits with minimal managerial involvement, and group managers lost the opportunity to develop their corporate audit experience.
Both the Examination officials and the regional task force report concluded that these large corporate audits were more effective when group managers with corporate audit experience were actively involved. For example, 20 of the Examination Chiefs nationwide indicated that group manager involvement was crucial to the success of these audits. To increase managerial involvement and audit effectiveness, four districts we visited had recently created groups of existing revenue agents that specialized in large corporate audits. Managers with extensive corporate auditing experience led these groups to help their agents get assistance in selecting, planning, and doing audits. District officials believed that these groups, although fairly new, have improved the effectiveness of large corporate audits, in part because of the focus and assistance of group managers. IRS’ National Office has not yet issued any uniform guidance on how to measure the success of these groups. Accordingly, not all districts were consistently measuring the impacts; some were focusing on different audit results (e.g., recommended taxes per hour versus no-change rate). National Office Examination officials told us that they would like to learn more about the impacts of these specialized groups across the districts that had created them. In evaluating these groups, it is important to recognize that some districts may not have enough corporate workload or revenue agents to justify these specialized groups. That is, such districts may wish to maintain flexibility in using revenue agents on other than large corporate audits. At least one Examination Chief was concerned about the potential impacts on audit results in the short term. Even so, officials in these districts believed that these specialized groups will ultimately yield better large corporate audit results, offsetting any initial decline in the results. 
Moreover, if the districts that were experimenting with such groups maintain a similar level of investment in large corporate audits, shifting the agents into specialized groups would not necessarily increase IRS’ costs or reduce resources for other types of audits. Compared to CEP tax returns, the approach for selecting these large corporate returns was more subjective and varied. To determine which large corporations to select for CEP, IRS scores corporate tax returns on specific criteria, such as corporate structure, assets, and income. IRS does not have a consistent approach or criteria for classifying and selecting tax returns for large corporations not in CEP. The approach and criteria varied by district. In general, revenue agents and/or their group managers selected the returns to audit, depending on the IRS district. Many districts charged revenue agents with both classifying and selecting issues for audit, and some districts had other auditors do the initial selection and classification. Some districts relied on service center staff to classify large corporate returns, using criteria provided by that district, or subjectively without using any such criteria before sending the selected returns to the district. In sum, our analysis of questionnaire responses and our interviews with IRS officials showed that the IRS staff doing the selection and classification had to ultimately rely on their experience and judgment about audit potential. They had limited criteria and little information on (1) any previous audits of the large corporation or (2) overall large corporate audit results by issue and industry to guide their decisions. Some of these staff may be sufficiently experienced to find returns that would be productive to audit. However, the audit results in fiscal year 1994 showed that more returns were audited without any recommended tax changes or with lower amounts of recommended tax per audit hour than in fiscal year 1988. 
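The kind of criteria-based scoring IRS applies to CEP returns can be sketched as a simple function over return characteristics. The criteria, weights, and cutoff below are hypothetical illustrations, not IRS’ actual CEP formula; the sketch only shows how objective criteria could replace case-by-case judgment in classifying non-CEP returns.

```python
# Hypothetical sketch of scoring corporate returns for audit selection.
# The criteria, weights, and cutoff are illustrative assumptions, not
# IRS' actual CEP scoring formula.

def score_return(assets, gross_receipts, num_subsidiaries):
    """Score a corporate return; higher scores suggest higher audit potential."""
    score = 0
    if assets >= 250_000_000:          # large asset base
        score += 3
    elif assets >= 50_000_000:
        score += 2
    if gross_receipts >= 100_000_000:  # substantial income
        score += 2
    if num_subsidiaries >= 10:         # complex corporate structure
        score += 2
    return score

# Returns scoring at or above a cutoff would be classified for audit.
candidates = [
    ("Corp A", score_return(300_000_000, 120_000_000, 15)),
    ("Corp B", score_return(10_000_000, 5_000_000, 1)),
]
selected = [name for name, s in candidates if s >= 5]
print(selected)  # only Corp A meets the cutoff
```

In practice, the cutoff and weights would be tuned against historical audit results so that selected returns had demonstrated audit potential.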
National Office Examination officials have expressed similar concerns about their selection and classification system for large corporate audits. They established a task force to develop a more structured system, but budget constraints have stalled the task force’s efforts. In lieu of the task force, the National Office is testing the benefits of providing additional information on a corporation, such as Securities and Exchange Commission (SEC) reports, to the revenue agent reviewing the corporate return. Examination is also testing potential improvements to the classification system; none of the tests are far enough along to have usable results. Selected IRS districts are testing classification of returns by market segment. Also, IRS is developing the Examination Operational Automated Database in an attempt to capture audit results by issue and industry. Examination officials believe that this database could be used to enhance any selection and classification system by providing feedback on tax issues (e.g., unreported gross receipts, overstated travel expenses) by industry (e.g., manufacturing, wholesale trade) that have proven to be productive to audit. That is, IRS could identify issues and industries in which audits generated more recommended taxes per audit hour. Because it would track such audit results, Examination officials believed that this database would be particularly helpful in classifying audit issues. These officials said IRS already had most of the necessary hardware and software. They estimated that enhancements in fiscal year 1997 would cost about $320,000 and that administrative costs would average a staff year per district. This system is being tested in two IRS districts and is expected to be operational by the end of calendar year 1998. Further, IRS officials from some districts with groups specializing in audits of large corporations told us such groups have helped improve the return selection and classification processes at these districts. 
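The feedback loop that officials envisioned for the Examination Operational Automated Database can be illustrated with a short sketch that ranks issue/industry combinations by recommended tax per audit hour. All figures below are invented for illustration; the database itself, not this code, is IRS’ planned mechanism.

```python
# Illustrative sketch of the feedback the Examination Operational Automated
# Database is intended to provide: recommended tax per audit hour by tax
# issue and industry. All figures are invented examples.

closed_audits = [
    # (issue, industry, recommended tax in dollars, audit hours)
    ("unreported gross receipts",  "manufacturing",   900_000, 1_500),
    ("overstated travel expenses", "wholesale trade", 120_000,   800),
    ("unreported gross receipts",  "wholesale trade", 450_000,   500),
]

# Accumulate taxes and hours for each issue/industry combination.
totals = {}
for issue, industry, tax, hours in closed_audits:
    key = (issue, industry)
    t, h = totals.get(key, (0, 0))
    totals[key] = (t + tax, h + hours)

# Rank combinations by recommended tax per audit hour, highest first.
ranked = sorted(
    ((tax / hours, key) for key, (tax, hours) in totals.items()),
    reverse=True,
)
for rate, (issue, industry) in ranked:
    print(f"{issue} / {industry}: ${rate:,.0f} per audit hour")
```

With such a ranking, classifiers could steer audit resources toward the issue/industry combinations that had proven most productive.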
These groups can improve not only the selection process but ultimately the productivity of these corporate audits. For example, in one district, an Examination official told us that while the overall percentage of audits closed with no additional tax recommended was about 10 percent, the rate within the specialized group was only about 3 percent. Such no-change audits can result in ineffective use of IRS’ as well as the corporations’ resources. During audits, revenue agents may question items on the return, such as income, deductions, or credits. If a corporation cannot provide adequate information as support, the revenue agent may adjust the items, which usually results in additional taxes being recommended. Both the revenue agents and large corporations contributed to problems in obtaining such information. Not having the information hindered IRS’ ability to do effective audits and support tax recommendations. Appeals and Counsel officials in all four districts we visited told us that revenue agents do not always have adequate information to support recommended taxes. Taxpayers provided information to Appeals that had not been provided to the revenue agents in an estimated 53 percent of the disputed audits. Appeals officers for some of the audits noted that revenue agents had provided insufficient information to justify their development of an audit position. For example, appeals officers for an estimated 27 percent of the disputed audits indicated that not all of the top three dollar issues had been fully developed by the revenue agents during the audit. Examination and Appeals officials told us that some corporate taxpayers did not always provide requested information in a timely manner, if at all. Corporations can have difficulty providing information when IRS’ requests are vague, for old data, or made late in the audit. 
On the other hand, corporations have little incentive to provide all information, particularly if it will lead revenue agents to make adjustments or to audit other areas on the tax return. IRS officials we interviewed believed that problems in obtaining all the information needed to support tax recommendations were becoming more prevalent. Examination, Appeals, and Counsel officials said agents should ensure that they have adequate information to support tax recommendations. They also expressed the opinion that the recently formed specialized groups can increase managerial and counsel involvement in helping revenue agents obtain the information needed to support their recommended taxes. They noted that these group managers, when involved, were usually able to help agents obtain requested information from taxpayers. Counsel officials told us that their involvement, including discussing and issuing summonses when needed, could help secure information. They noted that revenue agents need to make information requests early in the audit so that the summons process, if needed, can begin as soon as possible, enhancing its effectiveness. IRS generally uses a summons as a last resort, meaning IRS has tried all other administrative means of obtaining requested information. Although used infrequently, a summons can prompt large corporations to provide the requested information. If it does, the investment in time and money can prove to be worthwhile compared to spending time awaiting information that may not be received. During the appeals process for large corporate audits, coordination between Appeals and Examination was limited. Appeals generally did not share with Examination new information from large corporations. Sharing this information would give revenue agents the opportunity to review it and provide their comments to Appeals before the settlement. 
After the final settlement, Examination did not always distribute Appeals’ summary of that settlement to its revenue agents. Our work showed that such limited coordination resulted from insufficient requirements and incentives to coordinate. Although Appeals’ independence in settling tax disputes is critical, limited coordination between the two functions can hinder IRS’ efforts to reach a balanced settlement as well as to improve future audits. Appeals officers for an estimated 25 percent of the disputed audits indicated they had no interaction with revenue agents while resolving the disputed tax issues. Appeals and Examination officials have acknowledged such limited coordination overall. An Appeals task force draft report cited Examination’s concerns about the current Appeals process not providing Examination with an opportunity to present its views on key issues prior to resolution. Knowing that large corporations usually have unlimited access to the appeals officer to discuss the dispute, Examination officials said limited involvement and coordination with Appeals creates the appearance that the government’s interest is not fairly represented and that the Appeals process is not balanced. This appearance of bias can be aggravated when an appeals officer does not share with Examination staff new information provided by large corporations. Appeals officers for an estimated 53 percent of the disputed audits in our study population indicated that large corporations provided additional factual information for at least one of the top three dollar issues. However, the appeals officers asked Examination to review the new information in 139, or an estimated 43 percent, of those disputed audits in which corporations provided new information. Revenue agents reported a similar lack of coordination. They indicated that Appeals asked them about new information in only an estimated 17 percent of all disputed audits. 
Neither we nor IRS knows whether the appeals officers should have shared the new information in these cases. Our CEP work indicated that CEP corporations are more likely to win disputes when they provide information to Appeals that Examination has not had the opportunity to review. In addition, Examination officials told us that Appeals seldom shared the proposed settlement with Examination so that revenue agents could have one last look at how the dispute was to be settled and whether any new information played a part. National Office Examination officials told us in February 1997 that they do not believe it is realistic for Appeals to share proposed settlements in every case. However, Examination wanted the opportunity to review and discuss new information submitted after the audit closed. Two reasons help explain this limited sharing with Examination staff. First, although IRS does require appeals officers to share significant new information with Examination, IRS left the definition of “significant” to the discretion of each appeals officer, recognizing that sharing all new information would not be realistic. Given the uncertainty over this requirement, Appeals could not ensure that the significant information had been shared. Without a definition of significant and without adequate controls to ensure that all significant new information is shared, neither we nor IRS knew whether the appeals officers involved with our study population had met the requirement for sharing significant new information. Also, IRS did not require Appeals to share its proposed settlements with Examination. Second, the limited sharing partially resulted from the differing roles and incentives driving the work of Examination and Appeals. Appeals Chiefs we interviewed said they encourage appeals officers to involve the revenue agents in reviewing new information but advised their appeals officers to be conscious of the time and costs to do so. 
That is, if the appeals officers believe they can review the information in a shorter period of time than a revenue agent can, the appeals officers generally should review it themselves. Our interviews with Examination officials also indicated that many revenue agents have little incentive to spend time reviewing new information on a case that Examination has already closed. Further, both Appeals and Examination officials at the National Office said that sharing all new information would be unnecessary and too time-consuming. In February 1997, these Appeals officials told us they believed much of the new information submitted by taxpayers was not significant. Regardless, sharing significant new information, especially that relating to issues that may not be sustained, would help IRS to maintain its designed separation of duties: revenue agents could audit the new information, and appeals officers could focus on settling the entire dispute. To this end, our 1994 CEP report recommended that IRS improve controls to ensure that Appeals provides CEP teams an opportunity to comment on proposed settlements. IRS disagreed at the time, but Appeals subsequently proposed a procedure to promote better communication with Examination and better settlement of key issues in CEP cases. Under that proposal, Examination could identify five key issues in a case nearing settlement and Appeals would not settle the key issues until it had considered feedback from Examination. This way, Examination would have the opportunity to review the proposed settlement and advise Appeals of any significant facts, laws, or other factors that may need further consideration. According to many Examination and Appeals officials we interviewed in the districts, allowing Examination to provide this input could add balance to the appeals process without adversely affecting Appeals’ independence. 
The proposed procedure also could help ensure that Appeals provides Examination with significant new information that taxpayers submit and an opportunity to comment just prior to settling a case. Recognizing that taking these steps could involve some additional time, both Examination and Appeals officials told us during our field visits in early 1996 that the steps were worth taking. However, in November 1996, National Office Appeals officials told us that IRS had recently decided not to implement testing of this proposed procedure because of concerns by both Appeals officials and large corporations that such a procedure could impede Appeals’ ability to independently settle tax disputes. Even so, these Appeals officials said that Appeals’ independence would not necessarily have to suffer under this proposal. Regarding final settlements, Appeals has a procedure for sending a copy of the final written summary to Examination, but Examination has no process in place to ensure that this feedback reaches the appropriate revenue agent. Revenue agents indicated that they received the written summary in an estimated 61 percent of the disputed audits. Examination officials and revenue agents told us that this summary can provide insights on why a recommended tax adjustment was or was not sustained on appeal. For example, the summary typically discusses the reasons for settling the disputes, such as hazards of litigation. Without knowledge of significant facts or laws followed in the settlement, the revenue agents lose an opportunity to learn about the types of tax issues involved in the case and the support needed to sustain future tax disputes. In summary, Appeals attempts to provide large corporations with a review of their tax disputes that is independent of Examination or other IRS functions before these corporations decide whether to litigate. 
However, both Examination and Appeals officials told us that increased coordination and communication could help to improve their working relationship and to correct the appearance of imbalances during appeals without reducing Appeals’ independence. To illustrate this point, a Regional Chief Compliance Officer told us about the need for more balance whenever large corporations withhold information during the audit but provide that information to Appeals. Examination Chiefs told us more interaction would afford an opportunity for their agents to better explain their recommended taxes as well as any difficulties they may have had in obtaining information to support their recommendations. Our analysis of questionnaire responses and interviews with IRS officials identified at least four factors that contributed to the low assessment rate or decline in audit results from 1988 to 1994. First, complex tax laws impeded revenue agents’ efforts to determine the correct tax liability and appeals officers’ efforts to fairly settle tax disputes. Second, differing performance measures prompted revenue agents to recommend as much tax as soon as possible and appeals officers to settle tax disputes without litigation as soon as possible. We recommended in our 1994 report that IRS more strongly propose legislative changes to reduce tax law complexity and consider cross-functional measures, such as the collection and/or assessment rate. IRS is taking action on both of these recommendations. We make no new recommendations on these issues because our 1994 recommendations can also apply to audits of other large corporations. Third, various aspects of the audit process impeded revenue agents’ ability to develop recommended taxes that can survive appeals. IRS recognized these aspects but faced constraints in surmounting them. Budget pressures limited the use of team auditing to compensate for agents’ lack of expertise in auditing large corporations. 
The broad and complex nature of tax administration complicated efforts to carve out more time for group managers and district counsels to formally assist revenue agents—who often work alone without much assistance. Revenue agents viewed such assistance, whether formal or informal, as helpful in identifying and discussing audit issues, requesting corporate information, and pursuing requests that have not been answered. Further, IRS initiated efforts, such as a task force to study ways to improve return selection and classification, but these efforts stalled due to budget constraints. Some IRS districts have taken a step that could address many of these problems. They have combined senior revenue agents and managers into groups that specialize in large corporate audits. Examination officials in districts that created these groups believed that their initial experiences indicated that the groups helped improve return selection and classification, information gathering, and audit productivity. They also believed that the groups allowed managers and agents to share knowledge and assistance in a focused, timely way. However, the districts generally had limited data on the actual impacts of these groups, and IRS’ National Office has not provided criteria or oversight to guide the measurement of the impacts. National Office Examination officials said they would like to learn about the impacts of these groups across the districts. Fourth, the Appeals and Examination functions did not always share information. Unlike CEP teams that have an ongoing audit presence, revenue agents who audit these large corporations move on to other audits. We recognize that sharing all information would not be realistic; however, Appeals could inform Examination officials of any new information that would cause the appealed issues to not be fully sustained. Doing so would help IRS to maintain the intended separation of duties. 
Examination could have an opportunity to audit the new information and Appeals officers could then focus on their responsibility for settling the entire dispute. After a dispute was settled, Examination did not have a system for regularly sharing Appeals’ summaries of the final settlements with revenue agents. Knowing about the final settlement could help agents to learn about and support tax issues that could sustain appeals. For any form of enhanced sharing, maintaining Appeals’ independence would be paramount. In recommending improvements, we tried to recognize the costs and constraints to IRS. Most of our recommendations will entail limited costs. For example, providing more specific, objective guidance and criteria on return selection need not be an expensive proposition, particularly if the new database on audit results helps to identify the types of large corporations and tax issues that have proven productive to audit. The use of more informal legal assistance would create some costs, but that assistance could be provided more quickly and at less cost than formal assistance. Further, providing more structure and guidance to districts on evaluating the impacts of the specialized audit groups should not cost much and could provide big dividends if IRS had more certainty about the impacts of these groups on the productivity of large corporate audits. Appeals’ sharing of significant new information with Examination could add some time to resolving the disputes, but that investment should be worthwhile if the revenue agents learn how to do better audits or help to determine the correct tax liability. Even if some costs increase, the accompanying improvements should help IRS to better invest its limited enforcement funds in trying to ensure that large corporations are paying the correct amount of taxes. 
To improve the audits of tax returns filed by large corporations, we recommend that the IRS Commissioner
- provide more specific, objective criteria and procedures to guide the selection of large corporate tax returns and classification of tax issues with high audit potential across the districts;
- develop criteria and procedures to guide the evaluation across the districts of the impacts of groups specializing in audits of large corporations;
- encourage District Examination management to work with District Counsel officials on finding cost-effective ways to provide revenue agents with the necessary legal assistance;
- require Appeals to notify Examination of new information received from a large corporation that could cause the appealed issues to not be fully sustained, and require Examination to (1) indicate whether it wishes to review the new information and, if so, (2) review the information and notify Appeals of the results of the review as soon as possible; and
- require Examination management to provide feedback to its revenue agents on the final settlements that Appeals reaches with large corporations.
We obtained comments on a draft of this report in a meeting on February 20, 1997, with IRS officials who represented you. These officials included a representative of the Commissioner’s Office of Legislative Affairs, a representative of the Chief of Staff to the Assistant Commissioner of Examination, representatives of the Large Business Examination Programs, and representatives of the National Director of Appeals. In general, they agreed with our findings and conclusions and provided a few technical comments on specific sections of the draft. We have incorporated these comments, such as on additional training funds for revenue agents, Appeals’ discretion to share significant new information, and performance measures, in the sections of the report where appropriate. As for our five recommendations, IRS agreed to implement four, as discussed below. 
First, IRS officials said they have already started to analyze closed large corporation audits to develop an objective system for better classifying and selecting large corporation returns to audit. IRS plans to begin testing this system in selected districts within each IRS region by the summer of 1997 and to implement it by the end of 1998. Second, IRS officials said they plan to develop criteria and procedures to guide the evaluation of the district groups that specialize in audits of large corporations. IRS hopes to finish these actions during 1998. Third, IRS officials said they plan to issue an IRS-wide memo by May 1997 to encourage district Examination management to work with District Counsel officials on finding cost-effective ways to provide revenue agents with the necessary legal assistance, including the use of field service advice and technical advice memoranda. Fourth, IRS Examination management said it plans to change the Internal Revenue Manual to require that revenue agents be provided with feedback on Appeals’ final settlements with large corporations. Because the next series of changes to the Manual will not be done until the end of fiscal year 1997, Examination officials plan to issue a memorandum on this requirement during May 1997. IRS officials did not agree to implement the fifth recommendation that would require Appeals to share its proposed settlements with Examination so that Examination could see whether the large corporation provided new information that affected the settlement. Examination officials said they want to see significant new information, but requiring Appeals to share all proposed settlements may be too formalized and too strong a process for obtaining the new information. Appeals officials expressed concern that sharing proposed settlements could create perceptions that Appeals’ settlement authority would be subject to an Examination veto. 
This perception could prompt large corporations to close off Examination’s reinvolvement by taking the dispute to court. They also believed that this sharing would add significant time to the settlement process and usually would not change the final settlement. Finally, they believed that reinvolving Examination could produce an adversarial relationship to the extent that appeals officers felt pressured to justify their settlement proposals. We also asked TEI to provide comments on the same draft report. We met with TEI officials on February 21, 1997, to obtain their comments. They also supported or had no opposition to the same four recommendations that IRS agreed to implement. Although we made no recommendations on these topics, they supported creating an IRS-wide performance measure and more training for revenue agents as well as applying CEP processes to non-CEP audits. Like IRS, they expressed concerns with the recommendation on sharing proposed settlements with Examination so that it could see how new information affected the settlements. They also expressed the concern that sharing the proposed settlement may prompt Examination to go beyond the new information and try to re-audit other issues. In recommending that Appeals share proposed settlements to allow Examination to see whether new information significantly affected the settlement, we did not intend to undercut Appeals’ settlement authority or grant Examination veto power over settlements; in fact, our draft report pointed to the importance of retaining Appeals’ independence in settling disputes. Thus, we did not envision that the act of sharing would require a highly formalized process or much time in the majority of cases. Rather, our intent was, and still is, to provide an inducement for appeals officers as well as large corporations to share significant information with Examination. 
We believed that some control or check was needed to better ensure that Examination had the opportunity to play its appropriate role in reviewing information to determine the correct tax liability and protect the government’s revenue. We intended that the requirement to share would provide a control over the appeals officers’ use of discretion in judging the need to share new information. We also intended that this requirement would send a signal that large corporations cannot intentionally bypass the audit process by providing new information to appeals officers during negotiations over tax liability. Our focus on the need for a control stems from responses to our questionnaires and from our interviews with district office officials during 1996. Although Examination officials recognized that communication with Appeals has been improving, Examination officials and staff still pointed to instances in which they did not have a chance to review significant new information that a large corporation had provided to Appeals. In some cases, they noted that they had asked for similar information during the audit. Even so, we understand the concerns expressed by Appeals and TEI officials about sharing the significant new information through the proposed settlements. We discussed several other ways to address the concerns and still have IRS provide a control over Appeals’ sharing of new information with Examination. These discussions prompted us to change our recommendation on how to better ensure that Examination has an opportunity to review the new information. Under our changed recommendation, Appeals would notify Examination as soon as possible after a large corporation provided new information that could cause the disputed issues to not be fully sustained. Upon notification, Examination could choose to do nothing, ask for details, or ask to review the information. 
Examination and Appeals would need to develop procedures on how much time Examination has to request and review the information, how the information would be shared, how extensive the review would be, and how the results of the review would be communicated to Appeals. We believe that this option would provide Examination the opportunity to fulfill its intended roles—determine the correct tax liability and protect the government’s revenue—while mitigating the concerns raised by Appeals and TEI. As we envision it, this recommendation would not delay or disrupt many final settlements because the information would be shared soon after being received. One exception, of course, would be if the information was significant enough and the review was revealing enough to change the settlement that the appeals officer would have made without Examination’s involvement. Even with this exception, settlement authority would still rest with the appeals officers. This report contains recommendations to you. As you know, the head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on the recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this letter. A written statement also must be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this letter. Copies of this report are being sent to the Chairmen and Ranking Minority Members of the House Committee on Ways and Means and the Senate Committee on Finance, various other congressional committees, the Director of the Office of Management and Budget, the Secretary of the Treasury, and other interested parties. We also will make it available to others upon request. Major contributors to this report are listed in appendix VI. 
Please contact me at (202) 512-8633 if you or your staff have any questions about this report. This appendix describes how we identified our universe of large corporate audits closed agreed in Examination or Appeals during fiscal year 1994 and our sampling methodology. In addition, it discusses our methodology for developing and administering questionnaires to IRS audit and Appeals staff and taxpayers for our sample. To send questionnaires to IRS audit and Appeals staff and taxpayers, we identified a universe of corporate taxpayers related to corporate audits closed agreed in Examination or Appeals during fiscal year 1994. We chose fiscal year 1994 for two reasons. First, it provided us with the most recent cases closed agreed in Examination or Appeals. Second, IRS revenue agents, appeals officers, and taxpayers would be more likely to recall specific case information on the most recently closed cases. Our computer analysis of IRS’ databases identified a total population of 1,266 audits closed in fiscal year 1994 with $75,000 or more in additional taxes recommended. Table I.1 shows the division of the 1,266 audits by additional taxes recommended. We determined that a survey of the revenue agents, appeals officers, and taxpayers associated with a nationally representative, stratified random sample of 500 audits would be sufficient to accomplish our objective. The sample is divided into six strata based on the assessment rate and the amount of additional taxes recommended. Since those audits with the greatest amount of dollars recommended have the greatest effect on the assessment rate, the sample includes a relatively large number of the larger dollar cases. 
We included in our sample all of the 117 audits with $3 million or more in additional taxes recommended; 133 of those audits with between $1,000,000 and $2,999,999 in additional taxes recommended; 120 of those audits with between $300,000 and $999,999 in additional taxes recommended; and 130 of those audits with between $75,000 and $299,999 in additional taxes recommended. The 500 cases in our sample accounted for $2.3 billion, or 88 percent, of the total $2.6 billion in additional taxes recommended in our population. Similarly, the $648 million in additional taxes recommended that were assessed accounted for 80 percent of the total $810 million assessed from the corporate audits shown in table I.1. In the study analyses, the sample selections have been properly weighted to represent the total population of 1,266 audits with $2.6 billion in recommended additional taxes. Because group managers are responsible for a large number of audits of different entities, not just corporations, we sampled these managers without respect to their involvement in any particular audit. To do this, we asked the 63 district offices to identify all group managers having large corporate audits in their inventories as of August 1995. The districts identified 555 group managers meeting this criterion. From this universe we randomly selected a sample of at least a third of the group managers at each of the 63 district offices. This resulted in a total sample of 215 group managers. In our analyses, the 215 sample selections have been properly weighted to represent the total population of 555 group managers. We developed four mail-out questionnaires to obtain the views of IRS revenue agents, appeals officers, group managers, and corporate taxpayers on the factors affecting the audit and appeals processes, such as obtaining needed information, the effect of the tax laws, and the interaction between Appeals, Counsel, and Examination staff involved with these audits. 
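The stratified design described above implies a simple weighting rule: each sampled audit in a stratum stands in for N_h/n_h audits of that stratum's population. The sketch below illustrates the arithmetic in Python. It is a minimal sketch, not the report's actual computation: the sample sizes and the 1,266 total are from the report, but the population counts for the three non-certainty strata are hypothetical stand-ins for the figures in table I.1, and the report's six strata (which also split on assessment rate) are collapsed here to the four dollar bands.

```python
# Sketch of the stratified-sample weighting described above.
# Population counts for all but the 100%-sampled top stratum are
# hypothetical; the report's table I.1 holds the real figures.
strata = {
    # name: (population_size N_h, sample_size n_h)
    "$3M or more": (117, 117),   # certainty stratum: all 117 audits sampled
    "$1M-$2.999M": (260, 133),   # N_h hypothetical
    "$300K-$999K": (380, 120),   # N_h hypothetical
    "$75K-$299K":  (509, 130),   # N_h hypothetical
}

def sampling_weight(population, sample):
    """Base weight: each sampled audit represents N_h / n_h audits."""
    return population / sample

weights = {name: sampling_weight(N, n) for name, (N, n) in strata.items()}

def weighted_total(samples):
    """Population estimate from sampled values.

    samples: list of (stratum_name, dollars) pairs; each sampled
    audit's dollars are multiplied by its stratum weight."""
    return sum(weights[name] * dollars for name, dollars in samples)

total_population = sum(N for N, _ in strata.values())
assert total_population == 1266  # matches the report's population
print(weights["$3M or more"])  # 1.0 -- the certainty stratum represents only itself
```

Weighting every response by N_h/n_h is what lets a 500-audit sample represent all 1,266 audits, and it is why deliberately oversampling the high-dollar strata does not bias the population estimates.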
We pretested the questionnaires on several separate occasions. We tested the revenue agent and group manager questionnaires in the Baltimore, Chicago, and St. Louis District Offices; the appeals officer questionnaire in the Baltimore and St. Louis Appeals Offices; and the taxpayer questionnaire in the St. Louis District Office. In addition to these pretests, we asked National Office Examination and Appeals officials to review all questionnaires for IRS staff for technical accuracy. We asked Tax Executives Institute (TEI) officials to review the taxpayer questionnaire for technical accuracy. On the basis of comments received from both IRS and TEI, we made changes to the questionnaires as appropriate. In August 1995, we sent letters to IRS’ 63 district offices requesting the names and addresses of the revenue agents responsible for the 500 corporate audits in our sample. We also asked the districts to provide the names and addresses of their group managers who had corporate income tax audits in their inventories as of that date. In addition, we requested from the National Appeals Office the names and addresses of the appeals officers who considered any tax disputes involving any of our sample cases. We initially mailed the revenue agent and group manager questionnaires in October 1995 and sent follow-up questionnaires in November 1995. We initially mailed the appeals officer questionnaires in November 1995 and sent follow-up questionnaires in December 1995. We initially mailed the taxpayer questionnaires in January 1996, with follow-up questionnaires sent in February 1996. Table I.2 shows the response rate and disposition of the initial sample selection by type of questionnaire. Questionnaire results for the revenue agent, group manager, and appeals officer questionnaires are presented in appendixes II, III, and IV, respectively. 
Results from the taxpayer questionnaire are not presented or used in this report because of the low response rate. Because the survey results come from samples, all results are estimates that are subject to sampling errors. We calculated sampling errors for all of the survey results presented in this report. These sampling errors measure the extent to which samples of these sizes and structures can be expected to differ from their total populations. Each of the sample estimates is surrounded by a 95-percent confidence interval, which indicates that we are 95-percent confident that the results for the total population fall within this interval. In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, in the sources of information available to respondents, or in the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in our audit to minimize such nonsampling errors. For example, we carefully pretested the questionnaires and made follow-up mailings to people who did not initially respond.

[Appendix II reproduces the revenue agent questionnaire. The first matrix shown asks: Which specialist(s) assisted on this audit? Did the specialist(s) provide you timely assistance? How did the specialist(s) affect Exam's ability to obtain the taxpayer's agreement on these issues?]
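The 95-percent confidence intervals described above can be illustrated with the standard normal-approximation formula for an estimated proportion. This is a minimal sketch under simplifying assumptions: it omits the stratification and finite-population corrections that GAO's actual sampling-error calculations would reflect, and the pairing of a 35.2-percent response with an n of 1,255 is purely illustrative.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95% confidence interval for an estimated proportion,
    using the normal approximation to the sampling distribution."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of p_hat
    return p_hat - z * se, p_hat + z * se

# Illustrative only: a 35.2% response with n = 1,255 yields an
# interval of roughly +/- 2.6 percentage points.
low, high = proportion_ci(0.352, 1255)
```

The interval narrows with the square root of n, which is why the certainty stratum, sampled at 100 percent, contributes no sampling error to the estimates at all.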
Responses on the specialist(s) used:
(1) 36.3% Engineer; 31.8% International; 11.2% Issue/Industry; 10.3% CAS; 3.3% Valuation; 1.4% Economist; 5.5% Other. Timely assistance: 89.9% Yes; 9.1% No; 1.0% Don't know. Effect on obtaining agreement: 24.8% Very positively; 29.0% Positively; 32.2% Neither positively nor negatively; 7.1% Negatively; 5.1% Very negatively; 1.8% Don't know.
(2) 38.3% Engineer; 33.5% CAS; 10.9% Employee plan; 6.7% Issue/Industry; 4.4% International; 2.5% Economist; 3.6% Other. Timely assistance: 89.0% Yes; 11.0% No; 0.0% Don't know. Effect on obtaining agreement: 25.9% Very positively; 22.8% Positively; 29.8% Neither positively nor negatively; 4.8% Negatively; 9.0% Very negatively; 7.8% Don't know.
(3) 27.7% Issue/Industry; 24.0% International; 15.6% CAS; 8.4% Engineer; 8.4% Valuation; 16.0% Other. Timely assistance: 84.4% Yes; 15.6% No; 0.0% Don't know. Effect on obtaining agreement: 32.7% Very positively; 14.4% Positively; 45.7% Neither positively nor negatively; 7.2% Negatively; 0.0% Very negatively; 0.0% Don't know.

22. Was the specialist's manager involved in this audit? (CHECK ONE BOX.) How satisfied or dissatisfied were you with the specialist(s) manager's involvement, or lack of involvement, on this audit? (CHECK ONE BOX.) 35.1% Neither satisfied nor dissatisfied; 11.9% Don't know. If you were dissatisfied with the specialist(s) manager's involvement, please explain why. N=509

VI. YOUR MANAGER'S INVOLVEMENT IN AUDITS

23. Was your manager involved in the following on the audit shown on page 1 of this questionnaire? In your opinion, how did his/her involvement, or lack of involvement, positively or negatively affect the effectiveness of this audit? (CHECK TWO BOXES IN EACH ROW.) N=1,189

VII. OTHER AUDIT RESOURCES

24. In your opinion, did you receive adequate resources in the following areas? If not, please indicate to what extent, if at all, the lack of these resources negatively affected your ability to develop all identified issues. (CHECK AT LEAST ONE BOX IN EACH ROW. IF YOU ANSWERED "NO" TO THE FIRST PART, THEN ANSWER THE SECOND PART. IF YOU ANSWERED "YES" OR "NOT NEEDED" TO THE FIRST PART, THEN GO TO THE NEXT LINE.)

INFORMATION REQUESTED FROM THE TAXPAYERS

25. Which of the following did you use in obtaining information from the taxpayer or the taxpayer's representative (i.e., power of attorney)? (CHECK ALL THAT APPLY.)
Verbally requested information from the taxpayer or the taxpayer's representative
Discussed obtaining third-party information with the taxpayer or the taxpayer's representative
Discussed a summons with the taxpayer or the taxpayer's representative
Issued a summons to the taxpayer or the taxpayer's representative
Other (Specify)

26. Regarding the information you requested from the taxpayer or his/her representative for this audit, how satisfied or dissatisfied were you with the following? (CHECK ONE BOX IN EACH ROW. DO NOT CHECK IN THE SHADED AREA.)

If no, did the missing information prevent you from proposing certain adjustments? (CHECK ONE BOX.) 15.0% Don't know; 2.1% Don't know. N=1,254

IX. CASE CLOSURE INFORMATION

28. How satisfied or dissatisfied were you with the length of time it took to complete this audit? (CHECK ONE BOX.) Please explain your dissatisfaction.

Which of the following reasons best describes why this audit closed out after the expected completion date? (CHECK ONE BOX.) 51.8% Not applicable (the audit was completed in a timely manner); 0.5% IRS delays in beginning audit; 3.8% IRS staff/specialists not available when needed; 2.5% Taxpayer or taxpayer representative not available; 17.4% Taxpayer delays in responding to information requests; 8.4% Exam work took longer than anticipated; 15.5% Other (Specify). N=1,119

30. How satisfied or dissatisfied were you with Exam's emphasis on attempting to obtain more agreements with taxpayers on proposed adjustments at the lowest level? (CHECK ONE BOX.) Please explain your dissatisfaction.

31. In your opinion, how did the overall outcome of this audit for this taxpayer affect their compliance with the tax laws since this audit? (CHECK ONE BOX.) 34.5% Taxpayer became more compliant; 1.8% Taxpayer became less compliant; 42.7% No basis to judge.

32. Taking into consideration IRS' corporate audit environment and your district's policies and procedures at the time of this audit, to what extent, if at all, were you able to sufficiently do the following on this audit? (CHECK ONE BOX IN EACH ROW.)
a. Identify balance sheet and Schedule M issues
b. Probe for unallowable …
… returns in this taxpayer's industry
d. Examine corporation's books
… your position on issues in written reports (e.g., RAR or written response to taxpayer's protest)
g. Compute the corporate tax
h. Other (Specify)
N=1,238
Please comment on factors that you believe adversely affect your ability to do the above items. If the policies and/or procedures have changed, briefly discuss the change(s) and its effect. (ATTACH ADDITIONAL SHEETS IF NECESSARY.)

X. TAXPAYER'S PROTEST OF ADDITIONAL TAXES RECOMMENDED BY EXAM

33. After this audit closed from Exam, did Appeals consider any disputed tax issues? (CHECK ONE BOX.) (Yes: continue with question 34.) 1.9% Yes, as a result of a statutory 90-day letter. (No: skip to question 42.) 7.1% Don't know.

34. Was a written response to the taxpayer's protest provided to Appeals? (CHECK ONE BOX.) 1.5% Not applicable (no protest filed)

35. For any disputed issues from this audit, did the following factor(s) cause the taxpayer to disagree? (CHECK ONE BOX IN EACH ROW.)
a. The interpretation of the law
b. The facts of the case
c. The Appeals settlement on a prior case for this taxpayer
d. The Appeals settlement for a …
e. Other (Specify)
f. Other (Specify)
N=393

36. In order to consider the relevant facts in this case, did you discuss the disputed issues with Appeals? (CHECK ONE BOX.)

37. Did Appeals ask you about any of the following? (CHECK ONE BOX IN EACH ROW.)
a. The facts relevant to the disputed issue(s)
b. Your legal position on the disputed issue(s)
c. Records provided to Appeals by the taxpayer
d. Information in the unagreed report, 90-day letter, or the written response to the protest
e. Alternative positions proposed by the taxpayer during …
f. Other (Specify)

38. Did you receive the following feedback on Appeals' final resolution? If yes, to what extent, if at all, did this feedback help you understand how Appeals resolved the disputed issues? (CHECK AT LEAST ONE BOX IN EACH ROW. IF YOU ANSWER "YES" TO THE FIRST PART, THEN ANSWER THE SECOND PART. IF YOU ANSWER "NO" TO THE FIRST PART, THEN GO TO THE NEXT LINE.)
a. Exam contacted Appeals to …
b. Appeals provided Exam the Appeals Case Memorandum or supporting statement
c. Appeals contacted Exam after they resolved the disputed issues
d. The taxpayer told Exam …
e. Other (Specify)
N=366
If you did not receive any feedback on Appeals' final resolution of this audit, please skip to question 42.

39. Based on Appeals' resolution of disputed issues in this case, to what extent, if at all, were potential issues (a) dropped on your cases in-process or (b) not raised on future audits you were assigned? (CHECK ONE BOX IN EACH ROW.)
a. Potential issues dropped on cases in-process (N=367)
b. Issues not raised on future audits (N=372)

40. In general, in your opinion, to what extent, if at all, does Appeals' final resolution of disputed issues cause Exam to alter the way it develops similar issues on future audits of either the same taxpayer or different taxpayers? (CHECK ONE BOX IN EACH ROW.)
a. Similar issues on future audits for the same taxpayer
b. Similar issues for different taxpayers

41. Taking everything into consideration, what is your opinion on the quality of Appeals' overall resolution of disputed issues on these corporate income tax returns? (CHECK ONE BOX.) If you believe the quality of Appeals' resolutions is poor or very poor, please explain your response.

GENERAL QUESTIONS AND COMMENTS

42. In your opinion, how positively or negatively do each of the following factors affect the amount of additional taxes recommended by revenue agents that are ultimately assessed? (CHECK ONE BOX IN EACH ROW.)
a. The revenue agent's workload
b. The revenue agent's group/case manager's workload
c. The revenue agent's skills and knowledge
d. The complexity of the tax laws
e. Appeals resolution of disputed issues from a prior audit of this taxpayer
f. Appeals resolution of disputed issues from a different taxpayer
g. Other (Specify)

43. In your opinion, to what extent, if at all, do audits of large corporations unreasonably burden those taxpayers selected for audit? (CHECK ONE BOX.) 0.3% To a very great extent; 2.5% To a great extent; 27.1% To a moderate extent; 34.9% To some extent; 35.2% To a little or no extent. N=1,255

44. Please use the space below to provide any additional comments about this case or IRS' audit and appeals processes for these large corporate taxpayers. You may attach additional sheets if necessary.

Thank you for your assistance. 
Please return the questionnaire in the pre-addressed envelope to: U.S. General Accounting Office, Kansas City Regional Office, Attn: Mr. Kirk Boyer, 5799 Broadmoor - Suite 600, Mission, Kansas 66202.

You have been selected to complete this questionnaire due to your involvement with these audits of large corporations. Your response to this questionnaire will help us to identify the factors that affect these audits, both positively and negatively. We cannot develop meaningful information without your frank and honest answers to the questions. GAO will safeguard the privacy of your responses to this questionnaire. They will be combined with those of other respondents and will be reported only in summary form. The control number is included only to aid us in our follow-up efforts. This questionnaire should take about 45 minutes to complete. If you have any questions concerning any part of this survey, please call Mr. Kirk Boyer at (913) 384-7570. Thank you for your assistance.

Please provide us your current work telephone number to assist us if we need to clarify a response.

Do you currently have corporate income tax returns (activity codes 219 to 225) in your inventory?
1. Yes. Please continue with the questions.
2. No. STOP: Do not continue if you do not currently have these corporate tax returns in your inventory. Please return the questionnaire in the enclosed envelope.

1. Please answer the following as it applies to you: (ENTER "00" IF NONE OR UNDER 6 MONTHS.)
a. … in the Examination Division: 20.8 years. N=506
b. Number of years auditing corporations with assets of $10 million or more (activity codes 219 to 225) that are not in CEP: …
d. Number of years auditing CEP: 1.5 years. N=500

2. Which of the following best describes the type of group you currently manage? (CHECK ONE BOX.)
… (mixture of revenue agent grades and/or office auditors)
Specialized General Program group targeted toward large corporate audits
13.8% Other (Specify)

3. How many corporate income tax returns (activity codes 219 to 225) do you currently have in your inventory? (ENTER NUMBER.)

FACTORS RELATED TO AUDITS OF LARGE CORPORATIONS

4. To what extent, if at all, are you currently involved in the following on audits of large corporations? (CHECK ONE BOX IN EACH ROW.) N=503

5. Overall, how satisfied or dissatisfied are you with the following factors related to large corporate audits? (CHECK ONE BOX IN EACH ROW.)
a. Length of these audits
b. Thoroughness of these audits
c. Any out-of-district audit work or …
d. Extent to which other IRS staff (including specialists) adequately developed the issues
e. Timeliness of taxpayers' responses to …
f. Cooperation of taxpayers to provide …
g. Cooperation of taxpayers' representatives to provide information. N=506
h. Overall level of cooperation of …
i. Overall level of cooperation of …
k. Corporate taxpayers' compliance with the tax laws. N=503
l. Other (Specify)

RESOURCES USED TO AUDIT LARGE CORPORATIONS

6. How often, if at all, do you or your revenue agents do any of the following during these corporate audits? (CHECK ONE BOX IN EACH ROW.)
a. Contact the industry coordinator or specialist to discuss any issue related to this taxpayer's primary industry
b. Obtain a position paper on an issue
c. Contact the market segment coordinator to discuss …
d. Obtain or review the market segment audit guide
e. Review a District Office memorandum discussing an issue related to this taxpayer's primary industry
f. Contact revenue agents or group managers in other districts on specialized industries/issues. N=500
g. Other (Specify)

7. Consider the revenue agents you assign to these large corporate audits. How satisfied or dissatisfied are you with their ability to audit large corporations in each of the following areas? (CHECK ONE BOX IN EACH ROW.)
b. Probing for unallowable …
c. Applying the tax laws to …
e. Examining the corporation's …
g. Determining when to request the services of a specialist
h. Determining when to request legal assistance from District Counsel or national office
i. Securing taxpayer agreement
k. Other (Specify). N=18

8. In your opinion, to what extent, if at all, do the revenue agents conducting these audits receive adequate resources in the following areas? (CHECK ONE BOX IN EACH ROW.)
Travel funds for local travel
Travel funds for out-of-district audit work. N=491
Out-of-district assistance or support. N=488
f. Other (Specify)

9. When revenue agents are assigned corporate income tax returns to audit, do you consider each revenue agent's financial interests to determine if any potential conflicts of interest exist? (CHECK ONE BOX.) Please explain how you are made aware of the revenue agents' financial interests. N=485

10. How often, if at all, are any of the following types of assistance used during these large corporate audits? (CHECK ONE BOX IN EACH ROW.)
b. National office technical …
… Counsel assistance other than technical advice
i. Other (Specify)
j. Additional revenue agents while the audit is open. N=482
If you checked box 4 or 5 (10-39% or Less than 10% of the time) anywhere in the above matrix, please explain your response.

11. In your opinion, does the use of IRS specialists positively or negatively affect Exam's ability to obtain the taxpayer's agreement on large corporate audit issues? (CHECK ONE BOX.)

12. In your opinion, to what extent, if at all, does assistance from each of the following improve the development of issues in these large corporate audits? (CHECK ONE BOX IN EACH ROW.)
… Counsel assistance other than technical advice
i. Other (Specify)
… Appeals while the audit is open. N=506

IV. CASE CLOSURE INFORMATION

13. During the past 12 months, how often, if at all, were potential issues dropped by your revenue agents in these audits of large corporations because these taxpayers did not provide all of the requested information? (CHECK ONE BOX.) 0.0% 90-100% of the time; 2.4% 60-89% of the time; 0.6% 40-59% of the time; 12.0% 10-39% of the time; 85.0% Less than 10% of the time.

14. How satisfied or dissatisfied are you with Exam's emphasis on attempting to obtain more agreements with the taxpayers on proposed adjustments at the Exam level? (CHECK ONE BOX.) Please explain your dissatisfaction.

15. How often, if at all, do these large corporate taxpayers provide Exam a written protest of additional taxes recommended by revenue agents? (CHECK ONE BOX.) 50.0% 90-100% of the time; 19.3% 60-89% of the time; 12.7% 40-59% of the time; 13.9% 10-39% of the time; 4.2% Less than 10% of the time. N=500

16. How often, if at all, does your district conduct post-audit critiques on these large corporate audits to determine if the audit standards were met? (CHECK ONE BOX.) 8.4% 90-100% of the time; 6.5% 60-89% of the time; 7.7% 40-59% of the time; 21.3% 10-39% of the time; 56.1% Less than 10% of the time.

V. EXAM'S INTERACTION WITH APPEALS

17. In cases where taxpayers provide new information to Appeals, how often, if at all, does Appeals request Exam to review and verify this information? (CHECK ONE BOX.) 13.7% 10-39% of the time; 10.1% Less than 10% of the time; 28.6% Do not know. If Appeals returns cases 40% or more of the time, how often has Exam requested this or similar information from the taxpayers but had not received it? (CHECK ONE BOX.) 44.7% 90-100% of the time; 30.3% 60-89% of the time; 2.6% Less than 10% of the time; 13.2% Do not know.

18. When Exam receives a written protest, how often, if at all, do you or your revenue agents provide a written response to the taxpayer's protest to Appeals in these large corporate cases? (CHECK ONE BOX.) 48.8% 90-100% of the time; 19.3% 60-89% of the time; 8.4% 40-59% of the time; 7.8% 10-39% of the time; 15.7% Less than 10% of the time. N=500

19. How often, if at all, do you or your revenue agents discuss the following with Appeals? (CHECK ONE BOX IN EACH ROW.)
a. The facts relevant to the …
b. Legal position cited by the …
c. Records provided to Appeals by the taxpayer. N=503
d. Information in the unagreed report (i.e., RAR) or the written response to the protest
… considered by Appeals to resolve disputed issues
f. Other (Specify)

20. How often, if at all, do you receive the following feedback on Appeals' final resolution of disputed issues from these audits? To what extent, if at all, does this feedback help you or the revenue agent understand how Appeals resolved the disputed issues? (CHECK TWO BOXES IN EACH ROW.) Feedback by…
a. … Appeals to obtain the final resolution (N=485): 16.0%, 21.2%, 26.3%, 27.5%, 21.9%, 13.7%, 14.4%, 20.9%, 15.8%, 7.0%
b. … providing Exam the Appeals Case Memorandum or supporting statement (N=482): 11.4%, 22.2%, 29.1%
c. … officer contacting Exam after they resolved the disputed issues: 13.9%, 31.6%, 40.5%
d. … telling Exam of the final resolution (N=482)
e. Other (Specify): 25.0%, 25.0%, 50.0%

21. In your opinion, does Appeals' resolution of disputed issues for these large corporate audits positively or negatively affect the following? (CHECK ONE BOX IN EACH ROW.) N=491

22. Based on Appeals' resolution of disputed issues, to what extent, if at all, are potential issues dropped on cases in-process or not raised on future audits? (CHECK ONE BOX IN EACH ROW.)
a. Potential issues dropped …
b. Issues not raised on future audits

23. In your opinion, to what extent, if at all, does Appeals' final resolution of disputed issues cause Exam to alter the way it develops similar issues on future audits of either the same taxpayer or different taxpayers? (CHECK ONE BOX IN EACH ROW.)
a. Similar issues on future audits for the same taxpayer
b. Similar issues for different taxpayers

24. In your opinion, to what extent, if at all, was Appeals' consideration of this case fair and impartial to both the government and the taxpayer? (CHECK ONE BOX IN EACH ROW.) N=476

25. Taking everything into consideration, what is your opinion on the quality of Appeals' overall resolution of disputed issues on these corporate income tax returns? (CHECK ONE BOX.)

GENERAL QUESTIONS AND ANY DISTRICT OFFICE CHANGES

26. In your opinion, how positively or negatively do each of the following factors affect the amount of those additional taxes recommended by revenue agents that are ultimately assessed? (CHECK ONE BOX IN EACH ROW.)
b. Revenue agent's workload
… Revenue agent's skills and knowledge
d. Complexity of the tax laws
e. Appeals resolution of disputed issues from a prior audit of this taxpayer
… Appeals resolution of disputed issues from a different taxpayer
g. Other (Specify)

27. In your opinion, to what extent, if at all, do audits of large corporations unreasonably burden those taxpayers selected for audit? (CHECK ONE BOX.)

28. Have you or your district modified any audit procedure for these large corporate cases due to IRS national office's task force teams reviewing corporate workload identification and/or compliance strategies? (CHECK ONE BOX.) N=506

29. Please indicate if your district has implemented or plans changes to its policies or procedures for any of the following. If yes, briefly describe each change and the impact you believe these changes will have on the amount of dollars recommended in Exam. (CHECK ONE BOX IN COLUMN 1 IN EACH ROW. IF YOU ANSWER THE FIRST PART "YES," THEN ANSWER THE REMAINING PARTS OF THE QUESTION.)
a. Method for selecting …
Involving specialists in more of these audits: 74.2% Positive impact; 4.8% No impact; 3.2% Negative impact; 17.7% Don't know
…: 66.0% Positive impact; 5.7% No impact; 5.7% Negative impact; 22.6% Don't know
… with corporate taxpayers on proposed audit adjustments
Increasing management involvement: 80.6% Positive impact; 11.1% No impact; 0.0% Negative impact; 8.3% Don't know
…: 72.1% Positive impact; 14.0% No impact; 2.3% Negative impact; 11.6% Don't know
… to clarify vague and complex tax laws: 56.0% Positive impact; 12.0% No impact; 0.0% Negative impact; 32.0% Don't know
k. Other (Specify). N=9

30. Please use the space below to provide any additional comments about this case or IRS' audit and appeals processes for these large corporate taxpayers. You may attach additional sheets if necessary. Thank you for your assistance. Please return the questionnaire in the pre-addressed envelope.

GAO will safeguard the privacy of your responses to this questionnaire. They will be combined with those of other respondents and will be reported only in summary form. The control number is included only to aid us in our follow-up efforts. 
We will not identify specific taxpayer information in our report. This questionnaire should take about 1 hour to complete. If you have any questions concerning any part of this survey, please call Mr. Kirk Boyer at (913) 384-7570. You have been selected to complete this questionnaire due to your involvement with the corporate returns for the tax years indicated at the bottom of this page. Because of your work on this case, your response to this questionnaire will help us to identify the factors that affected these audits, both positively and negatively. We cannot develop meaningful information without your frank and honest answers to the questions. Thank you for your assistance. After completing the questionnaire, please remove the case information sticker before returning your completed questionnaire. Please provide us your current work telephone number to assist us if we need to clarify a response.

Were you assigned to resolve the disputes on the corporate tax returns shown on page 1? If yes, please continue with the questions. STOP: Do not continue if you were not involved in resolving the disputes on these corporate tax returns. Please return the questionnaire in the enclosed envelope.

1. Please answer the following as it applied to you at the time you were assigned to the work unit shown on page 1. (ENTER "00" IF NONE OR UNDER 6 MONTHS.)
a. Total number of years of IRS experience
b. Total number of years of IRS experience in Appeals
   1. Number of years as an Appeals Officer
   2. Number of years as an Appeals Officer resolving deficiency disputes over $10 million
   3. Number of years as a Team Chief
c. Total number of years as a revenue agent
d. Total number of years in other government or private industry position(s) related to tax/auditing (Specify the position(s) you've held under d above.)

2. What grade level were you at the time you were assigned to this work unit? (ENTER NUMBER.) N=623

3. Did you receive the following formal training prior to being assigned the work unit shown on page 1 of this questionnaire? If yes, indicate to what extent, if at all, the training improved your ability to resolve the taxpayer's disputed issues. (CHECK AT LEAST ONE BOX IN EACH ROW. IF YOU ANSWER "YES" TO THE FIRST PART, THEN ANSWER THE SECOND PART OF THE QUESTION. IF YOU ANSWER "NO" TO THE FIRST PART, GO TO THE NEXT LINE.)
a. Advanced corporate training or equivalent of Phase 5
b. Corporate training on complex technical and/or legal issues
e. IRS training (3 days or more) related to this taxpayer's primary industry (including industry specialization program (ISP) training)
f. Non-IRS training or seminars on any issues related to this taxpayer's primary industry
g. Topical training provided by Appeals, Exam, and/or Counsel relevant to this corporate taxpayer
h. Other (Specify)

4. Was there any training that you had not received before you were assigned to this corporate work unit that you believe you needed to improve your ability to resolve the taxpayer's disputed issues? (CHECK ONE.) If yes, please describe the training needed.

You will need the Appeals Case Memorandum to complete this section.

5. For the corporate entity shown on page 1 of this questionnaire, how many issues were protested by the taxpayer? (ENTER NUMBER.)
6. If yes, please indicate the type(s) of related entities (e.g., S corp, partnership, individual, etc.) and the tax years associated with each type. (CHECK ONE BOX. IF YOU ANSWER "YES", THEN COMPLETE THE REMAINING PARTS. IF YOU ANSWER "NO", GO TO THE NEXT QUESTION.) N=560

7. Please provide the following information on the top three dollar adjustments to income or credit protested by the taxpayer for the entity shown on page 1 of this questionnaire (largest, second largest, and third largest dollar adjustment). (Other (Specify) responses: 137 for the largest, 72 for the second largest, and 47 for the third largest adjustment.)

Questions 8 through 24 relate specifically to the three top dollar adjustments to income or credit you identified in question 7.

8. Please identify the reason code(s) (from those listed below) which best describe your basis for resolving these issues. (ENTER THE LETTER CORRESPONDING TO THE REASON IN THE APPROPRIATE BOX.) In many cases, a single reason will be adequate. However, you may select two codes if necessary to adequately describe the action taken on these issues. If more than one reason code is selected, please list them in the order of impact on the resolution of these issues (highest impact, then second highest impact).
A = Appeals/Counsel fully sustains the issue (30.2%)
B = Continuing issue - followed prior cycle settlement
C = New facts/evidence obtained and evaluated by Appeals/Counsel
D = New facts/evidence obtained and evaluated by Exam
E = Hazards - Facts/evidence are open to judgment (30.8%)
F = Hazards - Conflict between Service position and case law
G = Hazards - Application or interpretation of law (23.1%)
I = Changes in law

9. If you did not fully sustain any of these top three dollar issues, did you document your position in the written summary (i.e., Appeals Case Memorandum)?
(CHECK ONE BOX FOR EACH ISSUE.)

IV. EXAM'S DEVELOPMENT OF THE TOP THREE DOLLAR ISSUES

10. Were all of the top three dollar issues you listed in question 8 fully developed when the case was transferred to Appeals? (CHECK ONE.) 26.9% No → Continue with question 11.

11. For the issue(s) that were not fully developed, did you request that Exam further develop the issue(s) before you attempted to resolve the taxpayer's dispute? (CHECK ONE.) 49.4% No → Please explain below and then skip to question 13. If you did not request Exam to further develop the issue(s), please explain why.

12. For the issue(s) you requested Exam to further develop, did they (1) provide you the requested feedback, (2) provide it to you in a timely manner, and (3) did it help you resolve the disputed issues? (CHECK AT LEAST ONE BOX IN EACH ROW. IF YOU ANSWER "YES" TO THE FIRST PART, THEN ANSWER THE REMAINING PARTS. IF YOU ANSWER "NO" TO THE FIRST PART, THEN GO TO THE NEXT ISSUE.)

13. Did Exam use an IRS specialist(s) or outside consultant(s) to develop any of these three issues? (CHECK ONE BOX.) If yes, continue with question 14; if no, skip to question 16.

14. (IDENTIFY THE SPECIALIST OR OUTSIDE CONSULTANT. FOR EACH ONE IDENTIFIED, CHECK TWO BOXES IN EACH ROW: ONE FOR THE RELATED ISSUE AND ONE FOR THE EXTENT, IF AT ALL, THAT THE USE OF THEIR SERVICES HELPED YOU RESOLVE THE DISPUTED ISSUES.)

15.
Did you consult with any of the specialist(s) and/or outside consultant(s) listed above while you were considering the disputed issues? (CHECK ONE BOX.) If no, please describe why you did not consult with them.

16. In your opinion, did Exam need, but not obtain, an IRS specialist or outside consultant to develop any of these three top dollar issues? If yes, please identify the type of IRS specialist(s) or outside consultant(s) that Exam needed, but did not obtain, for each issue. (CHECK ONE BOX IN EACH ROW. IF "YES", IDENTIFY THE ISSUE AND THE NEEDED SPECIALIST OR OUTSIDE CONSULTANT.)

17. For each of the top three dollar issues identified, did Exam obtain technical advice to develop the issue? If yes, please indicate to what extent, if at all, the technical advice helped you resolve the taxpayer's disputed issues. (CHECK AT LEAST ONE BOX IN EACH ROW. IF YOU ANSWER "YES" TO THE FIRST PART, THEN CHECK ONE BOX IN THE SECOND PART. IF YOU ANSWER "NO" OR "DON'T KNOW" TO THE FIRST PART, THEN GO TO THE NEXT LINE.)
Issue #1: 89.6% No, 5.7% Don't know
Issue #2: 96.4% No, 0.6% Don't know
Issue #3: 97.3% No, 0.0% Don't know

18. For those issue(s) that Exam did not obtain technical advice, do you believe Exam should have obtained technical advice from the national office? (CHECK ONE BOX FOR EACH ISSUE.)
Issue #1: 6.3% Yes, 91.2% No, 2.5% Not applicable (technical advice used)
Issue #2: 2.6% Yes, 94.4% No, 2.9% Not applicable (technical advice used)
Issue #3: 2.1% Yes, 95.2% No, 2.8% Not applicable (technical advice used)

V. ADDITIONAL INFORMATION PROVIDED BY THE TAXPAYER FOR THE TOP THREE DOLLAR ISSUES

19. For any of the top three dollar issues you identified, did the taxpayer provide additional factual information or documentation to Appeals to support its protest on the issue? (CHECK ONE.) If no, skip to question 22.

20. For those issues that the taxpayer provided additional factual information, did you request that Exam review or verify the accuracy of the information? (CHECK ONE.) If no, please explain below and then skip to question 22.

21. For those issues that you requested Exam to verify additional information, did Exam (1) provide you the requested feedback, (2) provide the feedback to you in a timely manner, and (3) was the feedback helpful in resolving the taxpayer's disputed issues? (CHECK AT LEAST ONE BOX IN EACH ROW. IF YOU ANSWER "YES" TO THE FIRST PART, THEN ANSWER THE REMAINING PARTS. IF YOU ANSWER "NO" OR "NOT APPLICABLE" TO THE FIRST PART, THEN GO TO THE NEXT LINE.)

VI. APPEALS PROCESSING OF THE TOP THREE DOLLAR ISSUES

22. Were any of the top three dollar issues disputed by this taxpayer recurring (i.e., the same issue) from previously audited tax returns? (CHECK ONE BOX IN EACH ROW. IF YOU ANSWER "YES" TO THE FIRST PART, THEN COMPLETE THE SECOND PART. IF YOU ANSWER "NO" OR "DON'T KNOW" TO THE FIRST PART, THEN GO TO THE NEXT LINE.) If yes, please explain what you did, if anything, to resolve the recurring issue(s).
Issue #1: 86.3% No, 8.8% Don't know
Issue #2: 81.3% No, 14.5% Don't know
Issue #3: 86.6% No, 12.4% Don't know

23.
Did you discuss any of the top three dollar issues (either formally or informally) with District Counsel? If yes, did this discussion positively or negatively affect your ability to resolve the taxpayer's disputed issues? (CHECK AT LEAST ONE BOX IN EACH ROW. IF YOU ANSWER "YES" TO THE FIRST PART, THEN ANSWER THE SECOND QUESTION. IF YOU ANSWER "NO" OR "NOT NECESSARY" TO THE FIRST PART, THEN GO TO THE NEXT LINE.)
Issue #1: 67.0% No, 6.0% Not necessary. Extent of help: 19.7% very great, 28.3% great, 26.1% moderate, 8.2% some, 8.9% little, 8.9% no extent.
Issue #2: 76.9% No, 6.2% Not necessary. Extent of help: 16.5% very great, 31.5% great, 30.2% moderate, 9.5% some, 3.8% little, 8.5% no extent.
Issue #3: 79.5% No, 8.9% Not necessary. Extent of help: 0.0% very great, 29.7% great, 14.2% moderate, 49.0% some, 0.0% little, 7.1% no extent.

24. Were any of the top three dollar issues referred to Counsel for litigation? (CHECK ONE BOX FOR EACH ISSUE.)

Please consider the entire case (not just the three issues you identified previously) as you answer the remaining questions.

25. Did Exam provide you with a written response to the taxpayer's protest? (CHECK ONE.) If yes, continue with question 26; otherwise skip to question 28. (41.7% Don't know; N=611)

26. To what extent, if at all, did this written response help you resolve the taxpayer's disputed issues? (CHECK ONE.)

27. Did you discuss (e.g., telephone calls, meetings, etc.) the written response with Exam? (CHECK ONE.) If yes, to what extent, if at all, did this discussion help you resolve the taxpayer's disputed issues? (CHECK ONE.) (9.1% Very great extent; 28.3% No)

28. Were you satisfied or dissatisfied with the level of cooperation between you and Exam? (CHECK ONE.)

29. Did the taxpayer use any of the following specialist(s) or representative(s) to assist them with the resolution of the disputed issues? (CHECK ONE BOX IN EACH ROW.)
Other (Specify)

30. Overall, how satisfied or dissatisfied were you with the following concerning the taxpayer? (CHECK ONE BOX IN EACH ROW.) N=604

31. Did the taxpayer file any of the following while this case was in Appeals' jurisdiction? If yes, did you refer these to Exam? (CHECK AT LEAST ONE BOX IN EACH ROW. IF YOU ANSWER "YES" TO THE FIRST PART, THEN ANSWER THE SECOND QUESTION. IF YOU ANSWER "NO" TO THE FIRST PART, GO TO THE NEXT LINE.)
a. File a claim for a refund: 82.1% Yes, 17.9% No
b. Raise an affirmative issue: 31.5% Yes, 68.5% No
c. File a request for a tentative refund (e.g., NOL carryback): 25.8% Yes, 74.2% No
Continue with question 32 if you checked "Yes" to any box above. If the taxpayer did not file a claim for a refund, raise an affirmative issue, or file a request for a tentative refund, skip to question 34.

32. How much did you increase or decrease the taxable income or credits because of the claim(s), affirmative issue(s), or request(s) for tentative refund identified in the previous question? (CHECK THE APPROPRIATE INCREASE OR DECREASE BOX AND ENTER THE APPROPRIATE AMOUNT. IF NONE, DO NOT CHECK A BOX AND ENTER "00".)

Skip to question 37. Continue with question 35.

35. To what extent, if at all, did the associate chief participate in resolving these disputed issues by (1) meeting with the taxpayer, and (2) providing you guidance, advice, or other assistance? (CHECK ONE BOX IN EACH ROW.) Please explain your response.

36. To what extent, if at all, did the associate chief's participating in the following ways improve your ability to resolve these disputed returns? (CHECK ONE BOX IN EACH ROW. IF THE ASSOCIATE CHIEF DID NOT PARTICIPATE, CHECK "NOT APPLICABLE".) Please explain your response.
37. How did you inform Exam, if at all, of the final resolution of the disputed issues? (CHECK ALL THAT APPLY.)
Discussed the final resolution with Exam
Sent the Appeals Case Memorandum to Exam
Other (Specify)
No feedback provided to Exam

38. Considering the overall case, did the following positively or negatively affect your ability to resolve the taxpayer's disputed issues for the case shown on page 1 of this questionnaire? (CHECK ONE BOX IN EACH ROW.)

If dissatisfied, please explain.

40. In your opinion, to what extent, if at all, did Appeals' final resolution of these disputed issues influence the way Appeals will consider (a) recurring issues for this taxpayer, and (b) similar issues for different taxpayers? (CHECK ONE BOX IN EACH ROW.) Please explain your response.

41. In your opinion, to what extent, if at all, was Appeals' dispute resolution processing of this case fair and impartial to both the taxpayer and the government? (CHECK ONE BOX IN EACH ROW.) N=601

42. Taking everything into consideration, what is your opinion on the quality of Exam's overall development of the issues on this case? (CHECK ONE BOX.) If you believe the quality of Exam's issue development is poor or very poor, please explain your response.

43. In your opinion, to what extent, if at all, did the dispute resolution process for large corporations unreasonably burden this taxpayer selected for audit? (CHECK ONE BOX.)

44. Please use the space below to provide any additional comments about this case or IRS' audit and appeals processes for these large corporate taxpayers.
You may attach additional sheets if necessary. Thank you for your assistance. Please remove the yellow case information sticker from page 1 and return the questionnaire in the pre-addressed return envelope.

This appendix describes the various IRS offices we visited and the officials we interviewed. In addition, it discusses the scope of our requests to selected officials for written comments on factors related to large corporate audits. Included in this appendix is table V.1, which shows the offices we visited and officials we interviewed. In addition to the questionnaires, we interviewed numerous IRS National Office, regional, district office, and Appeals officials to obtain their views on the factors that affected the amount of additional taxes recommended by revenue agents that were ultimately assessed. At the National Office we interviewed the Executive Director, Corporate Audits Section; the National Director, Strategic Planning Division; the National Director of Appeals; and selected members of their staffs. At each of IRS' four regional offices we interviewed the Regional Compliance Chief, Regional Counsel, and the Assistant Regional Director for Appeals. In addition, we visited 10 of IRS' 33 district offices and 9 appeals offices. Table V.1 shows the district offices and appeals offices we visited and the titles of the individuals we interviewed. Further, we asked Examination Chiefs in all 33 IRS district offices nationwide and Appeals Chiefs in all 33 appeals offices nationwide to give us their comments on certain factors related to these large corporate audits. We received written responses from 31 of the Examination Chiefs and 30 of the Appeals Chiefs. Their views are incorporated throughout this report where appropriate.

Royce L. Baker, Tax Issue Area Coordinator
Terry Tillotson, Evaluator-in-Charge
Kirk R. Boyer, Senior Evaluator
Kathleen J. Squires, Evaluator
Bradley L. Terry, Evaluator
Thomas N. Bloom, Computer Specialist

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO reviewed the Internal Revenue Service's (IRS) program to audit the tax returns of about 45,000 large corporations that are not in the Coordinated Examination Program (CEP), focusing on factors that contributed to the assessment rate and audit results. GAO noted that: (1) IRS invested 25 percent more hours in audits of large corporations during 1994 than it did in 1988, yet it recommended 23 percent less additional tax per hour and doubled the rate at which it closed audits with no tax changes; (2) during this 7-year period, IRS assessed 27 percent of the additional taxes revenue agents recommended; (3) GAO's analysis of questionnaire responses and interviews of officials from across IRS identified at least four factors that had a negative effect on both the audit results and the assessment rate; (4) the complexity and vagueness of the tax code caused legitimate differences in interpretation between IRS and corporations over the correct tax liability; (5) this complexity and vagueness made it difficult for IRS revenue agents to find the necessary evidence to clearly support any additional recommended taxes without investing a lot of audit hours; (6) such recommended taxes were less likely to survive the IRS Office of Appeals process and be assessed; (7) also, complex and vague tax laws increased the tax burden on large corporations by increasing their uncertainty about what actions they had to take to comply with the tax code; (8) the IRS Examination Division and Office of Appeals used different performance measures; (9) this difference in measures resulted in a lower assessment rate; (10) these revenue agents worked alone on complex audits without much assistance from district counsel or their group managers, who tended to be responsible for managing all types of audits; (11) further, audit staff had a limited basis on which to classify and select returns that had the most audit potential; (12) IRS' approach for these large corporate returns gave a great deal 
of discretion to audit staff; however, the staff had little information on previously audited corporations or industry issues to serve as guideposts; (13) all these aspects can contribute to a reduction in the amount of taxes recommended per audit hour and, with the possible exception of the problems in selecting returns, can affect the assessment rate; (14) Appeals usually did not share with Examination information that could be used to educate revenue agents; (15) even if Appeals did share information, revenue agents did not always have time to review the new information due to time pressures to do other audits; (16) although Appeals usually shared the final settlement on disputed issues, Examination management often did not distribute those results to the revenue agents; and (17) such feedback can help agents decide whether and how to audit similar issues in the future with better support of any recommended taxes.
According to the State Department, no country in the world poses a more immediate narcotics threat to the United States than Mexico. Estimates indicate that up to 70 percent of the more than 300 tons of cocaine that entered the United States in 1994 came through Mexico. In March 1996, the State Department reported that Mexico supplied up to 80 percent of the foreign-grown marijuana consumed in the United States and from 20 to 30 percent of the heroin. Furthermore, during the past 3 years, Mexican trafficking organizations operating on both sides of the border have replaced U.S.-based outlaw motorcycle gangs as the predominant methamphetamine manufacturers and traffickers in the United States. The Drug Enforcement Administration (DEA) estimates that up to 80 percent of the methamphetamine available in the United States is either produced in Mexico and transported to the United States or manufactured in the United States by Mexican traffickers. Mexican drug-trafficking organizations have complete control over the production and distribution of methamphetamine. In recent years, drug-trafficking organizations in Mexico have become more powerful, expanding their methamphetamine operations and also their cocaine-related activities. According to DEA, Mexican drug traffickers have used their vast wealth to corrupt police and judicial officials as well as project their influence into the political sector. According to DEA’s Administrator, some Mexican organizations have the potential of becoming as powerful as their Colombian counterparts. Furthermore, proximity to the United States, endemic corruption, and little or no financial regulation have combined to make Mexico a money-laundering haven for the initial placement of drug profits into the world’s financial systems. Drug traffickers use a variety of air, land, and sea conveyances and routes to move cocaine from Colombia to Mexico and then overland through Mexico into the United States. 
Traditionally, traffickers have relied on twin-engine general aviation aircraft to deliver cocaine shipments that ranged from 800 to 1,000 kilograms. Beginning in 1994, however, some trafficking groups began using larger Boeing 727-type jet aircraft that can fly faster than U.S. and Mexican detection and monitoring aircraft and deliver up to 10 metric tons of cocaine per trip. To date, there have been eight known deliveries using this means of transport. Furthermore, as we recently reported, traffickers in the Caribbean have changed their primary means of delivery and are increasingly using commercial and noncommercial maritime vessels. According to U.S. Embassy officials, about two-thirds of the cocaine currently entering Mexico is transported by maritime means. Mexico has taken some counternarcotics actions. Mexico eradicated substantial amounts of marijuana and opium poppy crops in 1995 with the assistance of up to 11,000 soldiers working on drug eradication programs. According to the Department of State, Mexican personnel effectively eradicated 29,000 acres of marijuana and almost 21,000 acres of opium poppy in 1995. Furthermore, President Zedillo directed the Mexican Air Force to use its F-5 aircraft to assist in air interdiction efforts in 1995. On the other hand, the amount of cocaine seized and the number of drug-related arrests in Mexico have declined from 1993 to 1995 compared to those before U.S. assistance was terminated. For example, the average annual amount of cocaine seized in Mexico between 1990 and 1992 was more than 45 metric tons, including more than 50 tons in 1991. In contrast, from 1993 to 1995, average cocaine seizures declined to about 30 metric tons annually. The number of drug-related arrests declined by nearly two-thirds between 1992 and 1995. Mexico’s efforts to stop the flow of drugs have been limited by numerous problems. 
First, despite the efforts that President Zedillo has undertaken since late 1994, both State and DEA have reported that corruption in Mexico is still widespread and that pervasive corruption is seriously undermining counternarcotics efforts. Second, serious economic and political problems have limited Mexico’s counternarcotics effectiveness. In December 1994, Mexico experienced a major economic crisis—a devaluation of the peso that eventually resulted in a $20-billion U.S. financial assistance package. In addition, high rates of unemployment and inflation have continued to limit Mexico’s economic recovery. Also, Mexico has had to focus funds and resources on the Chiapas region to suppress an insurgency movement. Third, Mexico has lacked some basic legislative tools needed to combat drug-trafficking organizations, including the use of wiretaps, confidential informants, and a witness protection program. New legislation authorizing these activities recently passed the Mexican Congress and is expected to be enacted following ratification by the Mexican states. Also, until May 1996, the laundering of drug profits was not a criminal offense and Mexico’s laws lacked sufficient penalties to effectively control precursor chemicals that are used to manufacture methamphetamine. To counter the growing threat posed by these chemicals, the United States encouraged Mexico to adopt strict chemical control laws. Fourth, the counternarcotics capabilities of the Mexican government to interdict drug-trafficking activities are hampered by inadequately equipped and poorly maintained aircraft. In addition to equipment problems, some Mexican pilots, mechanics, and technicians are not adequately trained. For example, many F-5 pilots receive only a few hours of proficiency training each month, which is considered inadequate to maintain the skills needed for interdiction. 
Moreover, assigning the aircraft to interdiction efforts may not have an immediate impact because of deficiencies in the capabilities and maintenance of the F-5s. Between fiscal years 1975 and 1992, Mexico was the largest recipient of U.S. counternarcotics assistance, receiving about $237 million in assistance. In fiscal year 1992, the United States provided about $45 million in assistance that included excess helicopters, aviation maintenance support, military aviation training, and some equipment. In early 1993, the Mexican government assumed responsibility for the cost of all counternarcotics efforts in Mexico. Since then, U.S. aid has declined sharply and, in 1995, amounted to about $2.6 million, mostly for helicopter spare parts and a limited amount of training to Mexican personnel. According to the State Department, U.S. efforts in Mexico are guided by an interagency strategy developed in 1992 that focused on strengthening the political commitment and institutional capability of the Mexican government, targeting major trafficking organizations, and developing operational initiatives such as drug interdiction. A key component of the strategy, developing Mexican institutional capabilities to interdict drugs, was severely hampered when State Department funding was largely eliminated in January 1993. U.S. policy decisions have also affected drug control efforts in the transit zone and Mexico. In November 1993, the President issued Presidential Decision Directive 14, which changed the focus of the U.S. international drug control strategy from interdicting cocaine as it moved through the transit zone of the Caribbean and Mexico to stopping cocaine in the source countries of Bolivia, Colombia, and Peru. To accomplish this, drug interdiction resources were to be reduced in the transit zone while, at the same time, being increased in the source countries.
As we reported in April 1996, the Department of Defense (DOD) and other agencies involved in drug interdiction activities in the transit zone began to see major reductions in their drug interdiction resources and capabilities in fiscal year 1993. The amount of U.S. funding for the transit zone declined from about $1 billion in fiscal year 1992 to about $569 million in fiscal year 1995—a decline of 43 percent. Reductions in the size of the counternarcotics program have resulted in corresponding decreases in the staff available to monitor how previously provided U.S. helicopters and other assistance are being used, a requirement of section 505 of the Foreign Assistance Act of 1961, as amended. The Mexican government, however, has objected to direct oversight of U.S.-provided assistance and, in some instances, has refused to accept assistance that was contingent upon signing such an agreement. In other instances, Mexico’s position resulted in lengthy negotiations between the two countries to develop agreements that satisfied the requirements of section 505 and were more sensitive to Mexican concerns about national sovereignty. Prior to the “Mexicanization” policy, the State Department employed several aviation advisers who were stationed at the aviation maintenance center in Guadalajara and the pilot training facility at Acapulco. One of the duties of these advisers was to monitor how U.S. assistance was being used. However, with the advent of the Mexicanization policy in 1993, the number of State Department and contract personnel was greatly reduced and the U.S.-funded aviation maintenance contract was not renewed. As a result, the State Department currently has no personnel in the field to review operational records on how the 30 U.S.-provided helicopters are being used. According to U.S. officials, the U.S. Embassy relies heavily on biweekly reports that the Mexican government submits. Unless they request specific operational records, U.S. 
personnel have little knowledge of whether helicopters are being properly used for counternarcotics activities. There are also limitations in U.S. interdiction efforts. The 1993 change in the U.S. drug interdiction strategy reduced the detection and monitoring assets in the transit zone. U.S. Embassy officials stated that this reduction created a void in the radar coverage, and some drug-trafficking aircraft are not being detected as they move through the eastern Pacific. DOD officials told us that radar voids have always existed throughout the transit zone and the eastern Pacific area. These voids are attributable to the vastness of the Pacific Ocean and the limited range of ground- and sea-based radars. As a result, DOD officials believe that existing assets must be used in a "smarter" manner, rather than flooding the area with expensive vessels and ground-based radars, which are not currently available. In Mexico, U.S. assistance and DEA activities have focused primarily on interdicting aircraft as they deliver their illicit drug cargoes. However, as previously mentioned, traffickers are increasingly relying on maritime vessels for shipping drugs. Commercial smuggling primarily involves moving drugs in containerized cargo ships. Noncommercial smuggling methods primarily involve "mother ships" that depart Colombia and rendezvous with either fishing vessels or smaller craft, as well as "go-fast" boats that depart Colombia and go directly to Mexico's Yucatan Peninsula. Efforts to address the maritime movements of drugs into Mexico are minimal when compared with the increasing prevalence of this trafficking mode. State Department officials believe that Mexican maritime interdiction efforts would benefit from training offered by the U.S. Customs Service and the U.S. Coast Guard in port inspections and vessel-boarding practices.
Since our June 1995 testimony, a number of events have occurred that could affect future drug control efforts by the United States and Mexico. Specifically: The U.S. Embassy elevated counternarcotics from the fourth highest priority—its 1995 ranking—in its Mission Program Plan to its co-first priority, which is shared with the promotion of U.S. business and trade. In July 1995, the Embassy also developed a detailed embassywide counternarcotics plan for U.S. efforts in Mexico. The plan involves the activities of all agencies involved in counternarcotics activities at the Embassy, focusing on four established goals, programs that the Embassy believes will meet these goals, and specific milestones and measurable objectives. It also sets forth funding levels and milestones for measuring progress. The Embassy estimated that it will require $5 million in State Department funds to implement this plan during fiscal year 1996. However, only $1.2 million will be available, according to State Department personnel. After taking office in December 1994, President Zedillo declared drug trafficking “Mexico’s number one security threat.” As such, he advocated legislative changes to combat drugs and drug-related crimes. During the most recently completed session, the Mexican Congress enacted legislation that could improve some of Mexico’s counternarcotics capabilities such as making money laundering a criminal offense. However, legislation to provide Mexican law enforcement agencies with some essential tools needed to arrest and prosecute drug traffickers and money launderers requires ratification by the Mexican states. These tools include the use of electronic surveillance and other modern investigative techniques that, according to U.S. officials, are very helpful in attacking sophisticated criminal organizations. 
Furthermore, to date, the Mexican Congress has not addressed several other key issues, such as a requirement that all financial institutions report large cash transactions through currency transaction reports. In March 1996, Presidents Clinton and Zedillo established a high-level contact group to better address the threat narcotics poses to both countries. The Director of the Office of National Drug Control Policy cochaired the first contact group meeting in late March; the group met to review drug control policies, enhance cooperation, develop new strategies, and begin developing a new plan for action. Binational working groups have been formed to plan and coordinate implementation of the contact group’s initiatives. According to officials from the Office of National Drug Control Policy, a joint antinarcotics strategy is expected to be completed in late 1996. In April 1996, the United States and Mexico signed an agreement that will facilitate the transfer of military equipment and, shortly thereafter, the United States announced its intention to transfer a number of helicopters and spare parts to the Mexican government. Twenty UH-1H helicopters are scheduled to be transferred in fiscal year 1996 and up to 53 in fiscal year 1997. State Department personnel stated that the details about how the pilots will be trained, as well as how the helicopters will be operated, used, and maintained, are being worked out. It is too early to tell whether these critical efforts will be implemented in such a way as to substantially enhance counternarcotics efforts in Mexico. This concludes my prepared remarks. I would be happy to respond to any questions. 
GAO discussed counternarcotics activities in Mexico, focusing on: (1) the nature of the drug-trafficking threat from Mexico; (2) Mexican government efforts to counter drug-trafficking activities; and (3) recent initiatives by the United States and Mexico to increase counternarcotics activities. GAO noted that: (1) U.S. and Mexican drug interdiction efforts have had little, if any, impact on the flow of illegal drugs from Mexico to the United States; (2) the amount of cocaine seized and the number of drug-related arrests have significantly declined since 1992; (3) widespread corruption, economic difficulties, and inadequate equipment and personnel training have hampered Mexico's capabilities to detect and interdict drug traffickers; (4) a substantial amount of Mexico's resources have been focused on economic concerns; (5) U.S. counternarcotics assistance has declined by 43 percent since 1992; (6) U.S. policy decisions and reductions in the counternarcotics program have also affected Mexican and U.S. drug control efforts; (7) Mexico lacks some important legislative tools for curbing drug-related activities; (8) drug interdiction funding declined from $1 billion in fiscal year (FY) 1992 to about $570 million in FY 1995; and (9) although staffing cutbacks have limited U.S. ability to monitor counternarcotics assistance to Mexico, the United States and Mexico have created a framework for increased cooperation and are developing a new binational drug control strategy.
In recent years, agencies have increasingly placed orders against existing contracts that have been awarded by another agency to save time and administrative effort. Rather than going through the often lengthy process involved in awarding a new contract for services—soliciting offers, evaluating proposals, and awarding the contract—agencies can place task orders against established indefinite quantity contracts that meet their needs. When placing orders against multiple-award task order contracts, agencies are generally required to ensure that contractors have a fair opportunity to be considered for each order with certain exceptions (such as urgency or logical follow-on). For GSA Schedule contracts, agencies are required to follow ordering procedures such as reviewing prices from at least three contractors, evaluating prices for services requiring a statement of work, and seeking price reductions for large orders. Interagency contracting is often handled by entrepreneurial, fee-for-service organizations, where agency contracting units operate like a business and provide contracting assistance to other agencies for a fee. The Interior contracting office that placed the orders for interrogation and other services—the Southwest Branch of Interior’s National Business Center, located in Fort Huachuca, Arizona—is one such organization. This office’s contracting activity, primarily on behalf of other agencies, has increased substantially over the past 3 years, with reported obligations increasing from $609 million in fiscal year 2002 to $1.02 billion in fiscal year 2004. The fee-for-service procurement process generally involves three parties: the agency requiring a good or service; the agency placing the order or awarding the contract; and contractors providing the goods and services the government needs. 
The requiring agency officials determine the goods or services needed and, if applicable, prepare a statement of work, sometimes with the assistance of the ordering organization. The contracting officer at the ordering office ensures that the contract or order is properly awarded or issued (including any required competition), and administered under applicable regulations and agency requirements. If contract performance will be ongoing, a contracting officer’s representative—generally an official at the requiring agency with relevant technical expertise—is normally designated by the contracting officer to monitor the contractor’s performance and serve as the liaison between the contracting officer and the contractor. At the same time as use of interagency contracting has increased, DOD has also increased its use of contractors in military operations. Particularly since the 1991 Gulf War, contractors have taken over support positions that were traditionally filled by government personnel. For example, a company that CACI later acquired began providing intelligence support to the Army in Germany in 1999. When the Army in Europe deployed intelligence personnel to the Iraq theater in 2003, CACI employees went with them. Following the announcement of the end of major combat in May 2003, the Army, as part of the Coalition Joint Task Force Seven (CJTF-7), was expecting a non-hostile situation and did not plan for an insurgency. It was unprepared for the volume of Iraqi detainees and the need for interrogation and other intelligence and logistics services. An Army investigative report from August 2004 noted that the CJTF-7 headquarters in Iraq lacked adequate personnel and equipment and that the military intelligence units at Abu Ghraib were severely under-resourced. The out-of-scope orders for interrogation and other services issued by Interior have been terminated. 
However, the Army has continued contracting for intelligence functions and logistics services through bridge contracts awarded on a sole source basis to CACI. The original term of the contracts was 4 months, and the Army subsequently exercised options for an additional 2 months, through early 2005. According to an Army official, the contract terms were recently extended further to allow the Army adequate time to competitively award contracts for these services. Recently, in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, Congress took steps to ensure the proper use of interagency contracts by DOD, the largest customer for these types of contract arrangements. The act prohibits DOD from procuring goods and services above the simplified acquisition threshold (generally $100,000) through a contract entered into by an agency outside DOD, unless the procurement is done in accordance with procedures prescribed by DOD for reviewing and approving the use of such contracts. The conference report accompanying the legislation established expectations that DOD’s procedures will ensure that any fees paid by DOD to the contracting agency are reasonable in relation to work actually performed; the supplies or services are consistent with the appropriated funds being used; the goods and services procured are within the scope of the non-DOD contract vehicle; and such orders are in compliance with all applicable DOD-unique statutes, regulations, directives, and other requirements prior to approval. Further, the act required reviews of certain non-DOD contracting offices to determine if they are compliant with Defense procurement requirements. If an office is deemed non-compliant, DOD could be prohibited from ordering, purchasing or otherwise procuring property or services in excess of $100,000 through that contracting office. 
In addition, a recent change to the Federal Acquisition Regulation (FAR), effective July 2004, added language to make it clear that a contracting officer placing an order against a GSA Schedule on another agency’s behalf is responsible for applying that agency’s regulatory and statutory requirements. The process of procuring interrogation and other services for DOD broke down at numerous points. In general, breakdowns in the procurement process, such as not following competition requirements and not properly justifying the decision to use interagency contracting, occurred when the orders were issued. The process also broke down during the administration of the contract, as the contractor’s performance was not adequately monitored. Because the officials at Interior and the Army responsible for the orders did not fully carry out their roles and responsibilities, the contractor was allowed to play a role in the procurement process normally performed by the government. This situation increased the risk that the government would not get the services it needed at reasonable prices and in compliance with competition and other contracting requirements. Orders issued outside the scope of the underlying contract do not satisfy legal requirements under the Competition in Contracting Act for competing the award of government contracts. In such cases, the out-of-scope work should have been awarded using competitive procedures or supported with a justification and approval for other than full and open competition. The Interior IG and GSA have determined that 10 of the 11 task orders issued by Interior to CACI for interrogation and other services in Iraq were outside the scope of the underlying GSA information technology contract. The Army has also determined that interrogation services were outside the scope of the contract. 
The labor category descriptions in the GSA contract were, in most cases, significantly different from the descriptions on DOD’s statements of work and do not accurately represent the work that the contractor performed. Table 1 demonstrates some of the disparities between the labor categories in DOD’s statements of work and the information technology contract. CACI representatives stated that they determined the salary and benefits the company would pay interrogators and screeners and then selected the GSA information technology contract labor categories that would sufficiently cover the company’s employee salary and benefits expenses, overhead, and profit. In other words, CACI selected the labor categories in the contract for cost and pricing purposes, rather than as a reflection of the work to be performed. Army representatives in Iraq told us that the services on the orders for interrogators, screeners, and logistics support were not information technology services. The Interior contracting officer also had concerns about whether the orders were within scope, asking the contractor for a verbal and, later, written explanation as to how the labor categories in the contract were related to the services the company was to provide in Iraq. The contracting officer neglected to follow a requirement for legal review that could have raised questions about whether the orders were within scope. A July 2001 Interior policy requires legal review for all proposed solicitations in excess of $500,000 for non-commercial items and $2 million for commercial items. Interior contracting officials stated that they did not believe this requirement extended to orders placed on GSA contracts. Representatives from Interior’s offices of general counsel and acquisition policy, however, told us that orders placed on GSA contracts are subject to legal review and that the orders for interrogation and other services should have been reviewed. 
Further, the Interior contracting officer did not perform the required evaluation of the contractor’s proposed approach for addressing DOD’s requirements. Normally, when ordering services from GSA Schedules that require a statement of work, the ordering office is responsible for evaluating the contractor’s level of effort and mix of labor proposed to perform the specific tasks being ordered and for making a determination that the price is reasonable. In this situation, however, the Interior contracting officer did not evaluate the mix of labor categories or establish that the level of effort was reasonable. Although documents in Interior’s contract files provided that “technical review does not take exception to the proposal,” no documentation exists to support the statement that an evaluation was performed and, in fact, Interior contracting officials told us that no such review was done. In addition to violating competition rules by placing orders that were not in the scope of the underlying contract, Interior contracting officials also did not comply with requirements contained in section 803 of the National Defense Authorization Act for Fiscal Year 2002 relating to DOD’s purchase of services from GSA Schedule contracts. Specifically, for DOD orders for services over $100,000 placed on GSA contracts, notice must be provided to all GSA Schedule contractors offering the required services or to as many contractors as practicable to ensure that offers will be received from at least three contractors. If three offers are not received, a written determination must be made that no additional contractors could be identified despite reasonable attempts to do so. The requirements that DOD orders be placed on a competitive basis can be waived in writing for certain circumstances such as urgency. 
Section 803 requirements applied to the Iraq orders, even though Interior was the contracting agency, because DOD regulations require application of section 803 provisions to orders placed by non-DOD agencies on behalf of DOD. The Interior contracting office, however, placed the orders directly with CACI without notifying other prospective contractors. Interior did not make any written determination that no additional contractors could be identified or that the competition requirement should be waived in this case. In contracting through Interior, the Army did not follow requirements to justify use of interagency contracts. According to procurement regulations, an Economy Act determination and findings should have been approved by an Army contracting officer or another designated official to justify the use of Interior to acquire the services for the Army. The Economy Act authorizes agencies to enter into mutual agreements to obtain supplies or services by interagency acquisition. The FAR mandates that the requiring activity document that an Economy Act order is in the agency’s best interest and that it cannot obtain the goods and services as conveniently or economically by contracting directly with a private source. However, Army personnel did not prepare the determination and findings, as required. Interior placed the orders with CACI by using a blanket purchase agreement (BPA) established under the GSA Information Technology Schedule contract in 1998. BPAs, a simplified method of filling anticipated repetitive needs for supplies and services, allow agencies to establish “charge accounts” with qualified vendors. The BPA in this case was improperly established and improperly used. Interior’s contracting office did not comply with required BPA procedures meant to ensure the government receives the best value for its dollars and that competition is encouraged. 
Under procedures referred to in the Schedule contract, ordering offices that establish a single BPA are required to select a contractor that represents the best value and results in lowest overall cost, and to inform other contractors of the basis for the selection. We found no evidence, either in the BPA files or in our discussions with Interior contracting staff, that these requirements were followed, even though documents in the contract files state that the BPA is “best value.” In essence, the BPA was used to direct business to the company on a sole source basis. Contracting officials also failed to seek discounts from CACI’s established GSA contract prices, as required. Applicable procedures in the contract stipulate, for example, that discounts are to be sought when orders exceed $500,000. We found that no discounts were sought, even though the value of the orders for work in Iraq ranged from $953,000 to $21.8 million. In addition, the procedures in the GSA contract require that BPAs are reviewed annually to ensure the government continues to receive best value. These annual reviews were never conducted. Further, the BPA was improper because it did not contain defined requirements, as stipulated in the GSA contract. Rather, the BPA states that “the categories of service provided by this BPA may include but are not limited to” various classes of information technology services. Finally, in 2001, Interior added several items and services to the BPA. This action improperly expanded the scope of services contained in the underlying GSA contract. According to GSA guidance, such scope expansions are a potential violation of the Competition in Contracting Act. When we asked Interior’s contracting officials—including the contracting officer who signed the BPA—about these additions, they were unable to explain how or why the additions had been made. 
One of the contracting officer’s key responsibilities is ensuring that the government monitors the contractor’s performance. The contracting officer may assign this responsibility to a contracting officer’s representative (COR). At Interior, the contracting officer is required to verify that the COR has the appropriate training and to issue a designation letter to the COR outlining the duties to be performed. These duties can include verifying that the contractor performs the technical requirements of the contract in accordance with the contract terms, conditions, and specifications; monitoring the contractor’s performance, notifying the contractor of deficiencies observed during surveillance, and directing appropriate action to effect correction; and reporting to the contracting officer in a monthly report the performance of services rendered under the contract. We found that Interior’s contracting officials never verified that the Army personnel serving as CORs had the appropriate training and, with one exception, sent the COR designation letter either months after the fact or not at all. Interior officials, including the contracting officer who placed the orders for DOD, had no explanation for why contractor surveillance policies were not followed. Moreover, the contracting officer had little to no communication with the CORs in Iraq and did not follow up to obtain monthly reports from them on the contractor’s performance. Proper surveillance of the contractor’s performance under the orders was especially critical because the work was done on a time and materials basis, where services are billed on the basis of direct labor hours at specified fixed hourly rates (which includes wages, overhead, general and administrative expenses, and profit). According to the FAR, time and materials contracts require appropriate government oversight because there is no incentive for the contractor to control costs or be efficient. 
This requirement was recently reiterated in a September 2004 memo from DOD’s Director, Defense Procurement and Acquisition Policy, which states that, because labor hour and time and materials contracts usually require significant surveillance to ensure the government receives good value, CORs should be appointed to verify the appropriateness of labor categories and the reasonableness of the number of hours worked. In Iraq, the Army CORs responsible for the orders for interrogation and other services performed limited surveillance of the contractor’s performance. Contractor employees were stationed in various locations around Iraq, with no COR or assigned representative on site to monitor their work. One contractor interrogator who had been located at the Abu Ghraib prison told us that, although he interacted with military personnel at the prison, he had no interaction with the COR. Further, although the COR in Baghdad stated that he relied on other military personnel on site to report back to him, a recent Army investigative report showed that the military personnel on site were not given guidance on how to oversee the contractors. In fact, one of the military interrogators at Abu Ghraib prison indicated that the primary point of contact for the contractors was the contractor’s on-site manager, with no mention of the COR. The Army investigative report pointed to this lack of contractor surveillance at the Abu Ghraib prison as a contributing factor to the environment in which the prisoner abuse occurred. The report noted that it is very difficult, if not impossible, to effectively administer a contract when the COR is not on site and that the Army needs to improve its oversight of contractors’ performance to ensure that the Army’s interests are protected. In procuring the interrogation and other services in Iraq, Interior and Army officials abdicated their contracting responsibilities to a large degree. 
In this void, the contractor played a significant role in developing, issuing, and administering the orders, including identifying the contractor’s BPA with Interior as the contract vehicle to provide the services; drafting statements of work; suggesting that Army officials use the company’s rough order of magnitude price as the government cost estimate; acting as a conduit for information from the Army in Iraq to the Interior contracting office; providing the Interior contracting office with a draft justification and approval to award additional work to the company on a sole source basis; sending invoices directly to Interior for payment; and requesting that construction work be performed under the BPA, which would also have been outside the scope of the GSA Schedule contract, although subsequent discussions between CACI and Interior contracting officials resulted in the work being awarded separately on a sole source basis due to urgency. By acting in this manner, the contractor effectively replaced government decision-makers in several aspects of the procurement process. For example, a contractor employee proposed the initial requirements package for human intelligence, which included interrogators, and provided information to the Army personnel regarding skill sets needed for positions. Contractor employees also identified the company’s BPA with Interior as the contract vehicle to provide the services. Contractor officials acknowledge they helped to draft statements of work, with contractor employees in Iraq sending the statements of work to company headquarters in the United States for suggestions. In fact, one of the statements of work we found in official contract files was on the contractor’s letterhead. We also found that contractor employees wrote a draft justification and approval for Interior to award additional work noncompetitively to the company. 
Such a level of participation by the contractor creates a conflict of interest and undermines the integrity of the competitive contracting process. Contractor officials explained that they marketed their services directly to Army intelligence and logistics officials in Iraq because of relationships they had developed over time. According to contractor officials, Army officials told them to work directly with the Interior contracting office because the DOD contingency contracting office in Iraq was focused on obtaining other necessary services. They also told us that, because military communication channels were not adequate, they communicated directly with the Interior contracting office. Interior contracting officials went along with this arrangement, citing problems in reaching Army officials in Iraq. The contract files contain emails between the contractor and Interior contracting officials on matters such as funding requests, statements of work, and COR assignments. Further, a COR responsible for the logistics orders told us that contractor officials informed him that Interior had merged two task orders; he was unaware that this had occurred. According to contractor officials, because Army and Interior officials allowed contractor personnel to act as the go-between, the contractor sent its invoices directly to Interior for payment after the COR signed them, as opposed to the normal practice of having government personnel perform this task. Although use of streamlined contracting vehicles can be beneficial, they must be effectively managed to ensure compliance with the FAR and to protect the government’s interests. When a requiring agency’s contracting needs are being handled by an outside agency, effective management controls become even more critical due to the more complex environment involved. 
Management controls, synonymous with internal controls, are an integral component of an organization’s management that provide reasonable assurance that operations are effective and efficient and that employees comply with applicable laws and regulations. Two controls include management oversight and training. When these controls are not in place, particularly in a fee-for-service environment, more emphasis can be placed on customer satisfaction and revenue generation than on compliance with sound contracting policy and required procedures. We found an absence of these management controls for the 11 orders that were issued and administered for interrogation and other services. Significant problems in the way Interior’s contracting office carried out its responsibilities in issuing these orders were not detected or addressed by management. Further, managers at this office told us that they intentionally created an office culture of providing inexperienced staff with the opportunity to learn contracting by taking on significant responsibilities. More experienced contracting officers were responsible for overseeing and reviewing less experienced and trained purchasing agents and contract specialists. However, some staff told us that the contracting officers’ reviews were not always thorough and appeared to be a “rubber stamp.” Further, some staff indicated discomfort at the level of responsibility given to less experienced personnel and believed oversight of the activities of these employees was inadequate. Moreover, Interior’s headquarters did not exercise thorough oversight of the contracting activity that placed the orders. An April 2003 Interior Acquisition Management Review concluded that the contracting office was highly effective, despite the fact that the review identified a number of problems where contracting personnel did not comply with sound contracting practices. 
Nonetheless, an Interior headquarters official told us that the contracting office did not require extensive oversight, based in part on the determination that the office was highly effective. The review cited the following: A conscious decision was made not to comply with Interior’s requirements for legal review because the office believed the reviews took too long. A general weakness in cost support was noted. For instance, “best value” analysis was cited in sole source awards. Also, the contracting office accepted contractors’ proposed prices without analyzing the cost and pricing data in depth to ensure that the prices were fair and reasonable. Further, the contractor’s proposed cost and the government’s cost estimate were identical without any explanation. Labor rates included in contracts and orders were not adequately justified. Competition requirements were not followed when placing orders using BPAs. The review’s conclusion that the office was highly effective was based in part on the office’s peer review process, where contracting actions were reviewed by a second person as a management control. However, the review found no consistent methodology or format for the peer reviews and little or no information on results. Rather, the process for conducting and reporting the results of the reviews varied from individual to individual. Based on our interviews with Interior employees, we found that the peer reviews were often conducted by personnel with little contracting experience and training. Adequate management oversight is particularly critical to ensuring that interagency fee-for-service contracting organizations, such as the Interior contracting office, comply with procurement regulations. The fee-for-service arrangement creates an incentive to increase sales volume in order to support other programs of the agency that awards and administers an interagency contract. 
This may lead to an inordinate focus on meeting customer demands at the expense of complying with required ordering procedures. The managers at Interior's contracting office promote a business-like entrepreneurial philosophy modeled after the private sector and empower employees to market services, interact with contractors, and make decisions in support of acquisitions. We found examples where the Interior contracting office marketed its BPA with CACI to federal agencies as a way to obtain services quickly without competition. Further, the performance measures for individual employees at Interior's contracting office, which measure quality, teamwork, and customer service, specifically state that customer satisfaction is a high priority in achieving good customer service. In fact, Interior's Acquisition Management Review of the contracting office focused heavily on customer satisfaction as a performance metric. Several of the office's customers were interviewed, and their compliments were summarized in detail as a key section of the review. The Army also lacked management oversight of the contracting activity for interrogation and other services. This lack of oversight is evidenced by questions raised by the Army's Chief of Contracts in Iraq in February 2004, about 6 months after the initial orders were placed. The Chief of Contracts asked the Interior contracting office whether the orders were placed against a GSA contract and what the contract number was; what labor rates were included in the contract; whether there was a performance description for contractor personnel; whether all contractor employees in Iraq were working in accordance with the contract; who had been keeping track of the labor hours the contractor billed to the government; whether Interior had received monthly status reports on the contractor's performance; and whether an Economy Act determination and findings had been prepared. 
Further, DOD is required to have a management structure in place for the procurement of services that provides for a designated official in each military department and defense agency to exercise responsibility for the management of the procurement of services by that department or agency. This management structure is to include a means by which employees of the departments and defense agencies are accountable to the designated officials for carrying out certain requirements. These requirements include ensuring that services are procured by means of contracts or task orders that are in the best interest of DOD and are entered into or issued and managed in compliance with applicable statutes, regulations, directives, and other requirements, regardless of whether the services are procured by DOD directly or through a non-DOD contract or task order. These requirements also include approving, in advance, any procurement of services above certain thresholds that is to be made through the use of a contract entered into, or a task order issued, by a government official outside DOD. Notwithstanding the requirement for this management structure, it is clear that DOD’s implementation did not ensure that these requirements were met in procuring the interrogation and other services through Interior. Interior’s contracting office personnel and Army personnel in Iraq that were responsible for the orders for interrogation and other services lacked adequate training on their contracting responsibilities. While a warranted contracting officer at Interior signed the orders, certain requirements were not understood or followed, such as the need for legal review and competition. Further, an inexperienced purchasing agent administered the BPA on a day-to-day basis, including preparing various contracting documents. 
The employee had taken only one basic contracting course, even though the contracting office's own requirements call for purchasing agents to take three contracting courses. Moreover, one staff member who had not taken the required training for a purchasing agent position was promoted to a contract specialist position. Several contracting employees we spoke with were concerned about the frequency and consistency of the training they had received. We found that employees took training on their own initiative and that the training was not monitored or enforced by managers. Army personnel responsible for overseeing CACI employees' performance in Iraq were also not adequately trained to exercise their responsibilities properly. An Army investigative report concluded that the lack of training for the CORs assigned to monitor contractor performance at Abu Ghraib prison, as well as an inadequate number of assigned CORs, put the Army at risk of being unable to control poor performance or become aware of possible misconduct by contractor personnel. We found that the personnel acting as CORs did not, for the most part, have the requisite training and were unaware of the scope of their duties and responsibilities. For example, they did not know that they were required to monitor and verify the hours worked by the contractor and instead simply signed off on the invoices provided by the contractor. During the course of our work, we found confusion about whether the CORs were required to meet Interior's or DOD's training requirements. DOD and Interior officials told us that no policy or guidance exists on this matter when interagency contracting is used. One COR for the logistics orders in Iraq, who had prior contracting experience, observed problems with two orders as soon as he was designated COR in February 2004. 
The concerns included: (1) a “clear mismatch” between the underlying contract and the statement of work, (2) the fact that no invoices had been submitted for work that began several months earlier, (3) Army personnel not overseeing and verifying time cards, (4) significant delays and issues in communicating with Interior’s contracting office, and (5) significant problems with the administration of the orders by both the government and the contractor. The discovery of the problems with the Iraq orders encouraged Interior and DOD to take corrective actions aimed at improving management oversight and training, particularly as they pertain to interagency contracting. However, due to the recent nature of these efforts, it is too soon to tell how effective they will be. In June 2004, Interior issued a policy memorandum prohibiting its contracting officers from acquiring interrogation or human intelligence services “regardless of the dollar value” for internal or external customers. Further, to focus attention on proper use of GSA contracts, Interior plans to evaluate its use of GSA contracts in its fiscal year 2006 agencywide targeted performance review, an annual self-reported review by each of its contracting activities focusing on issues that are deemed important by top executives. Also in June 2004, Interior’s National Business Center, which has direct oversight responsibility for the contracting office that placed the orders for DOD in Iraq, clarified for its contracting activities the requirements for competition when ordering on behalf of DOD. At the same time, it updated its policy outlining COR requirements, emphasizing the need for written designation letters; issued new guidance for using BPAs and GSA contracts; and clarified its legal review policy. Moreover, the National Business Center intends to hire an additional manager whose responsibilities will include overseeing the contracting activities under the Center’s purview. 
Officials at Interior agree that management controls are critical in fee-for-service contracting offices with a focus on customer service, and, in comments on this report, Interior stated that the National Business Center has established a new performance rating system that provides incentives to contracting officials to exercise due diligence. Officials at the Interior contracting office that ordered the services for the Army told us that they are no longer placing orders against the CACI BPA. Once all orders expire, the BPA will be terminated. In addition, in December 2004, the contracting office released a revised independent quality review process to include specific checks for GSA contract actions, including whether the maximum order threshold is exceeded, section 803 competition compliance, and scope determination with a labor category verification. Officials also plan to review the amount of activity on all existing BPAs to determine if these BPAs are still needed and to assess whether prices are competitive. Interior, in commenting on this report, stated that the contracting office has also established a policy to ensure that BPAs are reviewed annually. In addition, Interior has taken steps to improve training for its contracting officers. For fiscal year 2005, Interior has required each of its contracting activities to certify that all warranted contracting officers have taken two training courses on GSA contract use. Further, the contracting office that placed the orders for the Army has re-instituted regular, formal training seminars for newer contracting staff. It has also implemented a new mentoring program to augment training standards and assist new employees in learning on the job. However, a mechanism is not yet in place to track or monitor the training. 
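The revised independent quality review described above is, in essence, a checklist applied to each GSA contract action. A minimal sketch of such a check is shown below; the data fields, the dollar thresholds, and the rules themselves are simplifications we introduce for illustration, not Interior's actual review criteria.

```python
# Illustrative order-review checklist. Fields, thresholds, and rules are
# hypothetical simplifications, not Interior's actual quality review.

def review_gsa_order(order, max_order_threshold):
    """Return a list of findings for one order; an empty list means no flags."""
    findings = []
    if order["value"] > max_order_threshold:
        findings.append("maximum order threshold exceeded")
    # Simplified stand-in for the section 803 rule, which requires
    # competition (or a waiver) for larger DOD service orders.
    if order["value"] > 100_000 and not order["competed"]:
        findings.append("section 803 competition not documented")
    # Scope check: every ordered labor category must exist in the contract.
    missing = [c for c in order["labor_categories"]
               if c not in order["contract_labor_categories"]]
    if missing:
        findings.append("labor categories outside contract scope: "
                        + ", ".join(missing))
    return findings

order = {
    "value": 500_000,
    "competed": False,
    "labor_categories": ["Subject Matter Expert I"],
    "contract_labor_categories": ["Program/Project Manager",
                                  "Subject Matter Expert I"],
}
# Exceeds $100,000 without competition, so one finding is expected.
print(review_gsa_order(order, max_order_threshold=1_000_000))
```

The point of such a checklist is that each check produces a recorded finding, giving reviewers and managers a consistent, auditable methodology rather than the ad hoc peer reviews the Acquisition Management Review criticized.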
DOD, for its part, issued a policy in October 2004, signed by high-level officials from the Office of the Comptroller and the Office of Acquisition, Technology, and Logistics, requiring that military departments and defense agencies establish procedures for reviewing and approving the use of other agencies' contracts. The procedures are to ensure that the use of another agency's contract is in the best interest of DOD; tasks are within the scope of the contract being used; funding is being used in accordance with appropriation limitations; unique terms and conditions are provided to the ordering activity; and data are collected on the use of outside ordering activities. The procedures took effect in January 2005. Most military services have outlined procedures under which the requiring activity is responsible for coordinating with the contracting office, and in some cases the legal and financial offices, when planning to use interagency contracting and for documenting compliance with the policy's guidelines. While the policy does not include a mechanism for monitoring the departments' implementation plans, ensuring ongoing compliance with the policy, or sharing information across DOD, agency officials stated that these functions are being performed informally. While the actions Interior and DOD have recently put in place or plan to initiate are positive steps, additional actions are needed to further refine these efforts. Accordingly, we recommend that the Secretary of the Interior take the following four actions: (1) ensure that management reviews of Interior contracting offices emphasize and assess whether contracting officials are trained adequately and BPAs are used appropriately; (2) ensure that performance measures for contracting officials provide incentives to exercise due diligence and comply with applicable contracting rules and regulations; (3) ensure that CORs are properly designated when contracts are awarded or orders are issued for other agencies and that they have met appropriate training requirements; and (4) direct the National Business Center at Fort Huachuca to establish a consistent methodology for conducting peer reviews of contracting actions and ensure that experienced and trained contracting officials perform the reviews; to ensure that reviews of BPAs are done annually, as required by the FAR, to determine whether they still represent best value; and to ensure that the contracting staff are properly trained and that effective mechanisms are in place to track the training. We also recommend that the Secretary of Defense take the following action: develop a mechanism to track implementation of the new policy that establishes procedures for reviewing and approving the use of non-DOD contracts and to ensure that the military services and defense agencies have the opportunity to share information on how they are implementing it. We provided a draft of this report to DOD, Interior, and CACI for review and comment. Their written comments are included as appendices II, III, and IV, respectively. DOD agreed with our recommendation to develop a mechanism to track implementation of the new policy that establishes procedures for reviewing and approving the use of non-DOD contracts. DOD plans to post implementation policies on its web site and is considering establishing a community of practice on this issue. Our draft report contained a second recommendation to ensure that CORs are properly assigned, as appropriate, for all orders that DOD places on interagency contracts and that they are provided requisite training. Because DOD recently concurred with a similar recommendation in another GAO report, we have deleted this recommendation. Interior agreed with all of our recommendations and outlined actions and plans to address the issues that we identified in our report. 
In general, Interior is taking actions to improve oversight and training for its contracting staff, in particular for the National Business Center offices. In some cases, officials initiated corrective actions during the course of our review, as we brought issues to their attention. While acknowledging that our report identified a number of areas where the government can improve its contracting processes, CACI took issue with several aspects of the report: CACI suggested that our report does not adequately take into account the impact of the wartime environment in Iraq. We believe that our report adequately references the wartime situation. As CACI pointed out, the wartime circumstances may have justified the government’s use of non-competitive contracting procedures. However, such authorized flexibilities were not employed by the agencies involved. Instead, as described in the report, Interior improperly used CACI’s GSA contract in servicing its DOD customer. CACI offered a number of detailed comments to support its position that the orders fell within the scope of the GSA contract. We did not find these arguments convincing. Every government agency involved determined that most of the work performed on the orders was out of scope. Contrary to CACI’s assertion, our finding was not based merely on a comparison of the labor categories in CACI’s GSA contract and those in the orders’ statements of work, but on the material differences between the services authorized by the GSA contract and the services actually ordered by Interior and provided by CACI. While some of the services involved information technology, that, by itself, does not mean that those services (such as interrogation of detainees) can be ordered from CACI’s GSA contract. The GSA contract is for the performance of certain commercial-type information technology services, not for any service that happens to involve the use of information technology. 
As noted in our report, the Army officials we spoke with stated that the services were not information technology services. In addition, while CACI’s earlier orders from GSA’s Federal Technology Service may help explain how the services in Iraq came to be ordered by Interior, it is not determinative of the proper use of that contract in this situation. On the issue of the contractor playing an unusually large role in actions normally performed by government officials, CACI defends its actions as being appropriate in the wartime environment. The intent of that section of our report is not to suggest that the contractor acted with malfeasance; rather, we highlight the fact that, because the government officials did not exercise due diligence in carrying out their duties, the contractor was either allowed or encouraged to step in to fill the void. Further, CACI refers to our description of out-of-scope construction work and the drafting of the sole source justification as “incomplete and out of context.” Based on our audit work with Interior and CACI officials, we found that CACI intended to include the construction work on the order for intelligence services under the BPA. However, because subsequent decisions by CACI contracting personnel and Interior’s contracting office led to a separate, sole source award, we revised the wording in our report to reflect this outcome. The contractor did—as CACI’s response confirms—draft a sole source justification for additional construction work. As stated above, we included this in our report to demonstrate how the contractor was encouraged to perform duties normally fulfilled by government personnel. CACI questioned whether our findings on the lack of adequate contractor surveillance were well-founded. Our findings are not based solely on our discussion with the contractor interrogator who had been located at Abu Ghraib prison; rather, they are based on our file reviews and a number of discussions with DOD officials. 
We are sending copies of this report to the Director, Office of Management and Budget, the Secretaries of Defense and the Interior, and CACI. We will make copies available to others on request. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-4841 or Michele Mackin at (202) 512-4309. Other major contributors to this report were Alexandra Dew, James Kim, Adam Vodraska, and Tatiana Winger. We conducted our work at the Department of the Interior, including its National Business Center headquarters and office at Fort Huachuca, Arizona; and the Department of Defense (DOD), including the Defense Procurement and Acquisition Policy office and the Department of the Army. We also met with representatives of CACI International, Inc. (CACI) and the General Services Administration (GSA). To determine what breakdowns occurred in the process of procuring interrogation and other services and the contributing factors to the breakdowns, we reviewed contract files on the 11 orders issued to CACI to understand the facts about how the orders were issued. We also reviewed internal controls and guidance to assess what safeguards were in place to ensure compliance with regulations, including training requirements and performance evaluation factors at the National Business Center's office in Fort Huachuca. We reviewed the two orders for interrogators, placed in August and December 2003, to corroborate GSA's and Interior's determination that the orders were out of the scope of the GSA contract. We also identified and analyzed pertinent policies and regulatory requirements governing the contracting process to assess whether Interior, Army, and contractor officials operated in compliance with those requirements. We interviewed Army representatives who were responsible for overseeing the contractor's performance in Iraq. 
We spoke with officials at Interior's Office of Acquisition and Property Management and National Business Center, and with employees of the Fort Huachuca office who were involved with the orders for interrogation and other services. Additionally, we interviewed several CACI employees, including a contractor interrogator, and attorneys representing CACI. We used GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) as criteria to demonstrate the importance of management controls such as oversight and training. To evaluate the extent to which actions taken by Interior and DOD address contributing factors to breakdowns in the procurement process, we identified and reviewed steps taken by these agencies, such as newly released policies and guidance. In particular, we reviewed recently issued policies from Interior's headquarters, National Business Center, and the contracting office at Fort Huachuca, as well as DOD. We conducted our review from July 2004 to January 2005 in accordance with generally accepted government auditing standards. 
Appendix IV: Comments from CACI International Inc. 
[Table: the 11 orders placed with CACI, showing maximum order values (e.g., $19,915,407) and selected labor categories from the CACI contract (e.g., Program/Project Manager, Subject Matter Expert I). The services ordered: provide interrogation support; provide screening cell management and support; provide support to man, organize, and execute as members of the Open Source Intelligence Team; provide special security and security support to the intelligence function; provide and maintain an operational property book team; provide technical and functional knowledge of the total property book system (two orders); provide technical and training support services for a military information technology system; assist in performance of human intelligence and counterintelligence missions; assist in intelligence support staff and analytical functions; and establish and staff a Command Automation Logistics Assistance/Instructional Team. Maximum order values include order modifications made subsequent to the order date.]
In recent years, federal agencies have increasingly turned to interagency contracts--where one agency, for example, places an order under an existing contract for another agency--as a way to streamline the procurement process. Interagency contracting can offer benefits of improved efficiency, but this approach needs to be effectively managed. To learn more about some of the challenges of interagency contracting, we reviewed the process that the Department of Defense (DOD) used to acquire interrogation and certain other services through the Department of the Interior to support military operations in Iraq. On behalf of DOD, Interior issued 11 task orders, valued at over $66 million, on an existing contract. This report identifies breakdowns in the procurement process, contributing factors that led to the breakdowns, and the extent to which recent actions by Interior and DOD address these contributing factors. DOD, faced with an urgent need for interrogation and other services in support of military operations in Iraq, turned to the Department of the Interior for contracting assistance. Numerous breakdowns occurred in the issuance and administration of the orders for these services. The breakdowns included issuing orders that were beyond the scope of the underlying contract, in violation of competition rules; not complying with additional DOD competition requirements when issuing task orders for services on existing contracts; not properly justifying the decision to use interagency contracting; not complying with ordering procedures meant to ensure best value for the government; and inadequate monitoring of contractor performance. Because the officials at Interior and the Army responsible for the orders did not fully carry out their roles and responsibilities, the contractor was allowed to play a role in the procurement process normally performed by the government. 
A lack of effective management controls--in particular insufficient management oversight and a lack of adequate training--led to the breakdowns. When these management controls are not in place, particularly in an interagency fee-for-service contracting environment, more emphasis can be placed on customer satisfaction and revenue generation than on compliance with sound contracting policy and required procedures. Significant problems in the way Interior's contracting office carried out its responsibilities in issuing the orders for interrogation and other services on behalf of DOD were not detected or addressed by management. Further, the Army officials responsible for overseeing the contractor, for the most part, lacked knowledge of contracting issues and were not aware of their basic duties and responsibilities. In response to the above concerns, Interior and DOD have taken actions to strengthen management controls. For example, Interior has re-issued or clarified several policies for its contracting personnel and has required them to take training on the proper use of General Services Administration contracts. DOD has issued a new policy requiring that military departments and defense agencies establish procedures for reviewing and approving the use of other agencies' contracts. These actions are a positive step toward addressing some of the contributing causes to the breakdowns GAO found, but it is too soon to tell how effective they will be.
Our federal tax system relies on voluntary compliance with tax laws. It presumes that taxpayers understand the laws and are willing and able to follow them. If not, IRS must determine the reason and then act to restore compliance and maintain the flow of tax revenue. IRS traditionally has responded to noncompliance by using enforcement efforts such as auditing tax returns and computer matching data from third parties (e.g., banks and employers). Over time, IRS concluded that enforcement was essential to pursue intentional noncompliance but not to correct unintentional noncompliance. Because of this enforcement limitation and concerns about the level of noncompliance, IRS formulated a different compliance philosophy. Known as Compliance 2000, the philosophy envisioned using nonenforcement efforts to correct unintentional noncompliance and reserving enforcement efforts for intentional noncompliance. IRS first espoused this philosophy in 1988 and by the early 1990s had initiated many research projects across IRS’ 63 district offices to identify noncompliant market segments, root causes for the noncompliance, and innovative ways to improve compliance. Even so, noncompliance continued to result in major losses in tax revenue. IRS’ most recent estimate put the gross income tax gap—the difference between income taxes owed and voluntarily paid—at $127 billion for 1992 alone. IRS estimated total tax compliance to be about 87 percent—83 percent in taxes paid voluntarily and 4 percent in taxes paid after IRS enforcement. IRS data have shown such total compliance to be stagnant since the early 1970s. Concerns about these trends prompted IRS to create the Compliance Research and Planning approach in 1993. This new approach attempts to merge the Compliance 2000 philosophy with a rigorous compliance research system. 
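The compliance figures cited above fit together arithmetically: if 83 percent of taxes owed are paid voluntarily, the $127 billion gross gap implies a total liability from which the net gap after enforcement can be derived. The sketch below is a back-of-the-envelope illustration using the report's rounded 1992 figures; the implied total-liability number is our back-calculation, not an IRS estimate.

```python
# Back-of-the-envelope check of the 1992 compliance figures cited above.
# IRS reported a gross tax gap of $127 billion, voluntary compliance of
# about 83 percent, and another 4 percent recovered through enforcement.

gross_tax_gap = 127e9    # income taxes owed but not voluntarily paid
voluntary_rate = 0.83    # share of owed taxes paid voluntarily
enforced_rate = 0.04     # share recovered through IRS enforcement

# The gross gap equals the unpaid share (1 - voluntary_rate) of total
# taxes owed, so total owed can be back-calculated (an illustration only):
total_owed = gross_tax_gap / (1 - voluntary_rate)

# Enforcement recovers a further 4 percent of total owed:
net_tax_gap = gross_tax_gap - enforced_rate * total_owed
total_compliance = voluntary_rate + enforced_rate

print(f"implied total income taxes owed: ${total_owed / 1e9:.0f} billion")
print(f"net tax gap after enforcement: ${net_tax_gap / 1e9:.0f} billion")
print(f"total compliance: {total_compliance:.0%}")
```

The same arithmetic shows why raising total compliance from 87 to 90 percent by 2001, IRS' stated goal, was a substantial target: each percentage point of compliance corresponds to several billion dollars of the 1992-level gap.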
By combining IRS’ National Office knowledge about research with its district knowledge about compliance and enforcement, IRS hoped to identify nonenforcement and enforcement efforts to help improve total compliance to 90 percent by 2001. This approach has required the establishment of new research methods, organizations, and tools. The research methods include a compliance research cycle that starts with identifying a noncompliant market segment and ends with using research results in ongoing compliance programs. The organizations include the National Office of Research and Analysis (NORA) in the Research Division and 31 District Offices of Research and Analysis (DORA). NORA has responsibility for developing and implementing the new approach. DORAs are responsible for researching national and district compliance levels and finding cost-effective wholesale solutions to noncompliance, with the support of three IRS functions—Examination, Collection, and Taxpayer Service. Appendix II discusses the research cycle and organizations. As planned, the major research tool will be the Compliance Research Information System (CRIS). CRIS is to be an integrated network of databases containing a sample of IRS data over multiple years for use in compliance research. Appendix III discusses CRIS. Our objectives were to (1) review the many lessons that IRS learned from past compliance efforts, including Compliance 2000, to identify the factors most critical to the success of the new compliance research approach and (2) analyze the current status of the new approach and its ability to incorporate these factors as well as help IRS achieve the goal of 90-percent total compliance by 2001. To accomplish each objective, we visited IRS’ National Office and all 31 DORAs, interviewing responsible officials and collecting relevant data. Our National Office work focused on NORA. We interviewed NORA officials and collected data on the plans for and status of the new research approach. 
We discussed the officials’ views on lessons learned from past research and factors critical to the success of the new approach. Our fieldwork focused on visits to all 31 DORAs to monitor implementation of IRS’ new approach. To ensure consistent data collection, we conducted 293 structured interviews. The interviewees included 31 District Directors; 31 DORA Chiefs; 92 Chiefs of Examination, Collection, or Taxpayer Service; and 139 DORA staff (about 80 percent of the staff at the time of our visits). Our interviews solicited information on the lessons learned and critical success factors as well as on the status of the new approach. We obtained information on all DORA staff, such as positions and education (see app. V), and on Compliance 2000 research projects (see app. I). After we finished our DORA visits in September 1995, events occurred that could affect the new research approach. We conducted structured follow-up interviews with NORA officials and the 31 DORA Chiefs to determine the real and potential effects of these events, including IRS budget cuts and postponement of the Taxpayer Compliance Measurement Program (TCMP). We did our work in Washington, D.C., and the 31 DORAs from April 1995 to January 1996 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from you or your designee. On April 22, 1996, we obtained comments from responsible officials in IRS’ Compliance Research Division. Their comments are discussed on pages 18 and 19. IRS viewed its Compliance 2000 strategy as a way to advance voluntary compliance. The strategy differed from the traditional enforcement approach by recognizing that nonenforcement approaches, such as education and assistance, can boost compliance. To determine when it was best to use each approach, IRS sought to uncover root causes for noncompliance and distinguish between compliant and noncompliant taxpayers across market segments. Compliance 2000 did not work for various reasons. 
In 1992 testimony, we reported that IRS lacked the necessary compliance data and infrastructure to do research by market segments. We found that IRS had not tracked whether its districts started research projects on the basis of objective compliance data or researched the most noncompliant market segments. We concluded that Compliance 2000 was a worthy idea that needed careful implementation. We stated that IRS needed to use objective data to select research projects and develop an infrastructure for planning, managing, and monitoring the projects. An IRS Internal Audit report in December 1993 had similar findings. The report disclosed that 38 of 50 Compliance 2000 projects were traditional enforcement projects that the districts had renamed as Compliance 2000 projects. And IRS had no database to capture results or provide an inventory of the compliance issues covered. The report concluded that the projects did not represent rigorous research, the managerial controls were weak, and a management structure was needed to provide effective oversight. NORA officials acknowledged such problems and indicated that very few Compliance 2000 projects could be viewed as viable research. Even in the few projects that NORA officials viewed as viable, IRS had not created a database to show whether compliance increased and, if so, what actions prompted those increases. We sought to further confirm these problems by collecting data on Compliance 2000 projects as we visited the 31 DORAs. We confirmed that IRS did not track the methods and results of the projects. As shown in appendix I, we found that many projects were duplicated. Available records were insufficient for us to compare costs and benefits across the projects. We found that IRS learned many lessons about research from Compliance 2000. 
According to NORA officials, the major lesson was that IRS needed a totally new organization and approach because the decentralized approach under Compliance 2000 did not produce viable research. Our interviews at NORA and the 31 DORAs indicated that such lessons governed the design of the new approach, particularly those that IRS officials pointed to as factors critical to the success of this approach. These factors include the need for (1) support for the research across IRS, (2) objective compliance data that are readily accessible, (3) skilled staff, (4) a sound infrastructure to organize and manage the research, and (5) measures to evaluate how well the new approach is working. The next section discusses the potential of IRS’ new approach in the context of these five factors. IRS holds high hopes for its new compliance research approach in integrating the Compliance 2000 philosophy with efforts to boost compliance. To act on this potential, IRS has taken steps built on lessons learned from past efforts. While directives are important to set the vision, building support relies on collaboration. In this vein, NORA has developed a cooperative strategy to communicate the research vision, needs, and results as well as generate feedback on the needs of IRS districts and functions. Given such feedback, NORA plans to create a special unit to meet the needs for research on ways to better select and handle workload. NORA is also encouraging DORAs to provide short-term research assistance to districts and functions (e.g., electronic filing and earned income credit). Ultimately, NORA knows that the new approach will have to prove its worth to build the necessary staff support. IRS’ new approach depends heavily on CRIS. As envisioned, CRIS is to be IRS’ network of databases for identifying the nationwide and district compliance of market segments. IRS is implementing CRIS in the following three stages. 
Working File CRIS was used in fiscal years 1994 and 1995 for training DORA staff. It had 75 data elements limited to one market segment. Interim CRIS was delivered to all DORAs by fall 1995. It expanded to 800 data elements and samples of individual and business filers for all market segments. Final CRIS is slated to implement its first database, with over 2,500 data elements on a sample of 7 to 10 million individual filers, in fiscal year 1997. It is to interface with other systems being created to aid in storing data and assigning workload. It is expected to contain 3 years of data. On completion, CRIS is to have 10 databases, each with thousands of data elements. CRIS has been funded for $7 million to develop and maintain these databases over the next 5 fiscal years. If CRIS works as envisioned, IRS would have an integrated network of recent compliance data, and IRS research staff could quickly profile compliance by market segment. IRS expects CRIS to provide data on taxpayer compliance in (1) filing required tax returns in a timely manner, (2) accurately reporting information on tax returns, and (3) fully and timely paying taxes owed. Also, realizing that IRS data contain taxpayer and IRS errors, NORA has developed data validation standards. NORA officials believed that these standards will better ensure that the research stems from adequate data. Past research efforts highlighted the need for staff who had research skills. Toward that end, NORA devised a staffing plan that requires certain positions at each DORA, such as a chief, program analysts, and other generalists. NORA also created specialist positions that require skills in statistics, operations research, economics, and computers. Recognizing limitations in having such staff in the field and restrictions on external hiring given the redeployment agreement, NORA encouraged DORA chiefs to fill positions with the most qualified staff available.
NORA expected the number of staff to initially total about six to eight per DORA and grow as workload dictated. NORA also devised a plan to train all DORA staff in research methods. Phase I training, which began in early 1995, described NORA, DORA, CRIS, profiling, statistics, and research methods. Phase II training includes advanced methods in statistics, research, and market segmentation. NORA is also offering customized training to meet the needs of DORA staff. IRS has laid the framework for the infrastructure it believes is needed to manage the new research approach. This framework includes NORA, DORAs, a research plan, and research methods. IRS has plans for other mechanisms to manage the research. NORA and DORA officials said research in the field has often suffered because research knowledge resided in the National Office, but knowledge about compliance and enforcement resided with district staff who usually lacked research skills. These officials said districts lacked commitment to do the research and use its results. NORA officials viewed the NORA/DORA framework as a way to correct these problems. Furthermore, IRS districts are forming Compliance Planning Councils (CPC) at the DORA level to build district support for compliance research, oversee district compliance programs, and provide a conduit to the three district functions. In summary, CPCs are to provide a multifunctional perspective in reviewing district compliance workload. At a minimum, each CPC is to consist of the DORA Chief and chiefs of the three functions. IRS is also forming nine Cooperative Strategy Working Groups (CSWG) to help with oversight, coordination, and implementation of the new approach. CSWGs are to make many of the decisions about compliance research, with the concurrence of the national director for compliance research. NORA is developing an annual research plan and a compliance research cycle. 
If implemented properly, both elements should create a common understanding of the research vision and enhance the quality of the research. The research plan prioritizes compliance issues and research efforts. The plan allocates resources across DORAs to meet expectations, within set time frames, on (1) establishing the new research approach, (2) helping IRS districts and functions to meet their compliance and enforcement needs, and (3) reducing the tax gap and improving compliance. The research cycle outlines the steps for all projects, as shown in figure 1. As shown above, the latter steps of the cycle produce research results that form the basis for establishing compliance workloads, as set in the compliance plan. Appendix II provides details on the research cycle and compliance plan. For fiscal year 1995, IRS measured the success of the new approach against expectations set forth in IRS’ Business Master Plan; those expectations focused on establishing all DORAs. For fiscal year 1996, success is to be measured against expectations set forth in the research plan. NORA officials acknowledged the need for more specific measures of success. Combining all five factors, IRS’ new compliance research approach offers potential for improving compliance. If implemented successfully, it also may enhance the effectiveness of the tax system. Rigorous research could help ensure that tax laws, regulations, and guidance are clear; taxpayers receive necessary assistance; paid preparers encourage compliance; and enforcement is cost effective. Integrating the research with ongoing programs could help meet these basic requirements, to the extent the research helps increase compliance and reduce taxpayer burden. In doing so, the research would co-exist rather than compete with these programs. Some obstacles have slowed implementation of IRS’ new compliance research approach. 
IRS has taken actions to overcome the obstacles but faces critical challenges in incorporating the success factors. NORA and DORA officials raised concerns about support for compliance research because of three types of tensions. Our 293 interviews with District Directors, CPC members representing the three functions, and DORA staff illustrated these concerns as well as mixed support for the new approach. For example, about 63 percent of those we interviewed believed that this approach will reduce the tax gap, and nearly 70 percent of those with knowledge of previous attempts believed that it will be more cost effective. However, only 38 percent of those we interviewed believed that the approach will significantly contribute to meeting IRS’ 90-percent compliance goal by 2001. When asked why, most of these officials viewed this goal as too challenging and the time period as too short. Proponents of the new approach favored its systemic and objective nature. They viewed national research on market segments, by reaching more noncompliant taxpayers, as the way to significantly improve compliance. Opponents, believing that major compliance problems are well known, favored shifting the research resources into efforts involving tax simplification and legislative changes, such as tax withholdings and income reporting. NORA officials noted that compliance research offers the best way to identify and justify such efforts. The first tension dealt with changing the IRS culture. IRS has focused on maximizing revenue yield through enforcement instead of voluntary compliance through enforcement and nonenforcement efforts. Our DORA work showed that the three functions largely expected the research to aim at this traditional focus. Given concerns that it will not, only 34 percent of the Chiefs of Examination, Collection, and Taxpayer Service we interviewed at the 31 DORAs considered DORA to be a good investment of resources.
NORA officials believed that these responses did not reflect the broad, multifunctional view needed to increase compliance. These differing views reflect, at a minimum, the tension over the new approach. A second tension involved pressures to quickly produce high-profile results. We heard this concern during interviews at all 31 DORAs. Interviewees doubted whether IRS would give the approach time to prove itself. They said IRS often expects results right away, but compliance research is unlikely to produce immediate benefits. The third tension dealt with directing 85 percent of the DORA work to national compliance issues, leaving the remainder to the discretion of the district. District officials, who believed that many compliance issues have a local flavor, generally wanted more control. NORA officials, as well as DORA Chiefs, saw a national focus as the way to help improve compliance and reduce the tax gap. NORA officials recognize the seriousness of these and other tensions that undercut support for the new approach. NORA officials have planned various efforts to educate and inform IRS management and staff at all levels on the new approach as well as to advance its cooperative strategy. IRS decided to open DORAs a few years before CRIS was finished to allow DORAs to become fully staffed and equipped, as well as to participate in the development of CRIS and learn about IRS data. While IRS has made progress, questions remain on whether CRIS will be completed soon enough to contribute to research on improving compliance by 2001. At the time of our DORA visits, only 19 percent of the DORA staff viewed the available data, which were from Working File CRIS, as sufficient to do their jobs. DORA staff complained that the data were outdated, inaccurate, and lacked compliance measures. After our visits, DORA staff began using data from Interim CRIS. The DORA Chiefs we interviewed during our follow-up work viewed the Interim CRIS as a far better system. 
However, only 39 percent of them thought the data were sufficient to do the work required at DORA. Among other things, they noted that Interim CRIS lacked historical data, compliance indicators, and data on enforcement actions against filed tax returns and against nonfilers. NORA officials acknowledged these problems but had viewed these earlier phases as training for DORA staff. They believed that the staff had sufficient data for such training and the assigned work. The officials said that Final CRIS and the data validation standards will address these problems and add discipline so that the staff only does the work made possible by the data. Even so, Final CRIS is developing more slowly than expected. NORA officials remain optimistic that its first database, involving individual filers, will be operational by fiscal year 1997. As for the other nine databases, such as for partnerships and corporations, IRS was not sure when they would be fully operational and how many can produce research results by 2001 on improving compliance. Furthermore, the postponement of TCMP heightens the need for finding other ways to measure reporting accuracy on filed returns. NORA officials told us that they viewed TCMP as a crucial part of CRIS because TCMP had been a proven way to measure reporting accuracy. Over three-fourths of the $94 billion tax gap for individuals in 1992 arose from noncompliance in reporting rather than in filing or in paying. IRS officials said that, without the randomness and comprehensiveness of TCMP, they doubted whether IRS will have a precise way to measure reporting compliance nationally or at the DORA level, or whether IRS will have a basis for identifying emerging noncompliance among market segments or tax issues. CRIS is using TCMP results for 1988, but these results will lose more of their usefulness with each passing year. NORA officials said they are not sure how they will measure reporting compliance.
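The tax gap figures above imply a rough lower bound on the reporting portion of the individual tax gap. The back-of-the-envelope calculation below is ours, using only figures stated in this report:

```python
# "Over three-fourths of the $94 billion tax gap for individuals in 1992
# arose from noncompliance in reporting."
tax_gap_1992 = 94.0      # billions of dollars, individual taxpayers
reporting_share = 0.75   # lower bound implied by "over three-fourths"

reporting_gap_floor = tax_gap_1992 * reporting_share
print(f"at least ${reporting_gap_floor:.1f} billion")  # at least $70.5 billion
```

This magnitude is why losing TCMP's measure of reporting accuracy matters more than losing measures of filing or payment compliance.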
At the time of our visits, the 31 DORAs had 217 staff, ranging from 4 to 12 staff at each DORA. Of those we interviewed, 85 percent of the District Directors and DORA Chiefs were satisfied with the number of staff, but 94 percent of the chiefs and 74 percent of the directors thought staffing should increase in the future; the rest were uncertain. Given IRS’ budget cut, DORA Chiefs expressed concern that staffing would not increase in fiscal year 1996 as anticipated. Figure 2 shows the number and types of positions for all 31 DORAs, excluding the 31 DORA Chiefs. The category of “other” includes Assistant DORA Chief, Diversity Coordinator, Fed-State Coordinator, Magnetic Media Specialist, Public Affairs Specialist, and Acting Team Leader. We analyzed the distribution of positions across the 31 DORAs. Our analysis showed that 21 DORAs lacked 1 or more of the required specialist positions involving economics, statistics, computer, and operations research skills. For example, Seattle had two program analysts, one operations research analyst, and one economist, while Los Angeles had five program analysts, two operations research analysts, and one Fed-State coordinator. Neither site had a statistician or a computer research analyst. Although over half of the DORA positions involved specialist skills, DORAs had difficulty finding such staff. Only 58 percent of the DORA Chiefs said their staff had the requisite background and skills; they pointed to gaps in skills such as statistics, economics, and operations research. Our analysis showed that 37 percent of the staff we interviewed had some research experience, and 5 percent had spent most of their career in a research capacity. Of the college and graduate degrees held by DORA staff, about half were in business or liberal arts; less than 30 percent related to specialist positions.
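The staffing totals above are consistent with NORA's expectation of about six to eight staff per DORA; a quick arithmetic check (our own, from the reported figures):

```python
# DORA staffing at the time of the visits, as reported.
total_staff = 217
num_doras = 31
low, high = 4, 12            # reported range of staff at each DORA

average = total_staff / num_doras
print(average)               # 7.0 staff per DORA on average
assert low <= average <= high
assert 6 <= average <= 8     # within NORA's expected initial range per DORA
```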
Both NORA and DORA officials we interviewed pointed to the IRS redeployment agreement and limits on hiring staff from outside of IRS as barriers to getting the most qualified staff for doing research. Over 50 percent of the DORA staff were hired as redeployment eligible. Sixty-five percent of the District Directors and DORA Chiefs said the redeployment agreement limited their ability to staff DORAs with the most qualified employees. Because many DORA staff do not have the research skills needed, NORA is working on ways to share specialized skills across the research projects and DORAs. Plans call for identifying the necessary specialist skills before starting a project and finding specialists from NORA or the DORAs who can work on the project when needed. NORA officials said a project will not start if needed specialists cannot be found. NORA officials agreed that DORA staff could benefit from more specialized skills, but they were pleased with the staff overall and their ability to learn. Given these views, NORA has executed what it views as an aggressive training plan. As for phase I of the training, 94 percent of the DORA Chiefs viewed it as at least generally adequate; 83 percent of DORA staff agreed. Staff who thought the training could be improved wanted more training in statistics, data analysis, economics, research design and methodology, and computers. A NORA training survey of DORA staff also identified similar training needs. NORA officials said training in these areas is being developed. NORA has developed a plan for phase II training and a budget of $3.5 million. NORA has planned internal computer courses and external courses on topics such as research methods and use of research. NORA officials said that the training budget had been fully allocated as of March 1996, but that funding had not been obligated. If the funding is not received soon, projects may be delayed. 
IRS has not completed the infrastructure for planning and managing the research, although progress has been made. For example, because of other priorities, NORA had not started until November 1995 to develop linkages with programs in the functions that used market segments. Until the linkages are developed, unnecessary duplication may occur and opportunities to improve these programs may be missed. NORA officials said linkages will be made when functions ask for profiles and research by market segment as well as through the compliance plan. Objective criteria for selecting research projects had not been fully established. Without such criteria, NORA cannot ensure that staff research the major areas of noncompliance. Our interviews and review of the research plan showed that many projects arose from districts’ or functions’ beliefs about the major areas of noncompliance. Other projects were selected with more objective data from TCMP, the tax gap, or other studies; however, such data reflected compliance in the 1980s. NORA officials acknowledged a desire for more objective and recent data in selecting projects but believed that enough of the initial projects dealt with known compliance problems to avoid wasted efforts. The officials said CSWGs are responsible for establishing criteria for selecting and ranking projects and are working with five DORAs on such criteria. CPCs also were not fully developed. As of December 1995, districts had established 28 of the 31 CPCs; most CPCs had only met a few times, largely to get organized. Although 55 percent of the CPC members we interviewed said CPCs were at least generally effective, 21 percent said they were not, and 24 percent thought it was too soon to tell. CPCs included members who managed the three district functions. If developed, CPCs could help link compliance research to the needs of the district functions. In March 1996, NORA implemented a system to track the status and results of research projects.
NORA relies on DORA staff to input substantial data about the projects and research into the system. However, controls over accurate and complete data entry have not yet been fully developed. IRS has not developed specific measures for evaluating the success of the new research approach. Of the 62 District Directors and DORA Chiefs we interviewed, 73 percent cited a need for better measures. Most of these interviewees suggested measuring impacts of the research on compliance, particularly by market segment or district. NORA and DORA officials believed that success will be based, in part, on the support and demand for research from the three functions. Two CSWGs were working on ways to measure success, including (1) a peer review system and (2) a quality review of the research process and its results. NORA expects them to be finished during the spring of 1996. Without good measures, IRS will not be able to objectively evaluate its new approach. IRS faces the challenge of developing valid measures that will be meaningful to customers inside and outside of IRS. IRS’ goal to increase total compliance with the tax laws to 90 percent by 2001 is a worthy one. IRS estimates have shown that decades of attempting to improve compliance through enforcement failed to raise total compliance above about 87 percent. IRS’ new approach of supplementing its enforcement efforts with rigorous research into the causes of noncompliance strikes us as being intuitively logical. On the basis of lessons learned from the past, IRS officials believe, and we agree, that among the factors needed to better ensure success of the new approach, at least five stand out in terms of relative importance: (1) support for the research throughout IRS, (2) objective compliance data that are readily accessible for research, (3) skilled staff capable of doing rigorous research, (4) an infrastructure for organizing and managing the research, and (5) measures to evaluate whether the new approach works.
We identified several issues that IRS needs to address in terms of these five critical success factors. The mixed support we found for the new research approach has caused tensions within IRS that could have an adverse impact on potential success. The fact that IRS might not have objective data available when needed for the research effort may make it difficult to produce useful research results in a timely manner. Furthermore, unless specialized staff are available when and where needed, the research effort could also be hampered. Finally, IRS has not yet fully developed the infrastructure needed to plan and manage the research, nor does it have measures to use in evaluating the success of the new approach. IRS has taken or planned some actions to address these issues. It has developed mechanisms designed to build support for the new approach. Working with existing resources in the face of budget constraints, IRS has developed training and staff-sharing programs to help address specialized staffing needs. IRS is also working to (1) enhance the infrastructure by tracking projects and linking research and compliance programs and (2) develop measures for evaluating the success of the new approach. Effectively addressing each of these issues should enhance IRS’ potential for success. Thus, it is important that IRS monitor its progress in addressing these issues and position itself to take corrective action if and when needed. 
We recommend that the IRS Commissioner develop an approach for monitoring the effectiveness of mechanisms established to build support for the new approach as well as for the staff-sharing and training efforts that are under way and, if necessary, make modifications; devise a method to better ensure that reliable compliance data will be available when needed for the research effort, given the indefinite postponement of TCMP; set a schedule for completing CRIS, monitor its progress, and take the necessary actions to resolve identified problems; and establish milestones and monitoring mechanisms for (1) completing the infrastructure needed to organize and manage the research effort and (2) developing the measures needed for evaluating success. We obtained oral comments on a draft of this report from senior IRS officials in a meeting on April 22, 1996. IRS officials included the National Director for Compliance Research, the Chief of National Office Research and Analysis, and a representative from IRS’ Office of Legislative Affairs. In general, these officials agreed that the report accurately reflects the key issues in IRS’ new compliance research and analysis approach. They further agreed with our conclusions and recommendations and noted the following actions were being planned or taken on each of our four recommendations. First, in developing an approach for monitoring mechanisms for building support and efforts in staff sharing and training, the IRS officials said they will be monitoring all such mechanisms and efforts, particularly use of the cooperative strategy and other outreach efforts about the new approach. Second, in devising a method to provide reliable compliance data, these officials acknowledged the problems with losing the comprehensive, top-down measures of TCMP but said IRS has sufficient compliance data in the short term for the research work to continue. 
Third, these officials said action is already being taken to set a schedule for completing CRIS, monitoring its progress, and resolving related problems. Recently, IRS has required all computer systems under development, including CRIS, to have established milestones and a completion schedule that will be monitored internally. Fourth, in establishing milestones and monitoring the completion of the infrastructure as well as of the measures, the IRS officials said the fiscal year 1997 research plan will provide the means for doing these activities. They said IRS’ new system for tracking the status and results of research projects is expected to be operational by June 1996, and measures for evaluating the success of the new research approach are being developed. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on the recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this letter. A written statement also must be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this letter. Copies of this report are being sent to interested congressional committees, the Director of the Office of Management and Budget, the Secretary of the Treasury, and other interested parties. It will also be made available to others upon request. Major contributors to this report are listed in appendix VI. Please contact me on (202) 512-9044 if you or your staff have any questions about this report. This appendix contains a summary of the results from our data collection on Compliance 2000 Projects. We provided a data collection instrument to responsible officials at 31 District Offices of Research and Analysis (DORA) for completion during the summer of 1995. 
(DORAs are not an outgrowth of Compliance 2000, which operated under IRS’ old 63-district configuration. However, we chose to collect data from the DORAs because many of them had received data from Compliance 2000 offices after those offices were closed.) Of the 31 DORAs, 28 reported information on 133 Compliance 2000 projects. We found that important information was unavailable for most of the projects. For example, none of the projects reported whether compliance improved or not. Of 133 reported projects, 70 reported no information on either the results or the resources spent. Of the 63 for which such information was available, 35 provided information on the resources, 37 provided information on enforcement results (e.g., dollars assessed and returns obtained), and 24 provided information on nonenforcement activities (e.g., number of seminars held and publications issued). Table I.1 provides this information by DORA. Furthermore, we noticed that many projects dealt with the same general topic, such as compliance in filing information returns on miscellaneous income, nonfilers, and tax-exempt organizations. Table I.2 shows that of the 133 projects, 72 duplicated at least one other project. This appendix contains information on various aspects of the new compliance research approach. It provides details on the research infrastructure needed to sustain the new approach. As envisioned, the National Office of Research and Analysis (NORA) and the District Offices of Research and Analysis (DORA) will collaborate to conduct new activities that form a disciplined research cycle—the Compliance Research and Planning Cycle. This cycle encompasses 10 steps in addressing a compliance problem. Table II.1 describes each of these steps. Measure levels of compliance across market segments in (1) filing timeliness, (2) reporting accuracy, and (3) paying taxes owed in full and on time. Identify and rank market segments with significant compliance problems.
Profile market segments to identify patterns of noncompliance, validate their selection, and enrich the understanding of the common characteristics that distinguish a given segment from other segments. Identify potential treatments to improve compliance after determining and understanding the causes of noncompliance. Test treatments to determine if they have produced significant, measurable improvements in a market segment’s compliance level over an original baseline. Remeasure compliance levels and evaluate whether the applied treatments were effective in improving the compliance of the market segment. Generate the compliance plan to drive all compliance-related workload for IRS. Allocate compliance resources to match needs with staff and other resources at national and district levels. Plan compliance workload to match staff (grade and skill levels) to the scheduled work. Select compliance workload by identifying cases, accounts, or groups of taxpayers to work in a way that will meet plan objectives within the district work plan schedules and resources. NORA is responsible for supporting, guiding, and coordinating work at DORAs. The first priority of NORA was to establish DORAs and ensure that they were staffed, equipped, and operational. NORA also is responsible for evaluating the overall research approach and its components.
Specifically, NORA is to (1) work with all levels and functions in a consulting role to support market research activities, (2) assist National Office and field executives in institutionalizing Compliance 2000, (3) provide compliance data necessary to develop a multiyear strategic compliance plan, (4) develop new case selection criteria that are based on market research, (5) supply data to the national portion of the compliance plan, (6) propose national initiatives to improve compliance in selected market segments, (7) advise and issue progress reports to the Director of Research and Chief Compliance Officer, (8) review DORAs’ work to ensure that national program objectives are met, (9) ensure that DORAs provide quality service, (10) develop methods for measuring the compliance of various market segments, (11) ensure consistent and frequent communication and feedback with internal and external stakeholders, (12) ensure that DORA training needs are identified and met, and (13) provide guidance and control to DORAs in handling external data. The primary function of each DORA site is delivery of a local-level compliance research capability using local knowledge and resources. DORA staff are to be primarily responsible for providing information, guidance, and counsel to the district offices on methodologies and strategies that address areas of noncompliance, given resource allocation constraints, and compliance plan objectives. As DORA staff learn to do compliance research, they are expected, in the short term, to (1) learn proper research procedures and processes, such as techniques, methodologies and data analysis, data sources, security, and privacy issues; (2) research and evaluate local external data sources; (3) begin assessing the potential for additional market segments and estimating the nonfiler population; (4) learn elements and practice proper usage of internal and external data; and (5) provide data and measurements for past Compliance 2000 projects. 
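For reference, the 10 steps of the Compliance Research and Planning Cycle described in Table II.1 can be condensed into a checklist. The list form below is our own, and the step wording is abbreviated from the report:

```python
# Abbreviated summary of the 10-step Compliance Research and Planning
# Cycle from Table II.1 (wording paraphrased from the report).
RESEARCH_CYCLE_STEPS = [
    "Measure compliance levels across market segments (filing, reporting, paying)",
    "Identify and rank market segments with significant compliance problems",
    "Profile market segments to identify patterns of noncompliance",
    "Identify potential treatments after determining causes of noncompliance",
    "Test treatments against an original baseline compliance level",
    "Remeasure compliance and evaluate treatment effectiveness",
    "Generate the compliance plan to drive compliance-related workload",
    "Allocate compliance resources at national and district levels",
    "Plan compliance workload to match staff grades and skills to scheduled work",
    "Select compliance workload to meet plan objectives",
]

for number, step in enumerate(RESEARCH_CYCLE_STEPS, start=1):
    print(f"{number:2d}. {step}")
```

The first six steps are the research portion of the cycle; the last four translate research results into the compliance plan and district workload, as described in appendix II.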
The Cooperative Strategy Working Groups (CSWG) were established to design, plan, and implement decisions that help maintain the vitality of the new research approach. NORA and DORAs provide the members. Each group is to have a statement that describes its responsibilities, composition, and schedule. The groups are expected to develop guidelines to ensure the effectiveness of their work; in the interim, NORA developed guidelines to help get the CSWGs started. CSWGs are to be implemented in three stages: (1) “First-Wave,” by the beginning of fiscal year 1995; (2) “Second-Wave,” by the end of fiscal year 1995; and (3) “Longer Term.” Each stage represents a series of working groups. The First-Wave stage consisted of the Policy and Governance, Data Development and Planning, Education and Training, Profiling, and Communications cooperative strategy working groups. The Second-Wave stage consisted of the Compliance Studies and Tests, NORA/DORA Research Planning, and Systems Development cooperative strategy working groups. The Longer Term stage consists of the Resources Cooperative Strategy Working Group. The following describes each of these groups.

Policy and Governance Cooperative Strategy Working Group: (1) identifies compliance research issues; (2) determines procedural requirements for NORA and DORAs; and (3) formulates and recommends policies and procedures to address those issues and requirements.

Data Development and Planning Cooperative Strategy Working Group: (1) exercises oversight and operational roles in the design, development, acquisition, use, maintenance, and evaluation of internal and external data and (2) measures the support of compliance research operations.

Education and Training Cooperative Strategy Working Group: carries out the oversight, development, and operation of internal and external training provided to NORA and DORA staff.

Profiling Cooperative Strategy Working Group: (1) oversees market segmentation and profiling operations and (2) formulates and recommends profiling standard procedures and the design and testing of compliance measures.

Communications Cooperative Strategy Working Group: oversees, develops, and maintains mechanisms and the media for communications on compliance research.

Compliance Studies and Tests Cooperative Strategy Working Group: (1) oversees compliance studies and tests and (2) recommends compliance research standards for conducting, analyzing, and reporting compliance studies and treatment tests.

NORA/DORA Research Planning Cooperative Strategy Working Group: (1) develops and provides input into the compliance plan and (2) reviews other IRS plans.

Systems Development Cooperative Strategy Working Group: (1) oversees the design, development, implementation, and evaluation of the technology used in compliance research and (2) addresses issues regarding the hardware, software, and telecommunications surrounding compliance research.

Resources Cooperative Strategy Working Group: (1) determines staffing and financial resources requirements for all compliance research and (2) ensures that resources are allocated according to the compliance plan.

The Compliance Planning Council (CPC) is to be responsible for multifunctional integration, planning, and coordination of compliance activities within the District. Compliance activities are expected to focus on research, identification of market segments, and development of strategies to deal with noncompliant behavior. 
Specific activities of CPCs may include advising the District Director and assisting in the identification and prioritization of the DORA workload, approving and allocating resources to compliance treatment plans and other multifunctional compliance initiatives, monitoring ongoing progress of projects and initiatives, and ensuring consistent and frequent communication and feedback with internal and external stakeholders. CPC membership may consist of the (1) Chief of Examination, (2) Chief of Collection, (3) Chief of Taxpayer Service, (4) Chief of Criminal Investigation, (5) Chief of DORA, (6) Chief of Information Systems Division, (7) Disclosure Officer, (8) President of the National Treasury Employees Union, (9) Problem Resolution Officer, (10) District Counsel, (11) Appeals, and (12) Employee Plans/Equal Employment Opportunity. The research plan is to apply NORA and DORA staff resources to national workload during fiscal year 1996 and beyond. Resources are to be used efficiently to avoid unnecessary duplication of effort. The plan is to link NORA/DORA work to IRS’ fiscal year 1996 Business Master Plan and to the major components of the tax gap. The research plan is to lay out research projects that can have a national impact on compliance and to assign the projects to one or more DORAs. It is to cover fiscal years 1996 through 1998 and be flexible enough to accommodate new opportunities and new research findings to redirect national efforts. The compliance plan is to set forth all compliance-related workload for IRS. The scope and duration of the activities it mandates are likely to occupy several years. The compliance plan is expected to comprise both enforcement and nonenforcement activities. For this reason, it is expected to mandate actions both for functions within the Chief Compliance Officer organization and for functions within Customer Service organizations. 
When the national component of the compliance plan includes activities that transcend Chief Officer organizational boundaries, it is to be issued jointly by the Chief Officers concerned. Once officially issued, the compliance plan is to become the basis for final resource allocations, functional workplans, and workload selections.

This appendix contains additional information on the final Compliance Research Information System (CRIS) database. It provides more details on the CRIS infrastructure and the types and sources of data required. As envisioned, CRIS will be the primary integrated research tool used for compliance research and analysis. Plans call for CRIS to be an integrated network of 10 databases containing a sample of internal, external, and multiyear data, which is to be accessible to national and district office personnel to support analyses of voluntary compliance rates and levels. CRIS is expected to enable IRS to develop working hypotheses on the means to increase voluntary compliance, test the hypotheses, evaluate the results, and make decisions on how to implement the new strategies. IRS also envisions that CRIS will improve both the quantity and quality of data and support more sophisticated analysis. The vast majority of CRIS data is expected to come from statistically reliable samples drawn from the following IRS data sources: (1) the individual master file and returns transaction file, (2) the business master file and returns transaction file, (3) various other internal master files, (4) results data from the Taxpayer Compliance Measurement Program, and (5) various other taxpayer surveys and studies. The only external data planned for CRIS are census data. However, external data may be used for follow-on research after noncompliant market segments are identified by the objective application of CRIS measures to internal IRS sample data. External data sources will not be appended on a taxpayer-by-taxpayer basis to internal CRIS data. 
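The report describes CRIS as built from statistically reliable samples of taxpayer accounts rather than whole-population files. A minimal sketch of the underlying idea, stratified random sampling with different rates per stratum, is shown below; the strata names, population sizes, and sampling rates are invented for illustration and are not IRS's actual design.

```python
import random

# Hypothetical strata; income bands, populations, and rates are
# illustrative only, not drawn from the CRIS design documents.
STRATA = {
    "under_25k":   {"population": 60_000, "rate": 0.001},
    "25k_to_100k": {"population": 90_000, "rate": 0.002},
    "over_100k":   {"population": 30_000, "rate": 0.005},
}

def draw_stratified_sample(strata, seed=42):
    """Draw an independent simple random sample from each stratum.

    Returns {stratum_name: list of sampled account indices}. Rates
    differ by stratum so smaller, higher-interest strata can be
    oversampled -- the basic point of a stratified design.
    """
    rng = random.Random(seed)
    sample = {}
    for name, spec in strata.items():
        n = round(spec["population"] * spec["rate"])
        # Integer indices stand in for anonymized account records;
        # CRIS is described as carrying no taxpayer identifiers.
        sample[name] = rng.sample(range(spec["population"]), n)
    return sample

sample = draw_stratified_sample(STRATA)
for name, ids in sample.items():
    print(name, len(ids))
```

Drawing each stratum independently lets analysts weight results back to the full population while keeping the sample small enough to refresh annually, as the CRIS update schedule described below envisions.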
CRIS is designed to be a sample with no taxpayer identifiers. All internal CRIS data are to be transmitted electronically or via magnetic tape. External data are to be provided to the CRIS system via magnetic tape. Validity and consistency checks will be performed on internal data before their input to CRIS. IRS also plans to validate data from external sources. As planned, most of the information in the CRIS system is to be updated once a year, although some data may need to be updated as often as every 3 months. Data from external sources are to be updated on an as-needed or as-available basis. Samples are to represent taxpayers from the current year and 2 previous years. To provide the data needed for specialized market segmentation, the CRIS system is to comprise 10 databases. IRS has developed the following 10 database models:

(1) Form 1040 Individual/Family Filers (income tax filers using Forms 1040, 1040A, and 1040EZ)
(2) Corporations
(3) Subchapter S Corporations (corporations that file under the subchapter S provision and distribute corporate income and losses to their shareholders)
(4) Partnerships
(5) 94X Employers (employers filing Forms 940, 941, 943, etc.)
(6) Fiduciary
(7) Individual Non-filer Case Leads (operational data)
(8) Industries
(9) Collection Research File (operational data)
(10) Audit Information Management System (operational data)

The only database that is currently being developed is the Form 1040 Individual/Family Filers database. It consists of a stratified random sample of the universe of individual taxpayer accounts for a specific tax period. The database includes general entity information and account information on the current and 2 prior years’ returns, as well as tax return line items for the current and 2 prior years. 
Related data include information return documents and, for Schedule C and F filers, data extracted from the business master and returns transaction files, the payer master file, the employee plans master file, and various other internal sources.

This appendix combines the results of five data collection instruments used to conduct structured interviews with District Directors; Chiefs of DORA, Examination, Collection, and Taxpayer Service; and DORA staff. In total, we interviewed 293 officials from April to December 1995. Some percentages may not equal 100 due to rounding.

This appendix contains the results from our District Offices of Research and Analysis (DORA) staffing data collection instrument that was provided to all 31 DORA Chiefs for completion during the summer of 1995. The Chiefs reported 217 staff onboard during our field visits. The average and median number of staff per site was 7, and staffing ranged from 4 to 12 people per site. The following tables provide more details about the DORA staff. Some percentages may not equal 100 due to rounding.

Computer Research Analyst and related computer positions: Not applicable.

Susan Malone, Evaluator

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. 
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO reviewed the Internal Revenue Service's (IRS) tax compliance research program, focusing on: (1) the success IRS has had with its new research approach; and (2) IRS' ability to implement lessons learned from its Compliance 2000 initiative. GAO found that: (1) IRS implemented its new compliance research approach to address concerns over taxpayer noncompliance and the large gap between income taxes owed and taxes paid; (2) IRS attempted to address these concerns through Compliance 2000, but had limited success due to inadequate compliance data; (3) IRS could avoid these mistakes by establishing more support for its research, using objective compliance data, acquiring more specialized staff, developing an organizational infrastructure, and setting objective measurements; (4) IRS officials believe that this new research approach is more cost-effective, but they doubt that they will reach 90-percent compliance by 2001; (5) IRS officials are concerned that district offices will spend 85 percent of their resources on national compliance issues rather than on district-level issues; (6) IRS has made some progress in developing the Compliance Research Information System (CRIS), but it is unsure when it will become available; (7) IRS officials believe that more training is needed in specialized areas to achieve research objectives; and (8) IRS is in the process of developing tools that track and measure the success of its research projects.
This section provides context for understanding the history and use of congressional direction for appropriated funds. It traces the development of authority for congressional direction of funds from the U.S. Constitution to the current focus on reducing the number and amount of earmarks in appropriations legislation. The Constitution gives Congress the power to levy taxes and raise revenue for the government, to finance government operations through appropriation of federal funds, and to prescribe the conditions governing the use of those appropriations. This power is generally referred to as the congressional “power of the purse.” The linchpin of congressional control over federal funds is found in article I, section 9, clause 7 of the Constitution, which provides that “No money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law.” Thus, no officer or employee of the government may draw money out of the Treasury to fund agency operations unless Congress has appropriated the money to the agency. At its most basic level, this means that it is up to Congress to decide whether to provide funds for a particular program or activity and to fix the level of that funding. It is also well established that Congress can, within constitutional limits, determine the terms and conditions under which an appropriation may be used. In other words, Congress can specify (or direct) in an appropriation the specific purposes for which the funds may be used, the length of time the funds may remain available for these uses, and the maximum amount an agency may spend on particular elements of a program. In this manner, Congress may use its appropriation power to accomplish policy objectives and to establish priorities among federal programs. It is then the obligation of the agencies under Presidential supervision to ensure that these policy objectives and priorities are faithfully executed. 
Historically, the term “earmark” has described legislative language that designates a specified amount of a larger appropriation as available only for a particular object. The term earmark derives from ancient England where English farmers would mark the ears of their swine, oxen, and other livestock to cull them from the village herd and demonstrate ownership. In common usage, however, the term earmark soon developed a broader meaning. There are many definitions of earmarks. For example, our Glossary of Terms Used in the Federal Budget Process defines earmarking as either of the following: 1. Dedicating collections by law for a specific purpose or program. Earmarked collections include trust fund receipts, special fund receipt accounts, intragovernmental receipts, and offsetting collections credited to appropriation accounts. These collections may be classified as budget receipts, proprietary receipts, or reimbursements to appropriations. 2. Designating any portion of a lump-sum amount for particular purposes by means of legislative language. Sometimes “earmarking” is colloquially used to characterize directions included in congressional committee reports but not in the legislation itself. “There is not a single definition of the term earmark accepted by all practitioners and observers of the appropriations process, nor is there a standard earmark practice across all 13 appropriation bills. According to Congressional Quarterly’s American Congressional Dictionary, under the broadest definition ‘virtually every appropriation is earmarked.’ In practice, however, earmarks are generally defined more narrowly, often reflecting procedures established over time that may differ from one appropriation bill to another. For one bill, an earmark may refer to a certain level of specificity within an account. 
For other bills, an earmark may refer to funds set aside within an account for individual projects, locations, or institutions (emphasis added).” In recent years there has been a significant amount of public discussion about the nature and number of earmarks, with exponential growth reported in the number and amounts. For example, researchers at the Brookings Institution, on the basis of data compiled by CRS, cited dramatic growth in earmarks between 1994 and fiscal year 2006. In fact, CRS data show increases in the number and amount for individual appropriation bills during that period. Any discussion of trends, however, is complicated by the fact that different definitions of the term earmarks exist and that the amounts reported vary depending on the definition used. Although CRS has totaled the number and amount of earmarked spending for each of the regular annual spending bills enacted since fiscal year 1994, CRS has cautioned that the data presented for the 13 appropriations cannot be combined into a governmentwide total because of the different definitions and methodologies that were used for each bill. These differing definitions would make any total invalid. Any definition of the term earmark requires a reference to two other terms in appropriations law—lump-sum appropriations and line-item appropriations. A lump-sum appropriation is one that is made to cover a number of programs, projects, or items. Our publication, Principles of Federal Appropriations Law (also known as the Red Book), notes that GAO’s appropriations case law defines earmarks as “actions where Congress . . . designates part of a more general lump-sum appropriation for a particular object, as either a maximum, a minimum, or both.” Today, Congress gives federal agencies flexibility and discretion to spend among many different programs, projects, and activities financed by one lump-sum appropriation. 
For example, in fiscal year 2007, Congress provided a lump-sum appropriation of $22,397,581,000 for all Army Operations and Maintenance expenses. Many smaller agencies receive only a single appropriation, usually termed Salaries and Expenses or Operating Expenses. All of the agency’s operations must be funded from this single appropriation. A line-item appropriation is generally considered to be an appropriation of a smaller scope, for specific programs, projects, and activities. In this sense, the difference between a lump-sum appropriation and a line-item appropriation is a relative concept hinging on the specificity of the appropriation. Also, unlike an earmark, a line item is typically separate from the larger appropriation. As noted above, in earlier times when the federal government was much smaller and federal programs were (or at least seemed) less complicated, line-item appropriations were more common. For example, among the items for which Congress appropriated funds for 1853 were separate appropriations to the Army, including: $203,180.83 for clothing, camp and garrison equipage, and horse equipment; $4,500 for fuel and quarters for officers serving on the coast survey; and $400,000 for construction and repair. By contrast, the first appropriations act, passed by the First Congress in 1789, read:

“Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled, That there be appropriated for the service of the present year, to be paid out of the monies which arise, either from the requisitions heretofore made upon the several states, or from the duties on impost and tonnage, the following sums, viz. 
A sum not exceeding two hundred and sixteen thousand dollars for defraying the expenses of the civil list, under the late and present government; a sum not exceeding one hundred and thirty-seven thousand dollars for defraying the expenses of the department of war; a sum not exceeding one hundred and ninety thousand dollars for discharging the warrants issued by the late board of treasury, and remaining unsatisfied; and a sum not exceeding ninety-six thousand dollars for paying the pensions to invalids.” From today’s perspective, some might say that this first appropriation contains several lump-sum appropriations. Others might say that these are line-item appropriations for (1) civil servants, (2) department of war, (3) treasury, and (4) pension payments. In any event, these are congressional directives instructing the executive branch on how funds are to be spent. As discussed earlier, this illustrates the definitional difficulties in this area. The second appropriation made by the First Congress for 1791 contained a congressional directive to spend “a sum not exceeding fifty thousand seven hundred and fifty-six dollars and fifty-three cents,” for several specific objects requested by Secretary of the Treasury Alexander Hamilton in his budget estimates, such as converting the Beacon of Georgia into a lighthouse and for the purchase of hydrometers. Today, congressional committees sometimes insert spending directives and restrictions on the use of appropriated funds in what is known as the legislative history of an act—that is, House, Senate, and conference reports accompanying a piece of legislation. As a matter of law, instructions in committee reports and other legislative history as to how funds should or are expected to be spent do not impose any legal requirements on federal agencies. Only directions that are specified in the law itself are legally binding. 
This does not mean agencies are free to ignore clearly expressed legislative history applicable to the use of appropriated funds. In a 1975 decision, we pointed out that agencies ignore such expressions of intent at the peril of strained relations with committees and that agencies have a practical obligation to abide by such expressions. This obligation, however, must be understood to fall short of a legal requirement giving rise to a legal infraction where there is a failure to carry out that obligation. In that decision, we pointed out that Congress has recognized that it is desirable to maintain executive flexibility to shift funds within a particular lump-sum appropriation account so that agencies can make necessary adjustments for unforeseen developments, changing requirements, and legislation enacted subsequent to appropriations. This is not to say that Congress does not expect that funds will be spent in accordance with budget estimates or in accordance with restrictions or directions detailed in committee reports. However, in order to preserve spending flexibility, it may choose not to impose these particular restrictions as a matter of law, but rather to leave it to the agencies to “keep faith” with Congress. “Congress may always circumscribe agency discretion to allocate resources by putting restrictions in the operative statutes (though not . . . just in the legislative history). And of course, we hardly need to note that an agency’s decision to ignore congressional expectations may expose it to grave political consequences.” There have been numerous calls in and out of Congress for earmark reform. Both Houses of Congress have taken steps to increase disclosure requirements. In January 2007, the President proposed “earmark reforms” in his State of the Union address. These included cutting the number and amount of earmarks by at least half. 
According to the Office of Management and Budget (OMB), in fiscal year 2005, there were 13,492 earmarks totaling $18,938,657,000 for appropriations accounts. OMB’s guidance to agencies defined earmarks as follows: “Earmarks are funds provided by the Congress for projects or programs where the congressional direction (in bill or report language) circumvents the merit-based or competitive allocation process, or specifies the location or recipient, or otherwise curtails the ability of the Administration to control critical aspects of the funds allocation process.” OMB asked agencies to provide earmark information encompassed in all enacted appropriations bills in fiscal year 2005 and in any congressional reports. The guidance to agencies also directed prioritization of data collection to focus first on appropriations bills, since legislative action on those bills typically begins in the spring. In addition, OMB directed agencies to plan on providing information on earmarks in authorizing and other bills that are identified based on consultation with OMB. OMB’s guidance to agencies excludes from its definition of earmarks funds requested in the President’s Budget. OMB posted these data on its Web site and also asked agencies to identify earmarks in fiscal year 2008 appropriations bills as they moved through the legislative process. This request for data asked the heads of departments and agencies to report to OMB the number and dollar value of earmarks in each account within 7 days after an appropriations bill is reported by the House or Senate Appropriations Committee or passes on the House or Senate floor.

The Department of Defense (DOD) is responsible for the military forces needed to deter war and protect the security of the United States. The major elements of these forces are the Army, Navy, Air Force, and Marine Corps. 
DOD includes the Office of the Secretary of Defense (OSD), the Chairman of the Joint Chiefs of Staff, three military departments, nine unified combatant commands, the DOD Inspector General, 15 defense agencies, and 7 DOD field activities. We focused on OSD’s Comptroller; the military services (Army, Navy, Marine Corps, and Air Force); two defense agencies, the Defense Information Systems Agency (DISA) and the Defense Threat Reduction Agency (DTRA); and one combatant command, the U.S. Special Operations Command (SOCOM). DOD has had a procedure in place for many years that identifies and categorizes all congressional directives—which it calls add-ons or items of congressional interest—for programs and projects contained in the bill language included in the appropriations conference report. DOD does not include items in defense authorization bills in its list of add-ons. According to DOD officials, DOD defines an add-on as an increase in funding levels in the bill language included in the appropriations conference report that was not originally requested in the President’s Budget submission. DOD follows defense appropriation bills to determine how to execute program directives. Six additional types of add-ons were to be excluded: (1) funding for the Global War on Terror; (2) funding for the National Guard and Reserve Equipment (97-0350) appropriations account for procuring equipment; (3) funding for military personnel; (4) funding for peer-reviewed Defense health programs; (5) policy decisions for which DOD submitted its budget request with the best estimate available at the time but for which Congress subsequently adjusted the budget request due to refined estimates provided to it; and (6) items being transferred to other accounts that result in a net zero change to DOD’s overall budget. DOD officials provided their rationale for excluding these types of add-ons for fiscal year 2005. 
According to DOD officials, the funding for the Global War on Terror is specific to providing support to the troops for ongoing combat operations and related activities. In fiscal year 2005, the Global War on Terror was funded primarily through supplemental appropriations rather than through the DOD base budget request. DOD officials stated that the National Guard and Reserve appropriations account to procure equipment (i.e., account 97-0350) was not an earmark because, although its funding was not requested in the President’s Budget, the funding was routinely provided directly by Congress to maximize readiness of the National Guard and Reserve. Congressional add-ons for military personnel appropriated for basic pay and benefits were excluded because these were routine, merit-based administrative costs. Peer-reviewed Defense health programs were not considered earmarks because they were funded based on merit that was determined by a panel of physicians. Policy decisions for which DOD submitted a budget but did not fully fund procurement of an item were excluded because they were based on a preliminary estimate that required additional funding and were not new items. DOD excluded funds that were transferred to other accounts because the funds needed to be aligned with the correct place in the budget before they could be obligated or expended. DOD officials stated that the list of exclusions is guidance for the components to use as they review the congressional add-ons to determine which funds should not be considered earmarks. Components prepared justifications for each add-on they believed should be excluded based on the exclusion criteria. In addition, officials stated that the criteria are evolving and that they are continuing to work with OMB to refine them. Before OMB’s 2007 guidance, DOD had an established process that it continues to use for identifying congressional directives contained in the bill language of the appropriations conference report. 
In addition, each component routinely monitors the congressional budget cycle and has its own staff (i.e., legislative liaisons and financial management staff) who work with congressional staff to determine, if necessary, the purposes and objectives of congressional directives. Legislative liaisons also are responsible for updating their leadership on the status of congressional directives during House and Senate Appropriations Committee markups, floor debates, and the final conference report. Under the procedure DOD has had in place for years, the OSD Comptroller identifies all congressional directives contained in the bill language from the appropriations conference report, which are categorized by budget accounts and components, and provides the relevant list to the appropriate component. In response to OMB’s 2007 guidance, DOD officials described an additional three-step process they used for identifying and categorizing fiscal year 2005 earmarks:

1. Components reviewed the list of congressional directives identified by the OSD Comptroller and applied the agreed-on exclusion criteria, then developed justifications for any congressional directives they identified as earmarks that met the criteria to be excluded, and then provided the revised list of directives and justifications back to the OSD Comptroller.

2. The OSD Comptroller and OMB jointly determined if any further adjustments needed to be made to the list based on their review of the justification provided by the components.

3. After the list was agreed on, an OSD official created the list that was uploaded to an OMB earmarks site for review. OMB approved the list for release to the public site.

Figure 1 describes DOD’s process for identifying and categorizing fiscal year 2005 congressional directives in response to OMB’s 2007 data collection effort. For fiscal year 2008, components are to report earmark data to OMB at each stage of the legislative process (House and Senate Appropriations Committee markups, floor debates, and the conference report) within 7 days. 
In addition, the DOD components will have access to the OMB database and will be required to enter the details about the earmarks, including recipient, location, and amount, as well as data on the execution status of their respective earmarks. OSD Comptroller officials said that they will be responsible for providing oversight of this process and will monitor the Web site to ensure that the components populate the database within the required time frames. DOD does not have a centralized tracking and reporting mechanism that shows to what extent funding has been obligated and expended in accordance with congressional directives. DOD component headquarters staff track the amount of funding provided to them for individual congressional directives. Program offices track the execution of funds for the specific programs covered by the directives but are not required to report the status to the components or to the OSD Comptroller’s office. The OSD Comptroller makes an allotment of funding for the congressional directives to the components, and this funding is tracked by the various components’ financial management systems rather than within a centralized system maintained by OSD. We identified the financial management systems for five of the six components that we interviewed. The sixth, SOCOM, at this time uses the department’s Programming, Budgeting, and Accounting System to facilitate the tracking of congressional directives. The systems described by the five components track all budget allotments and include unique codes or other features that identify funds designated for congressional directives for tracking purposes. The financial management systems used by the five components are as follows: The Army uses the Funds Control System to track funds allotted for various directives. The system issues a funding authorization document to the Army operating agencies responsible for implementing the directives. 
Army officials identified two steps within the process that allow operating agencies to track congressional directives. The remarks section of the funding authorization document includes a statement that identifies the item as a congressional directive, and resource managers give each item an execution code that further facilitates tracking of such directives. The Air Force's financial management system similarly tags congressional directives in the system. This process allows the system to produce reports on such directives for review by program managers, as needed. The Navy’s financial management system is the Program Budget Information System, which tracks congressional directives. These directives are tagged and then monitored during execution. The Washington Allotment Accounting System is the financial accounting system used by DISA that provides information on the funding execution of congressional directives. Funding is monitored at the program level by DISA’s Home Team. According to DISA officials, congressional directives are assigned a project code that is linked to the funding documents, such as contracting vehicles, and that code allows DISA to determine that funding for a directive has been spent. DTRA’s financial accounting system is the Centralized Accounts and Financial Resources Management System. According to DTRA officials, congressional directives are given a work unit code in the accounting system that provides the status of funds for these directives through execution. Furthermore, Navy and Air Force officials provided examples of initiatives intended to streamline the process for tracking the status of congressional directives. According to a Navy official, the Navy’s Enterprise Resource Planning System is part of its ongoing business transformation effort, which, among other improvements, is intended to enhance its capability to track congressional directives. Through this integrated system, the Navy plans to include a code that identifies congressional directives through its accounting system. 
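Across the component systems described above, the common mechanism is the same: each allotment record carries a code, and records for congressional directives get a distinguishing code so their execution status can be filtered and reported. A minimal sketch of that tagging idea follows; the code format ("CD-" prefix), program names, and dollar amounts are invented for illustration and do not come from any actual DOD system.

```python
# Hypothetical allotment records; every record carries a code, and
# directive-funded items use a distinguishing "CD-" prefix.
ALLOTMENTS = [
    {"code": "CD-0001", "program": "sensor upgrade",  "allotted": 5_000_000, "obligated": 4_200_000},
    {"code": "CD-0002", "program": "lab equipment",   "allotted": 1_500_000, "obligated":   300_000},
    {"code": "OPS-114", "program": "base operations", "allotted": 9_000_000, "obligated": 8_900_000},
]

def directive_status(allotments, directive_prefix="CD-"):
    """Summarize obligation status for directive-coded allotments only.

    Filtering on the code prefix is what lets a resource manager pull
    an execution report for congressional directives without touching
    the rest of the account.
    """
    report = {}
    for rec in allotments:
        if rec["code"].startswith(directive_prefix):
            report[rec["code"]] = {
                "program": rec["program"],
                "unobligated": rec["allotted"] - rec["obligated"],
            }
    return report

status = directive_status(ALLOTMENTS)
for code, info in status.items():
    print(code, info["program"], info["unobligated"])
```

As the surrounding text notes, this kind of per-component tagging shows how much of each directive's funding has been obligated, but by itself it says nothing about whether the directive was actually implemented, which is the gap DOD asked OMB to address with an additional status field.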
The Air Force Research Lab has developed a process for tracking congressional directives. The lab set up separate account codes, called Emergency and Special Program Codes, to identify the funding that has been allocated for each directive. According to Air Force officials, they are considering a similar tracking model for Air Force-wide implementation. the funding status for the list of congressional directives. Officials we interviewed from the six components said that once funding has been distributed to the program offices, they do not follow up to determine whether the directives are implemented. OMB’s Web site for fiscal year 2005 earmarks did not provide a means to include the implementation status of individual earmarks. According to DOD officials, DOD has asked OMB to include another field that would show the implementation or completion status of congressional earmarks in OMB’s database to facilitate tracking in the future. This field will require DOD components to update information on the Web site beyond the OSD Comptroller’s initial posting of data. DOD does not have a routine procedure for reporting to Congress on the progress being made on individual directives. According to DOD officials, components respond to individual congressional inquiries regarding the status of individual directives. In addition, the legislative liaison coordinates and oversees DOD responses to congressional inquiries on congressional directives as they are received. We interviewed DOD officials who had responsibility for budgeting, financial management, and legislative issues related to congressional directives from six components. Some of the officials stated that they had only been in their positions for a short time and therefore could not comment on the trends and impact of directives on their budget and programs. However, others provided views on how congressional directives affect budget and program execution. 
Anecdotally, they offered the following views:
- OSD officials have not maintained data on whether the number of congressional directives has increased or decreased over time; however, two military service officials commented that, in their view, the number of such directives has increased.
- Congressional directives are viewed as tasks to be implemented and as opportunities to enhance the components’ mission requirements through additional funding in areas that would not otherwise have been priorities because of budget constraints.
- Congressional directives can sometimes place restrictions on the ability to retire some programs and to invest in others. Such restrictions affect the budget because they require the components to support an activity that was not in the budget.
- There has always been a feeling that the billions of dollars in congressional directives must come from somewhere, but it is not possible to determine whether any specific directive reduced funding for another program.
- Congressional directives can tend to displace “core” programs, which, according to a DOD official, are programs for which DOD has requested funding in its budget submission.
- Additional time and effort are required to manage the increasing number of congressional directives.
- Program execution of congressional directives is delayed in some cases as efforts are made to identify congressional intent.
- The process for identifying the purposes and objectives of a congressional directive was significantly streamlined in the fiscal year 2008 defense appropriations bill, and it is now easier to determine the source of a directive.

The Department of Energy’s (DOE) mission is to promote energy security and scientific and technological innovation, maintain and secure the nation’s nuclear weapons capability, and ensure the cleanup of the nuclear and hazardous waste from more than 60 years of weapons production.
DOE’s nine program offices focus on accomplishing various aspects of this mission. We reviewed documentation and interviewed officials in the Office of Budget, which is within the Office of the Chief Financial Officer, and four DOE program offices: the National Nuclear Security Administration (NNSA), Office of Science, Office of Energy Efficiency and Renewable Energy (EERE), and Office of Electricity Delivery and Energy Reliability. Since 2005 DOE has generally defined congressional directives, which it refers to as earmarks, as funding designated for projects in an appropriations act or accompanying conference or committee reports that are not requested in the President’s Budget. These congressional directives specify the recipient, the recipient’s location, and the dollar amount of the award and are awarded without competition. DOE officials said that this definition does not include money appropriated over and above the department’s budget request (also known as “plus ups”) or program direction contained in the act or report language because the department can still develop projects and compete them in following this direction. However, before fiscal year 2005 some DOE program offices considered program direction in committee reports, such as language requesting more research in a certain area, to be earmarks. Officials from DOE’s Office of Budget and program offices separately review the appropriations act and accompanying conference and committee reports to identify and categorize congressional directives by program office. These processes are not recorded in written policy but have generally been in place since fiscal year 2005, according to DOE officials. Once the staff of the Office of Budget and each program office develop their lists, they work together to reconcile any differing interpretations of the act and report language to produce a single list. 
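The reconciliation step just described, in which two staffs read the same act and reports and then merge their lists, is essentially a set comparison: items both readings identified go on the single list, and disagreements are flagged for discussion. A minimal sketch, with hypothetical entries:

```python
# Sketch of the list-reconciliation step DOE officials described: the
# Office of Budget and a program office each identify directives, then
# compare their lists and flag disagreements. Entries are hypothetical.

def reconcile(budget_office_list, program_office_list):
    """Split two readings of the appropriations language into
    agreed items and items needing discussion."""
    budget = set(budget_office_list)
    program = set(program_office_list)
    return {
        "agreed": sorted(budget & program),            # both readings found these
        "needs_discussion": sorted(budget ^ program),  # only one reading found these
    }

budget_office = ["Fuel cell research, Univ. A", "Biomass pilot, State B"]
program_office = ["Fuel cell research, Univ. A", "Grid sensor demo, City C"]

result = reconcile(budget_office, program_office)
print(result["agreed"])
print(result["needs_discussion"])
```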
Program office staff make the final determination on whether a particular provision should be considered a congressional directive. During the course of the fiscal year, this list may change as the Office of Budget or a program office learns more about the intent of the appropriations committee responsible for the direction. The process for identifying and categorizing congressional directives has changed somewhat since OMB issued instructions on earmarks in 2007. According to DOE Office of Budget officials, OMB’s January 2007 definition of earmarks differed from DOE’s definition, and applying OMB’s definition somewhat increased the number of earmarks the department reported to OMB for fiscal year 2005. For example, DOE budget officials said that OMB’s definition of earmarks includes money specified for a particular DOE laboratory, while DOE’s definition does not because DOE maintains some level of control over project objectives and outcomes at these laboratories. These budget officials also said that DOE plans to adopt OMB’s definition of earmarks beginning in fiscal year 2008 to make the process of developing a list of earmarks more uniform. research or demonstration projects under the Energy Policy Act of 2005. They also prepare a Determination for Non-Competitive Financial Assistance to explain why the award will not be competed—a document that requires approval by the relevant program Assistant Secretary. Once these paperwork requirements have been met and a financial assistance agreement (grant or cooperative agreement) is awarded, the recipient can begin withdrawing funds from an account set up for the project or submit requests for reimbursements. During the course of the project, the recipient must submit progress reports and a final report to program officials. Contract management staff in each of the four program offices use administrative databases to track each of their projects, including congressional directives.
They use these databases to help manage workload for project officers and to keep track of documentation sent to and received from recipients. Specifically, EERE tracks each of its congressional directives through an Internet-based database. The other three DOE program offices maintain separate, less formal spreadsheets on the congressional directives for their specific programs. These spreadsheets contain background information, such as the project’s purpose, dollar amount, and recipient. These spreadsheets are not part of a larger DOE tracking system. In addition, the program offices do not prepare regular reports on congressional directives and generally only follow up on the status of a particular congressional directive if they receive an inquiry from the appropriations committee. DOE Budget Office officials told us that the departmentwide accounting system, the Standard Accounting and Reporting System, cannot generate reports specifically on congressional directives for the department. This is because DOE’s program offices differ in the way they assign accounting codes to congressional directives. For example, while EERE assigns an individual accounting code to each directive, NNSA generally does not. Under the fiscal year 2007 continuing resolution, DOE required any recipient of congressional directives in prior years that sought continued funding in fiscal year 2007 to submit an application for a formal merit review by the department because (1) the resolution directed all federal departments (including DOE) to disregard fiscal year 2006 congressional directives, cutting off funding for any multiyear directives from previous years, and (2) no committee reports, which are the primary source of the department’s congressional directives, accompanied the continuing resolution. As a result of this policy, program officials from the Office of Science told us that they received few applications for continued funding in fiscal year 2007. The department funded substantially fewer congressional directives compared with previous years.
DOE officials stated that through fiscal year 2006 the number of congressional directives had increased, and that this growth limited the ability of certain program offices to develop and implement their strategic goals. DOE officials said that the number of congressional directives began a steady rise in the late 1990s that continued through fiscal year 2006. As noted earlier, they said that because of the continuing resolution there were far fewer projects in fiscal year 2007 that were associated with congressional directives. In terms of the types of congressional directives awarded since the late 1990s, DOE officials from two program offices said that there were “hot topics” that garnered attention at certain times. For example, an official from EERE—which had the highest dollar value of congressional directives among DOE program offices—told us that there were directives in recent years to fund fuel cell research at specific facilities. DOE program officials reported that implementing congressional directives imposed a high administrative burden. For example, many officials reported that it takes longer to process and award congressional directives because DOE personnel need to educate some recipients on DOE’s processes, such as how to submit an application and comply with DOE’s reporting requirements and the applicability of cost-sharing requirements. To help address this issue, EERE invites all recipients of congressional directives to a presentation at DOE headquarters for an overview of the process. EERE and the Office of Electricity Delivery and Energy Reliability said that they were not appropriated additional dollars to fund congressional directives. These program officials told us that their ability to accomplish their strategic goals has been limited because congressional directives make up a large percentage of their budget and it is often difficult to align the outcomes of congressional directives with these goals. 
The Department of Transportation (DOT) implements and administers most federal transportation policies through its 10 operating administrations. These operating administrations are generally organized by mode and include highways and transit. The operating administrations are responsible for independently managing their programs and budgets to carry out their goals as well as those of the department. As such, DOT has delegated the responsibility for identifying, categorizing, tracking, and reporting on congressional directives to its operating administrations. The Federal Highway Administration (FHWA) is responsible for the highway program, and the Federal Transit Administration (FTA) is responsible for the transit program. While FHWA and FTA carry out some activities directly, they, like many other DOT operating administrations, do not have direct control over the vast majority of the activities they fund through grants, such as constructing transportation projects. The recipients of transportation funds, such as state departments of transportation, are responsible for implementing most transportation programs and congressional directives. The federal highway and transit programs are typically funded through multiyear authorization acts, such as the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) and its predecessor, the Transportation Equity Act for the 21st Century (TEA-21). These authorization acts, which are subject to the annual appropriations process, set the upper funding limit for the federal highway and transit programs. Both the authorization and appropriations acts contain congressional directives for the federal highway and transit programs. See figure 2 for additional information on the mission and organizational structure of FHWA and FTA. 
Section 4: Department of Transportation: Highways and Transit

FHWA provides financial and technical support to states and localities for constructing, improving, and preserving the national highway system through its headquarters office and 52 federal-aid division offices (division offices). Division offices are located in every state, as well as the District of Columbia and Puerto Rico, and provide front-line delivery assistance in the areas of highway transportation and safety services. FTA supports locally planned and operated mass transit systems throughout the United States through its headquarters office and 10 regional offices. The regional offices work with local transit officials in developing and processing grant allocations, providing technical assistance, and monitoring projects. DOT’s definition of congressional directives, which it refers to as earmarks, has remained generally consistent over a number of years and mirrors OMB’s definition. Although DOT has not issued guidance on the definition of earmarks to its operating administrations, DOT officials said that they expect the operating administrations to follow OMB’s definition. Although FHWA’s and FTA’s definitions are generally consistent with OMB’s definition, there are a few differences, specifically: FHWA defines earmarks as designations that specify a recipient, purpose, and total funding amount. FHWA officials told us that they consider their definition narrower in scope than OMB’s definition because OMB does not require an earmark to contain all three elements (i.e., recipient, purpose, and total funding amount). FHWA distinguishes between statutory designations that are binding on the agency and nonstatutory designations identified in congressional reports that are not binding on the agency. FHWA officials did not change their definition of earmarks after the release of OMB’s guidance in 2007.
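FHWA's narrower definition can be expressed as a simple completeness check: a designation counts as an earmark only if all three elements are present, whereas OMB's broader definition does not require all three. A hypothetical sketch; the field names are illustrative, not FHWA's actual data model:

```python
# Illustrative sketch of FHWA's three-element earmark test described
# above. Designations and field names are hypothetical examples.
def is_fhwa_earmark(designation):
    """True only if recipient, purpose, and total funding are all specified."""
    required = ("recipient", "purpose", "total_funding")
    return all(designation.get(field) is not None for field in required)

bridge = {"recipient": "State DOT A", "purpose": "bridge rehabilitation",
          "total_funding": 10_000_000}
vague = {"recipient": "State DOT B", "purpose": None,  # no stated purpose
         "total_funding": 5_000_000}

print(is_fhwa_earmark(bridge))  # all three elements present
print(is_fhwa_earmark(vague))   # fails FHWA's three-element test
```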
FHWA officials told us that they honored fiscal year 2007 statutory designations and handled nonstatutory designations in accordance with the OMB guidance. FTA officials told us that FTA’s definition would capture New Starts projects, which are typically designated in both the President’s Budget and legislation. OMB’s definition would not capture New Starts projects if the projects and funding levels designated by Congress match the projects and funding levels designated in the President’s Budget. FTA officials did not change their definition of earmarks after the release of OMB’s guidance in 2007. DOT has delegated the responsibility for identifying and categorizing congressional directives to its operating administrations. FHWA has further delegated the responsibility for identifying and categorizing congressional directives to its program offices. For example, the Office of Infrastructure is responsible for identifying congressional directives in the High Priority Projects program—which falls under this office’s purview. When identifying congressional directives, FHWA categorizes them as statutory, nonstatutory, or hybrid. apportionments and allocations. Both FHWA and FTA officials told us that they comply with nonstatutory congressional directives that meet eligibility requirements to the extent possible—although they are not required to do so. FHWA uses an electronic system to track congressional directives. FHWA’s Office of the Chief Financial Officer and program offices collaborate to track most congressional directives. Staff in FHWA’s Office of the Chief Financial Officer enter projects into the tracking system after receiving requests from program offices for project identification numbers. Once congressional directives are entered into the system, they are not tracked separately from other projects, such as those funded by formula. The program offices then send memorandums to FHWA division offices to notify them of the total amount of funds available for each project.
Officials from FHWA division offices and state departments of transportation with whom we spoke have access to FHWA’s system and also maintain their own tracking systems, both to improve their staff’s and the public’s access to data and to corroborate data in the federal tracking system. was added to the system in 2006, in part, to track what they described as the growing number of congressional directives. FHWA and FTA do not typically implement congressionally directed projects. Rather, they provide funds through grants, and state and local agencies generally implement the highway and transit congressional directives in carrying out their programs. Specifically, FHWA division offices and FTA regional offices administer and obligate funds for projects, including congressionally directed projects, to grant recipients and respond to questions from recipients on issues related to eligibility and transferability, among other things. In turn, the grant recipients implement congressional directives. Figure 3 illustrates the processes used by FHWA and FTA to identify, track, and implement congressional directives. DOT also responds to “clarification letters” that are periodically sent to DOT from congressional committees. These letters are jointly signed by the House and Senate appropriations subcommittees and provide clarification on how Congress would like to see directed funds used. DOT provides the responsible operating administrations, such as FHWA or FTA, with these letters and coordinates responses on whether the operating administration can comply with the request. In addition to responding to specific requests from congressional committees, DOT also communicates some general funding information on congressional directives to Congress.
For example, as required by law, DOT notifies the relevant House and Senate Committees prior to announcing a discretionary grant, letter of intent, or full funding grant agreement totaling $1 million or more. In addition, FTA reports to Congress at the end of each fiscal year on all projects with unobligated funds that have reached the end of their availability period. FHWA officials, as well as officials from the state departments of transportation with whom we spoke, stated that the number and value of directives, notably high-priority projects, increased substantially from TEA-21 (1998 to 2003) to SAFETEA-LU (2005 to 2009). FHWA officials provided documentation showing that the number of High Priority Projects listed in SAFETEA-LU was almost triple the number listed in TEA-21. FTA officials also stated that the number and value of authorization and appropriations directives in transit programs increased between TEA-21 and SAFETEA-LU. FHWA officials further noted that congressional directives can be inconsistent with states’ transportation priorities, particularly if the congressional directives are for projects outside of their statewide transportation programs. Officials from one state department of transportation noted that although many congressional directives in SAFETEA-LU were requested by the state, about one-third of the congressional directives did not have statewide benefits or serve an eligible highway purpose. A senior FTA official also noted that congressional directives may result in the displacement of projects that FTA views as being a higher priority and ready for implementation with projects that are a lower priority for FTA. For example, some New Starts congressional directives provide funding for projects that are not yet ready for implementation, delaying the implementation of FTA’s higher-priority projects that are scheduled to receive federal appropriations.
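The end-of-fiscal-year report on unobligated funds mentioned above can be pictured as a filter over grant records. A hypothetical sketch, assuming a 3-year availability period ending on September 30; the project names and figures are invented:

```python
# Hypothetical sketch of an end-of-fiscal-year check for projects whose
# funds remain unobligated past their availability period. The 3-year
# window and all project data are illustrative assumptions.
from datetime import date

AVAILABILITY_YEARS = 3  # assumed availability period for this sketch

def lapsing_projects(projects, fiscal_year_end):
    """Return (name, unobligated) for projects past their availability period."""
    lapsed = []
    for name, appropriated_fy, unobligated in projects:
        expires = date(appropriated_fy + AVAILABILITY_YEARS, 9, 30)
        if unobligated > 0 and fiscal_year_end >= expires:
            lapsed.append((name, unobligated))
    return lapsed

projects = [
    ("Bus facility, City A", 2004, 250_000),   # FY2004 money, still unobligated
    ("Rail study, City B", 2006, 0),           # fully obligated
    ("Parking garage, City C", 2005, 400_000), # window still open in FY2007
]

print(lapsing_projects(projects, date(2007, 9, 30)))
```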
FTA officials said that roughly 85 to 90 percent of the congressional directives received in the New Starts program are for projects that FTA has recommended for funding in its budget. One FTA official also acknowledged that some congressional directives provide funding for projects that FTA has identified as priorities in its research program and were included in the President’s Budget, although the majority of directives were not requested and displaced research activities FTA identified as being of higher priority. funding in a merit-based selection process. FTA officials also told us that congressional directives sometimes provide funding for projects that would otherwise be considered ineligible, such as directives to construct parking garages with transit funding. Officials from FHWA division offices and FTA noted that in some cases, the language of congressional directives makes it difficult to implement projects. For example, an official from one FHWA division office noted that some congressional directives for the state contained language that was either too specific and was therefore inconsistent with the purposes and objectives of the local sponsor or contained language that made the project ineligible because it did not meet certain federal regulations. According to agency officials, in these cases, a technical corrections bill must be passed before the projects can be implemented, delaying implementation of the projects. Officials we spoke with from three state departments of transportation also noted that inflexibilities in the use of congressionally directed funds limit the states’ ability to implement projects and efficiently use transportation funds by, for example, providing funding for projects that are not yet ready for implementation or providing insufficient funds to complete particular projects. 
An official from one state department of transportation noted that although congressional directives can create administrative challenges, they often represent funding that the state may not have otherwise received. FHWA and FTA officials noted that the growth in the number of congressional directives has increased the time and staff resources needed to identify and track projects. For example, FHWA officials noted that relative to their proportion of the budget, they devote a higher percentage of time to administering congressional directives than other projects. Similarly, officials from FHWA division offices stated that they spend a substantial amount of time working with the state to determine whether projects meet federal eligibility requirements, respond to questions of transferability, and provide assistance to the state for projects that were not included in their state transportation plan. FTA officials noted that some recipients of a congressional directive are unaware of the directive and may decide to use the grant for another purpose, making it difficult to obligate funds within the 3-year availability period. Through its Civil Works programs, the U.S. Army Corps of Engineers (Corps) investigates, develops, and maintains water and related environmental resources throughout the country to meet the agency’s navigation, flood control, and ecosystem restoration missions. Headquartered in Washington, D.C., the Corps has eight regional divisions and 38 districts that carry out its domestic civil works responsibilities. Figure 4 shows the Corps’ divisions and districts. The Corps has identified congressional directives for many years for project implementation purposes. The Corps has used the term “adds” to identify some congressionally directed projects.
According to Corps budget officials, congressional directives are defined by the agency as any of the following changes to requests made in the President’s Budget: (1) an increase or decrease in funding levels for a budgeted project, (2) the funding of a project that was not included in the President’s Budget, and (3) any project that has language in a committee or conference report or in statute that restricts or directs the Corps on how to spend funds. Corps officials told us that this definition is consistent with the definition of earmarks in OMB’s 2007 guidance, except that an earmark is a restriction or specification on the use of funds, while a congressional directive can be simply an increase or decrease in funding for a budgeted project. For project implementation purposes, the Corps has continued to identify congressional directives in the same manner as it did before OMB issued its guidance. To respond to OMB’s request for data on fiscal year 2005 earmarks, however, Corps officials told us that a separate effort was needed because (1) OMB required information that was not available from the Corps’ normal process for identifying congressional directives and (2) the Corps had only a short time to respond to the request. The program manager responsible for responding to OMB identified the fiscal year 2005 earmarks using appropriations bills and conference reports. To complete the OMB request, the program manager supplemented this information with some project-level details, such as the name of the nonfederal sponsor, which the manager obtained from the relevant districts, according to Corps officials. These officials also said that the results of the program manager’s work were reviewed by Corps managers before the information was submitted to OMB. The Corps identifies all congressional directives included in appropriations statutes, bills, and related conference reports each year and routinely makes this information available to its headquarters and division and district staff, according to Corps officials.
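The Corps' definition reduces to a comparison of enacted funding against the President's Budget, plus a check for restrictive statute or report language. A hypothetical sketch of that test, with invented amounts:

```python
# Illustrative test for the Corps' definition of a congressional
# directive described above: a funding change relative to the
# President's Budget, an unbudgeted project, or restrictive language.
def classify(requested, enacted, restrictive_language=False):
    """requested is None when the project was not in the President's Budget."""
    if requested is None:
        return "directive: project not in President's Budget"
    if restrictive_language:
        return "directive: statute or report language directs use of funds"
    if enacted != requested:
        return "directive: funding increased or decreased"
    return "not a directive"

print(classify(requested=3_000_000, enacted=3_000_000))  # matches the request
print(classify(requested=3_000_000, enacted=4_500_000))  # funding changed
print(classify(requested=None, enacted=2_000_000))       # project added
```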
With the assistance of the district offices, officials in each of the Corps’ divisions develop spreadsheets identifying the congressional directives in their region by examining the language in appropriations committee reports, the conference report, and the appropriations statute and comparing this language to the President’s Budget. According to Corps budget officials, most congressional directives receive no special attention because they are generally categorized as being in compliance with the Administration’s budget policy and the Corps’ policy (i.e., increased funding provided to projects included in the President’s Budget). not provide the nonfederal sponsor with credit for work completed before the nonfederal sponsor enters into an agreement with the Corps. For the congressional directives that require additional discussion on how the Corps will implement the projects, the divisions prepare fact sheets. Table 1 shows the various types of information provided with each fact sheet. Each division submits all prepared fact sheets with the recommended implementation plans to Corps headquarters and the Office of the Assistant Secretary of the Army for Civil Works for their review. Each division then has a teleconference with these headquarters officials to discuss and approve the plans. Most implementation plans are completed at this stage. For the fact sheets with unresolved issues, each division holds a videoconference with officials from headquarters and the Assistant Secretary’s office. Attendees for each videoconference include senior executives from the Corps and the Office of the Deputy Assistant Secretary of the Army for Management and Budget. After this videoconference, each division incorporates changes to its implementation plan and resubmits it for final approval by headquarters and the Assistant Secretary. Corps headquarters releases the associated funding for all projects to the districts immediately after the agency receives its appropriation.
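The fact-sheet review flow described above is a two-track sequence: routine plans are approved after the teleconference, while those with unresolved issues continue through a videoconference, revision, and resubmission. A hypothetical sketch of that sequence:

```python
# Sketch of the Corps' fact-sheet review flow as described above.
# The step labels summarize the narrative; this is not a Corps system.
def review_fact_sheet(has_unresolved_issues):
    """Return the review steps a fact sheet passes through."""
    steps = ["division prepares fact sheet",
             "submitted to HQ and Assistant Secretary",
             "teleconference review"]
    if not has_unresolved_issues:
        steps.append("implementation plan approved")
        return steps
    steps += ["videoconference with senior executives",
              "division revises implementation plan",
              "resubmitted for final approval"]
    return steps

print(review_fact_sheet(False)[-1])  # most plans end here
print(review_fact_sheet(True)[-1])   # unresolved plans take the longer track
```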
Corps officials said that while the implementation plans are being discussed for projects with unresolved issues, the districts may obligate funds for certain activities that do not conflict with Administration budget policy or Corps policy. Once the implementation plans are completed, the districts will continue to execute remaining aspects of the plans. However, according to a Corps official, there are a few instances in which the Corps does not execute the project. These instances may occur, for example, when (1) funds are appropriated for the project, although funds had not previously been authorized; (2) the project was authorized, but the authorized spending limit had already been reached; or (3) the Corps was directed to continue a feasibility study, but the agency found that the least costly alternative was to relocate the affected facilities and the local sponsor was not interested in continuing the study. In such situations, the districts are generally responsible for informing individual Members of Congress about the decisions affecting their respective jurisdictions, and Corps headquarters notifies the relevant congressional committees. According to Corps officials, the Corps does not have a separate approach for tracking, implementing, and reporting on projects generated from congressional directives. Instead, all projects are managed in the same manner for tracking, implementation, and reporting purposes. The procedures are detailed in a manual that establishes the Corps’ project management practices. For example, all Corps projects require a written project management plan that details how the project will be accomplished. A Corps official stated that the process does not include a distinct method for reporting on the status of directives to Congress or any of its committees or members. 
The Corps does not analyze trends in congressional directives, and there was no consensus among the officials we spoke with on trends in the number of these directives. While some Corps officials told us that they believe the overall number of congressional directives has remained at about the same level for the last decade, another Corps official told us that he believes the number of congressional directives has increased throughout the decade. This official stated that in recent years Congress has added a number of projects that the Corps labels as “environmental infrastructure projects” that are outside the scope of the Corps’ historic missions. Those projects included building sewage treatment plants and water supply facilities, revitalizing local waterfronts, and maintaining waterways primarily for local recreation. The Chief of the Programs Integration Division, who is responsible for the Civil Works budget, estimated that these types of congressional directives are a small portion of the Corps’ Civil Works program budget. If the Corps categorizes a congressional directive as being inconsistent with the Administration or Corps policy, the Corps will not budget for the project in subsequent fiscal years. Officials said that they believe this could potentially increase the Corps’ backlog of incomplete projects. Congressional directives are more difficult to plan and schedule for execution in advance compared with projects included in the President’s Budget. Officials said that this is because it is more difficult to develop an accurate project timeline because of the greater uncertainty about future funding levels for these projects. Congressional directives may make it more difficult for the Corps to predict and manage full-time equivalent (FTE) levels and allocations from year to year. Even though congressional directives increase the Corps’ budget authority, the Corps generally establishes FTE levels using the President’s Budget much earlier in the year. 
Because the number and regional focus of congressional directives can change from year to year, the Corps faces some uncertainty about whether it will have adequate staff in the right locations to manage the project workload of each district in response to the changing nature of the congressional directives. Our objectives were to identify for selected agencies (1) the process for identifying and categorizing congressional directives; (2) the process for tracking, implementing, and reporting on congressional directives; and (3) agency officials’ views on the trends and impact of congressional directives. The selected agencies were the Department of Defense (DOD), the Department of Energy (DOE), the Department of Transportation (DOT), and the U.S. Army Corps of Engineers’ Civil Works programs (Corps). These agencies cover a range of characteristics concerning congressional directives, including the number of congressional directives. DOD received the largest number of reported congressional directives and made up 55 percent of discretionary appropriations for fiscal year 2006. We focused our review on the relationship between the Office of the Secretary of Defense’s Comptroller and the components (i.e., military services, defense agencies, and combatant commands) and how the components internally process and account for congressional directives. Specifically, we focused on the Army, Navy, Marine Corps, and Air Force; the Defense Information Systems Agency and the Defense Threat Reduction Agency; and the U.S. Special Operations Command. DOE generally receives congressional directives in reports that accompany annual appropriations acts. Congressional directives are spread across DOE’s programs, with some programs reporting that congressional directives make up a large portion of their budgets. 
We focused our review on the following program offices that oversee the majority of DOE’s congressional directives: the National Nuclear Security Administration (NNSA), the Office of Energy Efficiency and Renewable Energy (EERE), the Office of Electricity Delivery and Energy Reliability, and the Office of Science. DOT receives congressional directives contained in multiyear transportation authorization acts. We focused our review on the surface transportation programs administered by the Federal Highway Administration (FHWA) and the Federal Transit Administration (FTA) because of the level of funding authorized in the current surface transportation authorizing legislation, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), and the number of congressional directives contained in this legislation for these programs. The Corps’ Civil Works programs maintain a wide range of water resources projects, including flood protection, navigation, and other water-related infrastructure. Under some definitions of directives, the Corps’ appropriations could be characterized as consisting largely of directives. We assessed the reliability of the agencies’ data on congressional directives tracking by speaking with knowledgeable officials, using a common set of questions about their past and current definitions of congressional directives for purposes of identifying and tracking such directives. We learned that the definitions—both across and, sometimes, within agencies—were not consistent. Therefore, the data cannot be used for making comparisons across agencies or showing trends over time, nor can the data from different agencies be aggregated. This review provides information on the processes described to us by officials at the selected agencies. The information provided is not generalizable beyond the four agencies. 
In addition, we did not evaluate the agencies’ processes for compliance with the Office of Management and Budget’s (OMB) guidance on earmarks, memorandum M-07-09. To identify the selected agencies’ processes for identifying and categorizing congressional directives, we first had to determine how they identified directives (i.e., how they defined them) as well as whether the definition changed after the January 25, 2007, issuance of the OMB guidance. We determined the extent to which the agencies had established processes for identifying and categorizing congressional directives (e.g., by organization, program, location, statute or report, type of directive, or type of impact). To do so, we reviewed the selected agencies’ policies and guidance for identifying and categorizing congressional directives, including the source of these directives before fiscal year 2007 (e.g., statute or conference report). We also interviewed knowledgeable agency officials in budget, program, and congressional affairs offices. At DOD, we interviewed officials from the Office of the Secretary of Defense Comptroller’s office and budget officials from components to obtain information on how congressional directives are implemented as well as to obtain their views on the impact of congressional directives on their budget and program execution. We also interviewed officials responsible for legislative affairs who support budget officials in determining the congressional intent of directives. At DOE, we spoke with officials from NNSA, the Office of Science, EERE, the Office of Electricity Delivery and Energy Reliability, and the Office of Budget in the Office of the Chief Financial Officer. We also spoke with officials from some of the site offices that help the program offices implement and track congressional directives. At DOT, we spoke with officials from the Office of the Secretary, FHWA, and FTA. 
Because implementation is handled at the division and state levels, we also interviewed officials from FHWA division offices and state departments of transportation in Alaska, Florida, and Maine. We selected the division offices and states to interview based on the number of congressional directives in SAFETEA-LU as well as the level of oversight and involvement of those division offices and states in the administration of congressional directives. At the Corps, we spoke with the Chief of the Programs Integration Division, who is responsible for the Civil Works budget, and other officials responsible for identifying earmarks for OMB and congressional directives for the Corps’ routine management process. To identify the selected agencies’ processes for tracking, implementing, and reporting on congressional directives, we reviewed agency documents related to available data or databases used for tracking and reporting on congressional directives. We also reviewed agency guidance or written protocols to demonstrate actions taken to implement congressional directives. In addition, we interviewed the relevant agency officials from the units of the selected agencies we previously discussed. To obtain their views on the trends and impact of congressional directives on agency programs, we spoke with knowledgeable agency officials from the selected agencies using similar questions. Because we assessed agencies’ data on congressional directives to be insufficiently reliable for the purposes of comparing across agencies and showing trends over time, we could not analyze trend data.
In recent years, congressional concern and public debate have increased about the nature and growing number of earmarks. This report seeks to provide Congress and the public with an understanding of how selected executive branch agencies translate funding directions from Congress into governmental activities. There have been numerous calls in and out of Congress for earmark reform in response to these concerns. Both Houses of Congress have taken steps to increase disclosure requirements. The President has also called for earmark reform. In January 2007, the Office of Management and Budget (OMB) directed agencies to collect and submit data to it on fiscal year 2005 earmarks in appropriations bills and certain authorization bills. GAO collected and analyzed information on four agencies' processes (i.e., the Department of Defense, Department of Energy, Department of Transportation, and U.S. Army Corps of Engineers' Civil Works programs). Our objectives were to identify, for these agencies, (1) their processes for identifying and categorizing congressional directives; (2) their processes for tracking, implementing, and reporting on congressional directives; and (3) agency officials' views on the trends and impact of congressional directives. Congress or its committees may use formal vehicles to provide written funding instructions for agencies or to express preferences to agencies on the use of funding. These formal vehicles include statutes (i.e., authorization or appropriations acts) or House, Senate, and conference reports comprising significant parts of the legislative history for a given statute. Often referred to as "earmarks," these written instructions range from broad directions on policy priorities to specific instructions. The U.S. 
Constitution gives Congress the power to levy taxes, to finance government operations through appropriations, and to prescribe the conditions governing the use of those appropriations. This power is referred to generally as the congressional "power of the purse" and derives from various provisions of the Constitution. Government agencies may not draw money out of the Treasury to fund operations unless Congress has appropriated the money. At its most basic level, this means that it is up to Congress to decide whether to provide funds for a particular program or activity and to fix the level of that funding. It is also well established that Congress can, within constitutional limits, determine the terms and conditions under which an appropriation may be used. In this manner, Congress may use its appropriation power to accomplish policy objectives and to establish priorities among federal programs. Our review of four federal agencies' processes for responding to written directives from Congress regarding the use of funds found that each of the selected agencies responds to congressional directives in a manner consistent with the nature of its programs and operations and in response to the desires of its own authorizing and appropriations committees in Congress. Agencies differ in terms of the specific processes followed to respond to congressional directives, and they have also adopted their own approaches for responding to the 2007 request for data on earmarks from OMB. OMB's guidance to agencies excludes from its definition of earmarks funds requested in the President's Budget. With a few exceptions, officials representing the selected agencies generally expressed the view that the number of congressional directives had increased over time. Agency officials provided a range of views on the impact of congressional directives on budget and program execution. 
Some agency officials said that congressional directives had a limited impact on their mission requirements or ability to accomplish their goals. Other agency officials reported that implementation of these directives can displace agencies' program priorities as the agencies redirect resources to comply with them. Some told us that congressional directives provided money for projects they wanted but had been unable to get funded through budget requests. Agency officials also reported that directives can add uncertainty because congressional priorities are identified months after the agencies plan for items in the President's Budget.
As economic activities become more integrated and globalized, foreign trade has become increasingly important to the U.S. economy. According to DOT, recent projections indicate foreign trade may reach 35 percent of the U.S. gross domestic product (GDP) in 2020 and potentially grow to 60 percent of GDP by 2030. As the types of goods exported from and imported to the United States vary greatly, the specific type of cargo can determine the mode of shipment. For example, cargo such as grains, coal, ore, and cement typically ship by dry bulk carrier, oil and gas by tanker, while other commodities such as apparel and appliances ship via containership. According to Corps data, U.S. ports handled a total of 2.3 billion tons of commodities in 2010. Most types of cargo (including agricultural goods such as grains) are increasingly being moved by containership—ships that carry cargo in containers measured in twenty-foot equivalent units (TEU). In 2009, U.S. ports handled $474 billion in containerized imports and $177 billion in containerized exports. In addition, shippers are increasingly using larger ships to gain transportation efficiencies and cost savings in a competitive market. For example, in 2000, the average containership carried 2,900 TEUs; in 2012, the average containership carried 6,100 TEUs. According to DOT, the number of port calls to the United States by very large post-Panamax containerships carrying 5,200 TEUs or greater increased 156 percent (from 1,700 to 4,400 port calls) from 2004 to 2009. These vessels are expected to represent 62 percent of total containership capacity in the world by 2030. Consequently, continued trade growth in coming years, as well as the increasing size of containerships calling on U.S. ports, will place even greater demands on the nation’s MTS and necessitate some changes to MTS infrastructure, such as deepening channels to accommodate these larger vessels. 
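The containership growth figures above reduce to simple percent-change arithmetic. As a minimal sketch (the function name is ours, and the inputs are the rounded figures cited in the report):

```python
def percent_change(old: float, new: float) -> float:
    """Signed percent change from an old value to a new one."""
    return (new - old) / old * 100

# Average containership capacity: 2,900 TEUs (2000) to 6,100 TEUs (2012),
# roughly a 110 percent increase.
capacity_growth = percent_change(2900, 6100)

# Post-Panamax port calls: 1,700 (2004) to 4,400 (2009); the report cites
# a 156 percent increase, presumably computed from unrounded counts.
port_call_growth = percent_change(1700, 4400)
```

The small gap between the 156 percent figure in the report and the result from the rounded endpoints illustrates why such statistics are best recomputed from source data.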
TEU, or twenty-foot equivalent unit, can be used to measure a ship’s cargo carrying capacity. The dimensions of one TEU are equal to those of a standard 20-foot shipping container (20 feet long, 8 feet tall). The MTS is integral to the efficient movement of the nation’s freight. It provides a cost-effective means of moving bulk, breakbulk, and containerized cargo to U.S. consumers and to foreign markets through a variety of transportation modes. The MTS includes three primary segments: navigable waterways, ports, and port connectors. There are 25,000 miles of commercially navigable harbors, channels, and waterways, 4 million miles of public highways and roads, and over 140,000 miles of national, regional, and local railroad networks in the United States over which trillions of dollars’ worth of freight move annually. Figure 1 below illustrates these three MTS segments. Navigable Waterways: Navigable waterways include harbors, shipping channels (including both deep and shallow draft), rivers, lakes, and inland waterways, as well as locks, dams, and other navigation structures such as jetties. They provide safe passage for a wide range of shipping vessels, including containerships, tankers, bulk carriers, and other vessel types such as inland and oceangoing barges. Inland waterways carry approximately one-sixth of the national volume of intercity cargo on 12,000 miles of commercially active inland and intra-coastal waterways. There are also 13,000 miles of coastal deep and shallow draft harbors and channels that are operated and maintained for commerce. These deep draft harbors and channels provide access to 70 ports, including about 40 ports that have channel depths of 40 feet or more and handle 10 million or more tons of cargo per year. The Water Resources Development Act of 1986 (Pub. L. No. 99-662, § 102, 100 Stat. 4082, 4084 (1986), codified at 33 U.S.C. § 2212) established the cost-sharing ratios. 
Fifty percent of the cost of construction is to be paid from amounts appropriated from the General Fund of the Treasury, and the other fifty percent is from a fuel tax paid by commercial inland waterway users that is made available through appropriations from the Inland Waterways Trust Fund. Some waterborne vessels are exempt from the fuel tax, including certain oceangoing ships, passenger boats, recreational craft, and government vessels. General Fund appropriations also initially pay for the operation and maintenance of coastal harbors and channels and are then reimbursed from revenues from the Harbor Maintenance Trust Fund, which comes largely from an excise tax on imports imposed on commercial users at certain ports. The tax applies a second time to cargo that has already arrived at a U.S. port but is transferred by barge to another U.S. port. Importers or shippers pay an amount equal to 0.125 percent of the value of the commercial cargo involved at the time of unloading. The Harbor Maintenance Trust Fund balance totaled $6.42 billion at the end of fiscal year 2011. Non-federal sponsors are responsible for a small percentage of operation and maintenance costs for harbors and channels that are deeper than 45 feet. Ports: All ports serve as gateways for the movement of goods between navigable waterways and landside transportation systems, such as the Interstate highway system or the national rail network. For the purposes of this report, we refer to ports as the area “inside the gate” and under the control of the local port authority or private terminal operator, where cargo is loaded and unloaded to and from ships. Ports may be publicly or privately owned and operated, and consist of thousands of large, medium, and small terminals and intermodal facilities in approximately 360 commercial sea and river ports. However, most of the United States’ containerized cargo is handled by a few major ports. For example, in 2009 U.S. ports handled over 206 billion tons of containerized imports and exports, and the top 10 U.S. 
container ports accounted for 85 percent of the total trade, according to DOT. Port Connectors: Efficient freight movement depends upon the condition of intermodal connections. Port connectors include transportation infrastructure such as roads, railways, and marine highways that connect the port to major trade corridors and allow freight to transfer from one transportation mode to another (e.g., from a ship to a truck). The Alameda Corridor, a 20-mile, $2.4 billion railroad express line linking the ports of Los Angeles and Long Beach to the transcontinental rail network east of downtown Los Angeles, provides one example of a major port connector that facilitates the movement of containerized freight to the East Coast as well as the Midwest. The federal government’s expenditures for surface transportation infrastructure, including port connectors, are based, in part, on the user-pay principle. The government collects taxes and fees, which flow into the Highway Trust Fund—historically the principal mechanism for funding federal highway programs. The Highway Trust Fund generally provides for the construction, reconstruction, restoration, and rehabilitation of roads that serve both freight and non-freight users. State and local governments also invest in public highways and roads. Within the federal-aid highway program, the federal government generally is responsible for funding 80 to 100 percent of highway project costs, while state and local governments are responsible for the remainder. State and local governments collect revenue from a combination of fuel taxes, vehicle taxes and fees, and tolls, and they supplement these user fees with general revenues to support highway and road activities. Federal funding for highways is provided to the states primarily through a series of formula grants collectively known as the federal-aid highway program. 
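Two of the funding mechanisms described above are flat-rate formulas: the Harbor Maintenance Tax of 0.125 percent of cargo value at unloading, and the federal-aid highway share of 80 to 100 percent of project costs. A hedged sketch of the arithmetic (function names are illustrative; neither agency publishes these rules as code, and actual matching shares vary by program):

```python
HMT_RATE = 0.00125  # Harbor Maintenance Tax: 0.125% of cargo value at unloading


def harbor_maintenance_tax(cargo_value: float) -> float:
    """Amount an importer or shipper owes on commercial cargo, per the report."""
    return cargo_value * HMT_RATE


def highway_cost_split(project_cost: float, federal_share: float = 0.80):
    """Split a federal-aid highway project cost between federal and
    state/local payers; the report puts the federal share at 80-100 percent."""
    if not 0.80 <= federal_share <= 1.00:
        raise ValueError("federal share outside the 80-100 percent range")
    federal = project_cost * federal_share
    return federal, project_cost - federal
```

For example, a $10 million shipment owes $12,500 in Harbor Maintenance Tax, and an 80/20 split of a $1 million road project puts $800,000 on the federal side.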
Freight mobility is dependent on MTS infrastructure, and we have published a number of reports addressing surface transportation issues that identify a variety of challenges to freight mobility in the United States. We have highlighted challenges such as facilitating the efficient movement of freight amid growing demand for freight transportation; adding capacity to accommodate that increased demand; limited investment by federal, state, and local governments in freight projects; and including freight projects in the state and local transportation planning process. We have also reported on the numerous federal goals for surface transportation and the lack of clarity in federal stakeholder roles. For example, DOT operating administrations with roles in freight transportation include the Federal Highway Administration (FHWA), Federal Railroad Administration (FRA), Federal Motor Carrier Safety Administration, and Maritime Administration (MARAD). An office of freight management and operations within FHWA administers programs, develops policies, and undertakes research that promotes freight movement across the nation and its borders, but the office does not coordinate federal actions, such as federal funding, related to freight mobility. We have previously reported that although there is a clear federal interest in freight transportation, there has not been a strategy that clearly defines the federal role or a mechanism to implement a national freight strategy. In the past, we have recommended or proposed for congressional consideration a number of actions to address this issue. On July 6, 2012, MAP-21 was enacted into law and authorized funding for 2 years for core federal-aid highway and transit programs. This legislation establishes a framework for a national freight policy and directs DOT to develop a national freight network and a National Freight Strategic Plan. 
It encourages states to develop freight plans that describe the procedures states will use to make investment decisions involving freight transportation. It also authorized increasing the federal cost share of freight-related projects to 95 percent on Interstate highways and to 90 percent on other roads if the Secretary of Transportation certifies that the projects meet specified requirements. On July 19, 2012, the President announced the establishment of a White House-led task force to develop a federal strategy to inform future investment decisions and identify opportunities for improved coordination and streamlined review of investments in coastal port infrastructure. The task force is composed of senior officials from five departments and five White House offices and plans to build on steps already taken to coordinate across agencies with port-related responsibilities. A Presidential Directive in the U.S. Ocean Action Plan, issued in 2004, elevated the existing Interagency Committee on the Marine Transportation System to a Cabinet-level body and created the Committee on the Marine Transportation System (CMTS). The CMTS adopted a charter in 2005 creating a partnership of federal agencies with responsibility for the MTS to ensure the development and implementation of national MTS policies consistent with national needs and to report to the President its views and recommendations for improving the MTS. The CMTS is a federal cabinet-level, interagency organization chaired by DOT and supported by a sub-cabinet policy advisory body, the Coordinating Board; a dedicated staff body, the Executive Secretariat; and Integrated Action Teams. 
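The MAP-21 cost-share increase described above is a simple conditional rule. A sketch under our own naming (the certification flag stands in for the Secretary of Transportation's determination, and the baseline share is an assumption for illustration, not from the statute):

```python
def map21_freight_share(on_interstate: bool, certified: bool,
                        baseline: float = 0.80) -> float:
    """Federal cost share for a freight-related road project as
    characterized in the report: 95% on Interstate highways and 90% on
    other roads when the Secretary certifies the project meets the
    specified requirements; otherwise an assumed baseline share applies.
    """
    if not certified:
        return baseline  # illustrative default, not from MAP-21 itself
    return 0.95 if on_interstate else 0.90
```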
According to the committee’s charter, the CMTS is responsible for: improving federal MTS coordination and policies; promoting the environmentally sound integration of marine transportation with other modes of transportation and with other ocean, coastal, and Great Lakes uses; developing outcome-based goals and strategic objectives for the safety, security, efficiency, economic vitality, environmental health, and reliability of the MTS for commercial and national defense requirements, as well as a method for monitoring progress toward those goals; coordinating budget and regulatory activities that impact the MTS; and recommending strategies and implementing plans to maintain and improve the MTS. In July 2008, the CMTS published a National Strategy for the Marine Transportation System (Strategy) to address challenges to improving the MTS and to ensure that the policies and actions of CMTS agencies are synchronized and coordinated. The Strategy provided a policy framework for the MTS for 2008 through 2013 and recommended 34 actions in 5 priority areas: capacity, safety and security, environmental stewardship, resilience and reliability, and finance. The Corps and DOT have programs that can be used to address three key infrastructure segments of the MTS. Specifically, the Corps is responsible for navigable waterways’ infrastructure and provides funding through its navigation program. Projects that improve or maintain ports and port connectors can receive federal funding or financing through a variety of programs administered by DOT. The Corps’ navigation program is responsible for maintaining navigable harbors, channels, and waterways and supporting structures—such as locks, dams, and jetties—for the MTS. Primary responsibilities of the navigation program include dredging to maintain channel depths at U.S. 
harbors and on inland waterways as well as planning, constructing, rehabilitating, operating, and maintaining navigation channels, locks, dams, and other structures. The Corps maintains only the federally designated channels in inland and coastal harbors, the depth and width of which are authorized by Congress. Increases in a navigation channel’s authorized depth or width—referred to as construction or “new work”—are also congressionally authorized. The Corps’ navigation program activities are generally funded from the Energy and Water Development Appropriations Acts. Funding requests in the President’s Budget for the navigation program, which primarily supports Corps activities to maintain and improve navigable waterways, decreased from $2 billion in fiscal year 2008 to $1.58 billion in fiscal year 2012. More specifically, the navigation program has decreased as a percentage of the President’s budget for the civil works program from 41 percent in fiscal year 2008 to 34 percent in fiscal year 2012. Similar decreases occurred in obligations from three of the four separate appropriations accounts that support the Corps’ maintenance and improvement activities for navigable waterways: the (1) Investigations, (2) Construction, and (3) Operation and Maintenance accounts. According to a senior Corps official, a separate Mississippi River and Tributaries appropriations account—which is used primarily for flood control—can provide additional funds for investigations, construction, and operation and maintenance. As shown in Table 1 below, our analysis of Corps data found that the Corps’ total obligations for these accounts decreased from over $3 billion in fiscal year 2009 to about $1.8 billion in fiscal year 2011, a reduction of approximately 41 percent. Most of the funds in each fiscal year are obligated for operation and maintenance activities. We identified three key challenges to maintaining and improving MTS infrastructure. 
First, aging infrastructure on the nation’s waterways, ports, and port connectors may hinder the efficient movement of freight. Second, the Corps and DOT are faced with more demands for maintaining and improving MTS infrastructure than available federal funding allows. Third, while the Corps and DOT have taken some steps to prioritize funding within their purview for all three segments of the MTS that we reviewed, there is no system-wide strategy for prioritizing MTS investments. The Corps is facing challenges maintaining and improving navigation infrastructure, such as dredging channels and repairing locks. For example, according to navigation program officials responsible for managing the deep draft Mississippi River channel between Baton Rouge and the Gulf of Mexico, increased dredging costs have precluded the Corps from being able, as of fiscal year 2011, to maintain the Mississippi River channel at its fully authorized width and depth. Figure 2 below shows the Mississippi River at the Port of South Louisiana. As a result of the channel’s shoaling, the New Orleans-Baton Rouge Steamship Pilot’s Association, which is responsible for operating vessels on the lower Mississippi River, began placing restrictions on certain sections of the river when conditions warrant. According to Corps officials, these restrictions can increase the time and cost of shipping services and the channel shoaling may have a negative impact on safety. Structures that support navigation channels, such as jetties, are also aging and in need of rehabilitation. For example, the jetties at the mouth of the Columbia River, which help to maintain the depth and orientation of the shipping channel and provide protection for ships from waves entering and exiting the river, are about 100 years old. 
The Corps’ Portland District recently completed a major rehabilitation report for the jetties, with prescribed near-term repairs, as well as more significant rehabilitation to be pursued between 2014 and 2020. The Pacific Northwest Waterways Association has indicated that these jetties are of critical importance to shippers in the region. The locks and dams that support navigation on the nation’s inland waterway system are also aging, resulting in decreased performance and costly delays. Over one-half of the Corps’ 241 locks at 196 sites have exceeded their 50-year service life, requiring increased maintenance to keep them functioning. Figure 3 shows the age of the nation’s navigation lock inventory. As locks age, repair and rehabilitation become more extensive and expensive, according to the CMTS. Corps officials told us that, at current funding levels, replacement of the Inner Harbor Navigation Canal lock (Industrial Canal), a vital link that connects the Mississippi River to the Gulf Intracoastal Waterway system in New Orleans, may not occur until 2030. Moreover, according to the Corps, the current lock, which was completed in 1921, is too small to accommodate modern-day vessels. See figure 4 below. Corps officials attributed this delay to the years of planning and community involvement needed to reach consensus on the lock design, as well as insufficient resources to address the lock replacement because of other construction projects. The planned replacement lock will provide a nearly three-fold increase in lock chamber capacity; however, Corps officials told us that project costs have also increased considerably over time, with current construction costs estimated at $1.5 billion. The Corps uses performance indicators to measure the performance of its locks. 
Each year the Corps measures its performance in meeting a number of high-priority goals, and as part of this effort, the Corps assesses the extent to which the navigation projects are meeting authorized purposes and evolving conditions. The Corps has developed performance metrics for navigation operation and maintenance activities to provide an indicator of the extent to which the Corps is meeting those goals. Recent data illustrate the effect that aging infrastructure is having on MTS performance (see table 2). These metrics show that the hours of scheduled and unscheduled lock closures because of mechanical failures have increased since fiscal year 2009. Moreover, according to a senior Corps navigation program official, there has been a consistent trend of deteriorating lock performance since 2000. For some indicators, such as the number of preventable lock closures over 24 hours, performance in 2011 was better than in 2010; however, the performance of the locks still failed to meet the Corps’ targets for 2011. Also, in fiscal year 2011, the Corps did not meet performance targets for locks at both inland waterways and coastal ports and harbors. The nation’s road connectors at ports are used by trucks with heavy loads and are often in poor condition. DOT has reported that much of the nation’s freight transportation infrastructure was developed before 1960 to serve industrial and population centers in the Northeast and Midwest. Since 1960, however, there have been fundamental changes in the American economy as the population and manufacturing have grown in the South and on the West Coast. According to DOT, the growth in freight transportation is a major contributor to congestion in urban areas, and congestion in turn affects the timeliness and reliability of freight transportation. In its December 2000 report to Congress, DOT found that many of the nation’s intermodal road connectors to ports were under-maintained. 
For example, highway connectors to ports had twice the percentage of pavement deficiencies as non-Interstate National Highway System routes. In that study, DOT found that 15 percent of the port connector mileage, which it defined as the roadway used by trucks to travel between major highways and ports, was in poor or very poor condition. More recently in 2004, DOT reported that about one-third of the port connector system was in need of additional capacity because of current congestion and that over 40 percent of the port connector mileage needs some type of pavement or lane-width improvement. Prior surface transportation legislation did not specifically address the condition of port connectors on a systematic basis, but the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) established a Freight Intermodal Distribution Pilot Grant Program to, among other things, facilitate and support intermodal freight transportation activities at the state and local levels to relieve congestion. This program included $30 million for six designated projects aimed at relieving congestion at intermodal facilities, including several ports. Efficient freight movement at ports may also be hindered by aging rail infrastructure, especially key bridges. According to officials from the MPO serving the New Orleans metropolitan area, one of the most pressing rail infrastructure needs at the Port of New Orleans is replacement of the Almonaster Avenue Bridge, which is a central link in the east-west rail traffic across the southern United States handling numerous trains per day. The existing bridge was completed in 1920 and is structurally deficient—in its closed position the bridge provides only one to two feet of vertical clearance above the average water level and must open to virtually all marine traffic. 
Although the bridge is part of the national highway system, making it eligible for federal funding, it is not a part of the state highway system, and therefore ineligible for state funding, according to state officials. At issue is whether the Port of New Orleans, which owns the bridge, should pay for its share of the $65 million bridge replacement, because the transportation benefits that would come from the bridge’s replacement would accrue to the nation. Today, the Corps is faced with more demands for maintaining and improving aging navigation infrastructure than available federal funding allows. According to Corps navigation program data, current authorization of appropriation amounts for navigation construction projects exceeds the amount appropriated by $13.5 billion, and the current estimated operation and maintenance backlog is $3.4 billion, assuming current funding levels. These data include only the federal shares and do not include the non-federal share of the costs provided by project stakeholders. Several factors have been identified as contributing to the size of the current navigation program backlog, including: authorizations that have outpaced appropriations in recent years; the aging of existing infrastructure, which requires more funds for operations, maintenance, and rehabilitation; and rapidly increasing costs to construct water infrastructure projects, in part because of price increases for construction materials and fuels. Other reasons for the increase include the costs associated with environmental mitigation and disposal of dredged material. For instance, according to the Corps, features to mitigate the environmental impact account for 45 percent of the total $652 million cost of the Savannah Harbor Expansion Project. In addition, Corps officials told us that the lack of proximate dredged-material disposal areas and the mitigation costs for feasible alternative sites dramatically increase project costs. 
Keeping up with the investment requirements of modern port operations has become a major challenge for many ports, especially at the nation’s small and medium-sized commercial ports. According to a senior MARAD official, the majority of the nation’s port infrastructure was built in the 1960s, and this infrastructure is now at the end of its useful life and in need of rehabilitation and modernization. As the TIGER program has demonstrated and as MARAD officials concur, port infrastructure development and modernization needs outweigh current funding. According to DOT, in fiscal year 2012 over 80 ports submitted TIGER pre-applications for port development projects representing a variety of port types, including large sophisticated container ports as well as smaller commercial fishing ports, and DOT provided TIGER grant funding to 8 port infrastructure projects. One of the challenges facing ports is installing adequate infrastructure to handle new larger post-Panamax vessels, which are expected to begin calling at U.S. Gulf and East Coast ports after the expansion of the Panama Canal is completed in 2014. Post-Panamax vessels, for example, require bigger cranes, which can cost over $25 million each, and more staging areas to accommodate peak cargo flow. Some ports, such as the Georgia Ports Authority’s Garden City Terminal at the Port of Savannah, have invested heavily to ensure that the port is ready to accommodate the new larger vessels. According to DOT’s most recent estimate, $4.3 billion is needed to improve the condition of the nation’s port connectors. We have previously reported that the nation’s surface transportation system, including port connectors, is under growing strain, and the cost to repair and upgrade the system to safely and reliably meet current and future demands may exceed what the nation can afford. Both the Corps and DOT are taking some steps to prioritize funding within their purview for all three segments of the MTS that we reviewed. 
We have previously reported that a systematic approach to help guide decisions on federal investment in the MTS is needed because of the growing awareness of, and agreement about, the need to view the various transportation modes that comprise the MTS from an integrated standpoint, particularly for the purposes of developing and implementing a federal investment strategy. The Corps has taken steps to prioritize limited funding within its navigation program and civil works budget process. Within the navigation program specifically, the Corps has developed the Operational Condition Assessment tool for all inland navigation structures, such as locks and dams, to ensure that structures are consistently assessed and to provide relative risk ratings and project ratings. The Corps is developing a similar tool for rating coastal navigation structures, such as jetties, and Corps officials expect this tool to inform the Corps’ fiscal year 2014 budget. For navigation channels, the Corps is developing a uniform framework tool, anticipated to be available for fiscal year 2015, to assess the condition of all navigation channels. With respect to its civil works program, the Corps began using performance-based budgeting in fiscal year 2006 as a way to focus funding requests on those projects with the highest anticipated return on investment. Under the current civil works budget formulation process, the Corps uses performance metrics and a benefit-to-cost ratio to evaluate projects’ estimated future outcomes and gives priority to those with the highest expected returns for the national economy and the environment. In part, the Corps focuses on anticipated outcomes because most of the construction and investigation projects being considered in its civil works budget requests are new or have not yet been completed, and thus have not generally begun to achieve benefits. 
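The kind of budget screen this performance-based approach implies can be illustrated with a short sketch: rank candidate projects by benefit-to-cost ratio and fund the highest expected returns first until the budget is exhausted. The project names, dollar figures, and budget below are hypothetical illustrations, not Corps data, and the greedy ranking is a simplification of the Corps’ actual formulation process.

```python
# Minimal sketch of ranking projects by benefit-to-cost ratio (BCR) and
# funding the highest expected returns first, in the spirit of the Corps'
# performance-based budget formulation. All names and dollar figures
# (in millions) are hypothetical, not actual Corps data.

def bcr(annual_benefits: float, annual_costs: float) -> float:
    """Benefit-to-cost ratio: expected benefits per dollar of cost."""
    return annual_benefits / annual_costs

projects = [
    {"name": "Lock rehab A", "benefits": 90.0, "costs": 30.0},          # BCR 3.0
    {"name": "Channel deepening B", "benefits": 120.0, "costs": 80.0},  # BCR 1.5
    {"name": "Jetty repair C", "benefits": 40.0, "costs": 10.0},        # BCR 4.0
]

budget = 40.0  # available funding, in millions

# Sort by BCR, highest first, then fund each project that still fits.
ranked = sorted(projects, key=lambda p: bcr(p["benefits"], p["costs"]), reverse=True)

funded = []
for p in ranked:
    if p["costs"] <= budget:
        funded.append(p["name"])
        budget -= p["costs"]

print(funded)
```

In this toy case the two smaller, higher-return projects are funded and the costly mid-return channel deepening is deferred, which mirrors the trade-off the report describes between limited appropriations and a large construction backlog.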
Because the Operation and Maintenance account includes projects that have already been constructed, the Corps incorporates ongoing performance information, such as assessments of whether infrastructure meets current engineering and industry standards. Nevertheless, the number of investigations and construction projects receiving appropriations is typically greater than the number requested, and as we have previously reported, the Corps’ budget presentation does not include an explanation of the relative priority given to projects or how they are evaluated against each other. In addition to these efforts, the Corps recently issued a report to provide advice on how Congress should address the need for additional port and inland waterway modernization to accommodate post-Panamax vessels. The Corps reported that it is critical that the U.S. develop and move forward with a strategic vision for ensuring adequate investment in maintaining navigation infrastructure and for facilitating the strategic targeting of investments to ensure that the United States is ready for these larger vessels when the expanded Panama Canal opens in 2014. The Corps also presented a variety of financing options to initiate a national discussion of possible paths to meet the challenge of modernizing MTS infrastructure. DOT has a more limited ability to prioritize funding for port infrastructure projects given the structure of federal surface transportation funding. The vast majority of DOT funding goes directly to state DOTs through formulas where decisions about transportation priorities are made at the state and local level. In fiscal year 2011, FHWA provided states with over $39 billion in federal-aid highway funding. The statewide transportation planning process is the forum through which states decide how to spend significant amounts of federal transportation funds. 
This process is informed by MPOs that lead transportation planning in urbanized areas—geographic areas with populations of 50,000 or more. Although states must comply with federal planning requirements administered jointly by FHWA and the Federal Transit Administration, states have considerable discretion to allocate federal funds and select projects. According to a senior DOT official, states and MPOs make the decisions about how best to prioritize their formula funding and how to integrate port infrastructure projects into their transportation plans. However, as we have previously reported, data limitations and the lack of performance measures for these projects can make it difficult to quantify the benefits of these projects and to achieve state-wide or community support. DOT’s competitive grant and credit programs provide one opportunity for the agency to prioritize funding for port infrastructure, yet funding for these projects is relatively limited compared to formula funding. For example, in fiscal year 2012, DOT had $500 million in TIGER funds to obligate across all modes for a variety of transportation projects and $122 million in budget authority for the TIFIA program. The new surface transportation legislation, MAP-21, significantly expands the TIFIA program by authorizing $750 million in budget authority in fiscal year 2013 and $1 billion in fiscal year 2014 to pay the subsidy cost of supporting federal credit. According to FHWA, a $1 billion TIFIA authorization of budget authority will support about $10 billion in actual lending capacity. According to DOT, actual TIFIA lending capacity is subject to the calculation of the estimated subsidy cost for each credit assistance transaction. The amount varies based on the risk profile of the project and the repayment stream. According to DOT, actual original subsidy rates have ranged from less than 1 percent to over 15 percent of the TIFIA credit assistance received. 
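The leverage arithmetic behind these figures can be sketched briefly: budget authority covers only the estimated subsidy cost of each loan, so approximate lending capacity is budget authority divided by the subsidy rate. The function name and the sample rates below are our own illustrations, drawn from the ranges cited above.

```python
# Hedged illustration of TIFIA budget-authority leverage. Because budget
# authority pays only the subsidy cost of federal credit, the supportable
# loan volume is roughly budget authority divided by the subsidy rate.

def lending_capacity(budget_authority: float, subsidy_rate: float) -> float:
    """Approximate loan volume supportable by a given budget authority."""
    return budget_authority / subsidy_rate

budget_authority = 1_000_000_000  # fiscal year 2014 authorization, $1 billion

# FHWA's roughly 10x estimate is consistent with an average subsidy rate
# near 10 percent; actual rates have ranged from under 1% to over 15%.
for rate in (0.01, 0.10, 0.15):
    print(f"subsidy rate {rate:.0%}: capacity ${lending_capacity(budget_authority, rate):,.0f}")
```

At a 15 percent subsidy rate, $1 billion supports only about $6.7 billion in loans, while at 1 percent it supports roughly $100 billion, which is why DOT notes that actual capacity depends on each project’s risk profile and repayment stream.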
See GAO, Surface Transportation: Financing Program Could Benefit from Increased Performance Focus and Better Communication, GAO-12-641 (Washington, D.C.: June 21, 2012). MAP-21 also calls for a number of significant program reforms, including a 10 percent set-aside for rural projects and an increase in the share of eligible project costs from 33 percent to 49 percent. Projects that received credit assistance through TIFIA tend to be large, high-cost highway projects. Even with the additional budget authority authorized for the TIFIA program, DOT officials told us that the funding process is driven by applicants as opposed to a national assessment of priority. Moreover, port projects may not always compete well against other transportation-funding projects. According to DOT officials, ports may be less accustomed to the processes and procedures involved in applying for federal funds, making it harder for them to compete for competitive grants and loans. Given the TIGER program’s short timelines, some ports may have difficulty meeting application deadlines because of the complexity of their proposals. Additionally, port applicants may not be as familiar with developing and completing federal environmental review requirements, making it difficult to remain eligible for funding. According to one senior MARAD official, many ports lack sufficient expertise to conduct early planning or are not well positioned to leverage existing relationships with state DOTs. As a result, some ports may be less prepared to participate in DOT’s competitive funding processes and compete against applicants with more experience participating in the federal funding process. A number of efforts are under way to address MTS challenges through better coordination of federal investments. Specifically, the Corps and DOT are taking steps to better coordinate MTS infrastructure investments between the two agencies. 
Other federal efforts, such as a government-wide task force, advisory groups, and an interagency coordination committee, also have been established to address MTS issues. While these federal efforts to align and better coordinate MTS infrastructure investments are good steps, some are limited in their scope and, for others, it is not clear how effective they will be in addressing the complex and wide-ranging challenges to maintaining and developing MTS infrastructure. In March 2012, DOT and the U.S. Department of the Army signed a memorandum of understanding (MOU) to identify and capitalize on opportunities to improve the nation’s transportation infrastructure investments. Specifically, DOT and the U.S. Department of the Army agreed to (1) develop project prioritization criteria consistent to the greatest degree possible, (2) look for opportunities to reflect national priorities for waterside and landside infrastructure investment alignment, and (3) coordinate project evaluation and selection processes as they relate to DOT grant programs and the Corps’ project prioritization. Although it is too early to assess progress made in achieving these objectives, senior DOT and Corps officials told us that the MOU played an important role in ensuring interagency coordination on MTS infrastructure investments for the last round of the TIGER program. However, as noted above, the bulk of DOT’s transportation funding is directed through state and local transportation agencies. MARAD, the one federal entity charged with an MTS-wide mission, has few programs to address system-wide challenges and a limited field presence. MARAD is developing the Port Infrastructure Development Program to improve the state of repair of all U.S. ports and enhance the competitiveness of ports for public and private funds through comprehensive planning. 
According to a senior MARAD official, the program is being designed to create a level playing field for all ports, including small- and medium-sized ports, to attract private-sector financing, and it is being developed together with MTS stakeholders. However, despite MARAD’s efforts to obtain consensus on the program from MTS stakeholders, the program has not been funded, and MARAD officials acknowledge that the agency has more work to do to ensure that its staff have the right skill set and expertise needed to manage the program. Moreover, several MTS stakeholders whom we met with during our site visits told us that MARAD does not currently have a major role to play in MTS infrastructure development. For example, local transportation-planning officials we spoke to in one major coastal city said that MARAD representatives are not at the table during the MPO’s planning process, and therefore, DOT is missing an opportunity to coordinate investments in the various MTS segments. The recently enacted MAP-21 also provides an opportunity to better coordinate investments in the MTS. First, MAP-21 directly addresses the fragmented nature of DOT programs, including those that address ports and port connectors, by consolidating the number of federal-aid highway programs to focus resources on key national goals. While MTS stakeholders we met with generally told us they appreciated having access to a variety of federal transportation programs that can be used for surface transportation projects, we have previously reported on coordination challenges within DOT that result from a modal approach to administering and funding programs. Second, MAP-21 establishes a national freight policy and mandates that DOT develop a National Freight Strategic Plan and a national freight network. 
Specifically, in the development of the National Freight Strategic Plan, MAP-21 requires DOT to consult with state departments of transportation and other appropriate public and private transportation stakeholders. As we have previously reported, to develop an effective strategic plan, agencies should involve their stakeholders, assess their internal and external environments, and align their activities, core processes, and resources to support mission-related outcomes. As noted above, both the Corps and DOT have taken some steps to invest in their respective segments of the MTS. However, there has been limited coordination of MTS investments system-wide. The National Freight Strategic Plan is an opportunity to address the MTS system-wide by considering the Corps’ future investments in navigable waterways. Involving the Corps in the development of that plan is particularly important given the nexus between freight and the entire MTS, since the vast majority of the nation’s freight is imported and exported via navigable waterways through our nation’s ports. In addition to these Corps and DOT-specific efforts, there are a number of other federal efforts that have been recently created to address MTS infrastructure investment system-wide. On July 19, 2012, the White House established a Task Force on Ports to develop federal strategies to address coastal port infrastructure investments. This high-level effort is designed to address specific issues and provide immediate benefits to, among other things, help ensure that the nation’s navigable waterways and ports are prepared to handle any increase in trade expected from the expansion of the Panama Canal in 2014. 
In particular, the task force plans to examine challenges to coastal ports including increased competition from ports in Canada and the Caribbean and is tasked with developing a strategy to inform future investment decisions and identify opportunities for improved coordination and streamlined environmental review of investments in port-related infrastructure. According to the White House, the establishment of the task force responds to calls from state and local governments, as well as ports and other maritime stakeholders, for a more strategic framework for allocating federal investments. While this particular effort targeting coastal ports provides an immediate focus on some of the most pertinent MTS infrastructure challenges, it is too soon to know how the task force’s efforts will be realized and whether it will provide the long-term commitment and management needed to address MTS challenges. We also identified two federal advisory groups established to advise the federal government agencies on system-wide MTS issues. Federal advisory groups can play an important role in the development of policy and government regulations by providing advice to federal agency policymakers. For example, the Marine Transportation System National Advisory Council (MTSNAC) was established to, among other items, provide advice to the Secretary of Transportation via the MARAD Administrator on marine highways and ports and their road, rail, and marine highway connections. Members of MTSNAC reflect a cross section of maritime industries and port and water resources stakeholders from the private sector, academia, labor, and federal, state and local entities. In addition, the Advisory Committee on Supply Chain Competitiveness was recently established to advise the Secretary of Commerce on the necessary elements of a comprehensive national freight policy designed to support U.S. export growth and competitiveness, among other items. 
The committee consists of 40 private-sector members, including representatives from supply chain firms and their associations, stakeholders, academia, community organizations, and others directly affected by the supply chain. These two federal advisory groups provide an opportunity for federal agencies involved in the MTS to obtain input from internal and external stakeholders such as academics, industry associations, or other agencies to address MTS challenges. The Committee on the Marine Transportation System (CMTS), created to address a broad range of MTS challenges, provides another opportunity to coordinate MTS infrastructure investment system-wide. Established in 2004 by a directive from the President in the U.S. Ocean Action Plan, the CMTS is a long-standing committee designed to foster a partnership of federal agencies with responsibility for the MTS and to provide a forum through which agencies coordinate and take action to address a wide range of MTS challenges. For example, the CMTS reported in 2010 that multi-agency efforts to address navigation technology issues could lead to significant improvements to navigation safety information, especially in and around ports. Specifically, the Corps, the National Oceanic and Atmospheric Administration, and the U.S. Geological Survey have developed, published and adopted common data standards. According to the CMTS, these efforts provide improved delivery of navigation information and enable agencies to better share information of navigational value. Similarly, to build on the MOU signed between the DOT and the U.S. Department of the Army to coordinate and improve infrastructure investment between the two agencies, the CMTS Coordinating Board agreed in June 2012 to establish a CMTS Infrastructure Investment Integrated Action Team to provide a forum for participation by other agencies that are stakeholders in MTS infrastructure. 
In July 2008, the CMTS published the National Strategy on the Marine Transportation System (Strategy) to provide a framework and 5-year action plan to address MTS challenges. The Strategy is intended to present the most pressing challenges facing the MTS and provide a framework for addressing MTS needs through 2013. It recommends 34 actions to address these issues, some of which touch upon key challenges we identified. For example, to address challenges related to the prioritization of federal investments in the MTS, it recommends studying approaches to prioritizing how federal dollars should be allocated among competing priorities as well as studying how best to coordinate allocation of federal funds for projects across agencies. Similarly, to address infrastructure capacity issues the CMTS recommended that agencies publish valid, reliable, and timely data on the MTS including cargo movements, capacity, and productivity as well as develop performance measures to assess the productivity of the MTS and the risk of potential infrastructure failures. The CMTS has taken steps to address some of the recommended actions included in the Strategy. According to a 2010 implementation plan, the CMTS developed a list of six priority actions taken from the Strategy’s 34 recommended actions and identified 3 other priorities that address emerging issues. According to CMTS officials, when at least three CMTS members agree to address a long-term MTS issue, they may form an Integrated Action Team or subcommittee. CMTS guidance states that, once formed, these teams operate on a consensus basis and are responsible for preparing an action plan that, among other things, includes (1) a list of deliverables, (2) a schedule for completing them, (3) identification of the parties responsible for completing them, and (4) funding sources available. 
For example, the CMTS Coordinating Board established the Research and Development Integrated Action Team in March 2009 to respond to several recommended actions included in the Strategy, including the need for valid data and for the development of performance measures. CMTS members may also establish task teams to address short-term issues; however, these teams are not responsible for developing an action plan. For example, in December 2011 the National Export Initiative task team was established in support of the President’s National Export Initiative to, among other things, monitor the availability of export containers. CMTS officials noted that, although the National Export Initiative is not addressed in the Strategy, the CMTS must be flexible to adapt to and address new MTS issues as they emerge. While the CMTS has taken steps to address a number of recommended actions identified in the Strategy and has made progress facilitating interagency cooperation, it is unclear if those steps have achieved their intended results. Moreover, we found some limitations to the implementation of the Strategy, including the following:

The CMTS has not kept the Strategy up to date and has no plan to replace the Strategy’s 5-year action plan. Although the CMTS website states that the Strategy is a “living document” to be enhanced and updated, CMTS officials told us that agencies had not updated the Strategy since it was published in 2008 and have no current plans to do so. As a result, the Strategy does not specifically address new and emerging challenges, such as the President’s National Export Initiative. CMTS officials told us that updating the Strategy would be useful and that—should sufficient resources be available—the CMTS would review the recommendations of the Strategy and update them with respect to current and projected needs of the MTS. An up-to-date Strategy that reflects the most important challenges can help ensure agencies remain focused on key priorities and help stakeholders, including the Congress, target limited resources to those priorities.

The CMTS did not incorporate clear desired results, specific milestones, and outcome-related performance measures throughout the Strategy to help ensure steps taken achieve the intended results. While CMTS member agencies have taken steps to introduce accountability mechanisms through action plans developed by individual Integrated Action Teams, action plans were only developed for those areas or activities where consensus existed among agencies to establish them. For other areas, the Strategy’s recommended actions remain—as a CMTS response to Congress describes—broad in scope, rather than finite, individually defined tasks. While identifying broad objectives is a good first step, without a clearly defined and articulated “end-state” for each recommended action, it is difficult to evaluate the extent to which progress has been made or determine whether the CMTS is achieving its intended results. Furthermore, CMTS officials told us that identifying broad actions was the only way to gain consensus among all CMTS member agencies when the Strategy was developed. However, without incorporating accountability mechanisms throughout the Strategy, agency and congressional decision-makers may lack information needed to evaluate progress and determine the extent to which agency activities are achieving their intended results to address MTS challenges. We have previously identified desirable characteristics that we believe would provide additional guidance to responsible parties for developing and implementing national strategies. Those characteristics include incorporating accountability mechanisms, such as the clear identification of priorities, specific milestones, and outcome-related performance measures. National strategies are intended to provide broad direction and guidance—rather than be prescriptive, detailed mandates—to the relevant implementing parties. Nonetheless, a more detailed strategy can facilitate implementation and help agencies achieve strategic goals.

The CMTS does not have a process for reporting the extent to which the Strategy’s recommended actions have been addressed. Such a process could enable more effective oversight and accountability. Although the CMTS created reports in 2009 and 2010, these reports describe its annual accomplishments and do not address all of the Strategy’s recommended actions. For example, the CMTS annual report for 2010 states that it summarizes “the high points and accomplishments achieved” by the CMTS. We have previously reported that including a process for reporting on progress could help agencies implement national strategies more effectively. According to CMTS officials, with no budget and limited member resources, the Strategy’s recommended actions were prioritized, resulting in a set of six top priority actions, with the work done on these priority actions reflected in the 2009 and 2010 annual reports. However, without a schedule for regular reporting on the extent to which all recommended actions included in the Strategy have been addressed, agency and congressional decision-makers lack key information needed to hold agencies accountable and enable effective oversight.

Finally, according to the CMTS, activities undertaken by the CMTS are dependent on member agencies’ ability to dedicate resources and staff support. CMTS officials told us that commitment of necessary staff time and resources to CMTS activities is driven by CMTS member interest in the work to be done and the availability of resources. 
Specifically, CMTS members that participate in Integrated Action Teams or task teams provide time and resources to carry out their responsibilities, which range from full staff support to providing comments on documents. In addition, MARAD, the Corps, and the National Oceanic and Atmospheric Administration dedicate one full-time senior staff member to the CMTS’s Executive Secretariat. Managing competing priorities and coordinating interagency actions are key challenges given the complex nature of the MTS and the variety of task forces, advisory groups, and other MTS stakeholders involved in supporting the MTS. However, these challenges also highlight the benefits and opportunities of ensuring that the Strategy remains up to date, reflects current conditions, and is focused on the areas of greatest need. Given aging MTS infrastructure, the uncertainty around the Panama Canal expansion and its potentially significant impact on the MTS, and the renewed focus on ports and their importance to the U.S. economy, improving the effectiveness of federal MTS efforts is critical. There are a variety of efforts under way—recent and long-standing—to help the wide range of MTS stakeholders coordinate to address system-wide prioritization of MTS investments. For example, efforts such as the recently announced White House Task Force on Ports directly address some of the challenges facing the nation’s MTS infrastructure. While the task force plans to build on some of the more recent steps taken to improve coordination of port-related responsibilities, it is too soon to know how the task force will proceed and the extent to which it will leverage more established, long-standing efforts in this area. Moreover, the recent proliferation of efforts to address system-wide investment in the MTS runs the risk of being less effective unless properly coordinated. 
The recently passed MAP-21 will focus efforts on improving freight mobility and the surface infrastructure that supports it, but it also provides an opportunity to better coordinate MTS investments system-wide. Besides establishing a framework for a national freight policy, MAP-21 requires DOT to develop a National Freight Strategic Plan in consultation with appropriate state DOTs and other appropriate private and public stakeholders. While the National Freight Strategic Plan requirements do not specifically mention consultation with the Corps and its plans to maintain and develop the nation’s navigable waterways, consideration of these waterside infrastructure investments is important to strategically investing in the MTS system-wide. Considering all MTS segments— navigable waterways, ports, and port connectors—and coordinating the prioritization of infrastructure investments between the Corps and DOT will help to ensure that limited resources are efficiently targeted and invested. The CMTS, a long-standing interagency coordinating committee, is tasked with addressing a wide array of MTS challenges. The committee has made some progress facilitating information sharing, coordinating member agencies and taking some actions to address a variety of MTS issues. However, it is unclear if the committee’s actions have improved the MTS. Given the breadth and complexity of the MTS challenges and the numerous stakeholders and on-going efforts, an up-to-date Strategy with mechanisms to measure progress and hold member agencies accountable for these actions is critical. Interagency coordinating bodies such as the CMTS face a variety of obstacles and gaining consensus on priorities, measuring progress and holding member agencies accountable can be challenging. 
However, without developing a sound Strategy that considers the changing landscape of MTS efforts, the CMTS will not be able to capitalize on its position as an established coordinating body or to effectively contribute to the growing number of federal efforts to support the nation’s Marine Transportation System. To help ensure coordination of U.S. Army Corps of Engineers and Department of Transportation infrastructure investments in the Marine Transportation System, we recommend that the Secretary of Transportation take the following two actions: 1) Direct the Administrator of the Federal Highway Administration to inform the development of the National Freight Strategic Plan with information from the U.S. Army Corps of Engineers’ planned investments in the nation’s navigable waterways. 2) As the Chair of the Committee on the Marine Transportation System, ensure the review and update, as needed, of the National Strategy for the Marine Transportation System. In ensuring the review and update of the National Strategy for the Marine Transportation System, the Secretary should: establish accountability mechanisms—such as developing clear and desired results, specific milestones, and outcome-related performance measures—for the recommended actions of the National Strategy for the Marine Transportation System, and establish and implement a schedule for regular reporting of progress made in addressing the recommended actions of the National Strategy for the Marine Transportation System. We provided a draft of this report to the Corps and DOT for review and comment. DOT agreed to consider the report’s recommendations. The Corps and DOT also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Defense, Secretary of Transportation, and the Chief of Engineers and the Commanding General of the U.S. Army Corps of Engineers. 
In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or stjamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. The objectives of this report are to (1) identify programs the U.S. Army Corps of Engineers (Corps) and the Department of Transportation (DOT) administer to maintain or improve the Marine Transportation System (MTS); (2) determine the key challenges to maintaining and improving the MTS; and (3) discuss opportunities that may exist for the federal government to improve the effectiveness of its role in the MTS. To identify programs the Corps and DOT administer to maintain or improve the MTS, we reviewed and analyzed federal program documentation, including authorizing legislation, federal program guidance, and other federal program reports describing federal roles and responsibilities for MTS infrastructure. We reviewed legislation related to surface and MTS infrastructure programs and funding, including the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) and the new surface transportation reauthorization, Moving Ahead for Progress in the 21st Century Act (MAP-21). We interviewed officials from the Corps’ civil works navigation program at the headquarters, division, and district level to determine how the Corps maintains and improves navigation infrastructure on inland and coastal waterways. We also interviewed officials from DOT, including officials from the Federal Highway Administration (FHWA), Federal Railroad Administration (FRA), Maritime Administration (MARAD), and Office of the Secretary of Transportation to confirm the federal transportation programs and discuss how these programs are used to support ports and port connectors. 
In addition, we interviewed officials from the Waterborne Commerce Statistics Center to determine data efforts to support the Corps’ navigation program, and we reviewed transportation statistics—including freight commodity and port statistics—from the Bureau of Transportation Statistics. We also conducted interviews with a variety of industry associations, including the American Association of Port Authorities, American Association of State Highway and Transportation Officials, American Trucking Associations, the Association of American Railroads, and the Waterways Council, Inc., to obtain their perspectives on federal Corps and DOT programs. We obtained program budget data for programs that may be used to support MTS infrastructure by reviewing budget documentation, including annual budget justifications, from the Corps and DOT. We used navigation project obligations data provided by the Corps to determine program obligations for the Investigations, Construction, and Operation and Maintenance accounts. To determine obligations for DOT programs, we developed a short data collection instrument to collect and analyze financial obligations data. We administered the data collection instrument to obtain data from a total of 16 DOT programs, including 11 FHWA programs, 2 FRA programs, 2 MARAD programs, and 1 Office of the Secretary of Transportation program. We conducted one pretest with FHWA to test the use of our instrument for grant and formula funding programs. We also conducted one pretest with FRA to test the use of our instrument for credit programs. Based on agency input, we revised the data collection instrument and submitted it to the relevant agency for the programs that we identified. We received a 100 percent response rate. 
We used Corps navigation program data to determine the current backlog for navigation construction and operations and maintenance projects, and reviewed published DOT reports to identify the backlog of projects affecting ports, including port connectors. In determining the reliability of the financial data, we reviewed relevant documentation about the agencies’ data collection and quality assurance processes, talked with knowledgeable officials about these data, and compared these data against other sources of published information to determine data consistency and reasonableness. We determined that the data were sufficiently reliable for the purposes of this report. To determine key challenges to maintaining and improving the MTS, we reviewed GAO work on surface transportation programs and issues related to freight transportation. Our work is informed by prior GAO reports on freight mobility, intermodalism, and marine transportation finance. We also reviewed prior GAO reports assessing the Corps’ organization, budget formulation process, project delivery process, and programs. To obtain current examples of challenges facing port stakeholders at the state and local level, we conducted site visits to the Port of New York and New Jersey, Port of New Orleans, Port of Portland, Port of Savannah, the Port of South Louisiana, and the Port of Vancouver (USA). We identified these ports using the following criteria: existence of current or recently completed navigation, port, or port connector expansion projects; ranking by total tonnage (domestic and foreign), 2010; ranking by container traffic (domestic and foreign), 2010; ranking by total value of foreign trade shipments, 2010; and geographic diversity. For our final selection, we chose larger ports (both in tonnage and container traffic) in order to get representation from (1) both container and bulk ports, and (2) river and coastal ports. 
We also selected ports that had ongoing or completed expansion projects funded or financed by the federal government and for which the site visits would provide some geographic diversity in experiences. We also included the Port of Vancouver (USA), a small port based on tonnage and container traffic, to provide some context and comparison to larger ports. The results of these site visits are not generalizable, but do provide insights regarding state, local, and private-sector experiences maintaining and improving MTS infrastructure. During the site visits, we collected and reviewed relevant documentation on port operations, projects, and trade statistics. We also interviewed a range of MTS stakeholders during each site visit, including officials from the port, Corps division and district offices, state DOTs, and Metropolitan Planning Organizations (MPO). Table 3 lists the stakeholders that we met with during each site visit. To identify and assess opportunities for the federal government to improve the effectiveness of its role in the MTS, we reviewed documentation from the Committee on the Marine Transportation System (CMTS), including the CMTS Charter and the National Strategy for the Marine Transportation System (Strategy). We interviewed staff from the CMTS Executive Secretariat and observed a session of the Coordinating Board to determine actions taken by the CMTS to implement the Strategy, as well as any opportunities for improvement. During interviews with Corps and DOT officials, and industry associations, we also asked about their perspectives on the federal government role in maintaining and improving the MTS. In assessing the implementation of the Strategy, we reviewed prior GAO reports on enhancing and sustaining federal agency collaborative efforts and evaluated progress in implementing the Strategy. We conducted our review from November 2011 to November 2012 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix includes DOT programs at the time of our review that may fund MTS infrastructure and their obligations data for fiscal years 2009 to 2011. The data reflect overall obligations for each program, and do not represent support for MTS infrastructure projects specifically. The list of programs is not exhaustive; thus, other DOT programs may exist that could fund MTS infrastructure projects. In addition to the contact named above, Sharon Silas (Assistant Director); Jonathan Carver; William Colwell; Bradley Dubbs; Geoff Hamilton; Carol Henn; Vondalee Hunt; Delwen Jones; Joshua Ormond; and Elizabeth Wood made key contributions to this report.
The MTS is integral to the efficient movement of the nation's freight. The MTS includes navigable waterways, ports, and port connectors, such as roads and railways that provide access to the Interstate highway system and the national rail network. According to DOT, approximately 90 percent of America's overseas imports and exports by tonnage move by ship. Consequently, the continued maintenance and improvement of the MTS is essential to sustaining the nation's competitive position in the global economy. This report examines (1) Corps and DOT programs that can be used to maintain or improve the MTS, (2) key challenges to maintaining and improving the MTS, and (3) opportunities to improve the effectiveness of the federal role in the MTS. GAO analyzed information from the Corps and DOT, interviewed relevant agency officials and industry associations, and conducted site visits to six ports--selected based on tonnage, geographic representation, and other factors--to discuss federal, state, and local investment in MTS infrastructure. The U.S. Army Corps of Engineers (Corps) and the Department of Transportation (DOT) use a variety of programs to maintain and improve Marine Transportation System (MTS) infrastructure. The Corps is the lead federal agency responsible for maintaining and improving navigable waterways. Corps data show that obligations for navigable waterways have decreased from over $3 billion in fiscal year 2009 to about $1.8 billion in fiscal year 2011. Most annual DOT funding is provided to states through formulas, and states determine which projects to fund. For example, in fiscal year 2011, the Surface Transportation Program provided $9.5 billion to states for a variety of transportation projects, which may have included port improvements. 
However, because DOT does not specifically track formula funding used to maintain or improve ports or port connectors, officials were unable to provide GAO with information on the extent to which these funds were used for port improvements, although the officials stated that the number of port-specific projects was likely small. Several DOT grant and credit programs can also provide specific funding to ports, though ports are primarily responsible for maintaining and improving infrastructure on port property. Aging MTS infrastructure, a growing backlog of projects, and the lack of an MTS system-wide prioritization strategy represent key challenges for the Corps and DOT to maintain and improve MTS infrastructure. For example, some structures that support navigation, such as locks, are over 100 years old, and their condition has resulted in deteriorating performance and costly delays to shippers. The Corps and DOT have taken some steps to prioritize their individual funding decisions, but none of these efforts consider MTS infrastructure system-wide. While the Corps is prioritizing projects within its navigation program, DOT has a more limited ability to prioritize funding for port infrastructure projects because the majority of DOT's funding goes to the states, where decisions about transportation priorities are made at the state and local level. Two efforts in particular provide opportunities to improve the effectiveness of federal support to MTS infrastructure. First, the recently enacted Moving Ahead for Progress in the 21st Century Act requires DOT to develop a National Freight Strategic Plan and to consult with appropriate transportation stakeholders. However, DOT and the Corps have historically had limited coordination involving system-wide MTS investments. Involving the Corps in the development of the National Freight Strategic Plan is particularly important given the critical role navigable waterways play in freight movement. 
Second, the Committee on the Marine Transportation System (CMTS), a partnership of federal agencies chaired by DOT, has the opportunity to take further actions to help ensure that its 2008 National Strategy for the Marine Transportation System is reviewed and updated to reflect new and emerging challenges, and that its 34 recommendations to improve the MTS are implemented. One recommendation included studying approaches to allocate federal dollars among competing transportation priorities. However, the Strategy has not been reviewed and updated since the CMTS published it in 2008 and it does not incorporate accountability mechanisms, such as identifying desired results or performance measures, for the recommended actions. Such mechanisms would help ensure that the actions CMTS recommended to improve the MTS are indeed implemented. DOT should (1) inform the development of the National Freight Strategic Plan with the Corps' planned investments in the nation's navigable waterways and (2) ensure the review and update of the National Strategy for the MTS to include accountability mechanisms for the Strategy's recommended actions. DOT agreed to consider the report's recommendations.
Since 1955, the executive branch has encouraged federal agencies to obtain commercially available goods and services from the private sector when the agencies determine that such action is cost-effective. OMB formalized the policy in its Circular A-76, issued in 1966. In 1979, OMB supplemented the circular with a handbook that included procedures for competitively determining whether commercial activities should be performed in-house, by another federal agency through an interservice support agreement, or by the private sector. OMB has updated this handbook several times. Under A-76, commercial activities may be converted to or from contractor performance either by direct conversion or by cost comparison. Under direct conversion, specific conditions allow commercial activities to be moved from government or contract performance without a cost comparison study (e.g., for activities involving 10 or fewer civilians). Generally, however, commercial functions are to be converted to or from contract performance by cost comparison, where the estimated cost of government performance of a commercial activity is compared with the cost of contractor performance in accordance with the principles and procedures set forth in Circular A-76 and the revised supplemental handbook. As part of this process, the government identifies the work to be performed (described in the performance work statement), prepares an in-house cost estimate on the basis of its most efficient organization, and compares it with the winning offer from the private sector. According to A-76 guidance, an activity should not be moved from one sector to the other (whether public to private or vice versa) unless doing so would save at least $10 million or 10 percent of the personnel costs of the in-house performance (whichever is less). OMB established this minimum cost differential to ensure that the government would not convert performance for marginal savings. 
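The minimum cost differential described above amounts to a simple decision rule: a conversion should proceed only if projected savings meet or exceed the lesser of $10 million or 10 percent of in-house personnel costs. The sketch below is purely illustrative of that arithmetic; the function and variable names are our own and are not part of Circular A-76 or its handbook.

```python
def meets_minimum_cost_differential(in_house_personnel_cost, projected_savings):
    """Illustrative check of the A-76 minimum cost differential:
    work should move between sectors only if projected savings are
    at least $10 million or 10 percent of in-house personnel costs,
    whichever is less."""
    threshold = min(10_000_000, 0.10 * in_house_personnel_cost)
    return projected_savings >= threshold

# For a function with $50 million in in-house personnel costs, the
# threshold is min($10M, $5M) = $5 million.
print(meets_minimum_cost_differential(50_000_000, 6_000_000))  # True
print(meets_minimum_cost_differential(50_000_000, 4_000_000))  # False
```

Note that for any function with in-house personnel costs above $100 million, the $10 million figure governs; below that, the 10 percent figure does.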
The handbook also provides an administrative appeals process. An eligible appellant must submit an appeal to the agency in writing within 20 days of the date that all supporting documentation is made publicly available. Appeals are supposed to be adjudicated within 30 days after they are received. Private-sector offerors who believe that the agency has not complied with applicable procedures have additional avenues of appeal. They may file a bid protest with GAO or file an action in court. Circular A-76 requires agencies to maintain annual inventories of commercial activities performed in-house. A similar requirement was included in the Federal Activities Inventory Reform (FAIR) Act of 1998, which directs agencies to develop annual inventories of their positions that are not inherently governmental. The fiscal year 2001 inventory identified approximately 841,000 full-time equivalent commercial-type positions governmentwide, of which approximately 413,000 were in the Department of Defense (DOD). DOD has been the leader among federal agencies in recent years in its use of OMB Circular A-76; the circular’s use by other agencies has been very limited. However, in 2001, OMB signaled its intention to direct greater use of the circular on a government-wide basis. In a March 9, 2001, memorandum, OMB directed agencies to take action in fiscal year 2002 to directly convert or complete public-private competitions of not less than 5 percent of the full-time equivalent positions listed in their FAIR Act inventories. Subsequent guidance expanded the requirement to 15 percent by fiscal year 2003, with the ultimate goal of competing at least 50 percent. Although comprising a relatively small portion of the government’s overall service contracting activity, competitive sourcing under Circular A-76 has been the subject of much controversy because of concerns about the process raised both by the public and private sectors. 
Federal managers and others have been concerned about the organizational turbulence that typically follows the announcement of A-76 studies. Government workers have been concerned about the impact of competition on their jobs, the opportunity for input into the process, and the lack of parity with industry offerors to protest A-76 decisions. Industry representatives have complained about unfairness in the process and the lack of a level playing field between the government and the private sector in accounting for costs. Concerns have also been raised about the adequacy of the oversight of subsequent performance, whether the work is being performed by the public or private sector. Amid these concerns over the A-76 process, the Congress enacted section 832 of the National Defense Authorization Act, Fiscal Year 2001. The act required the Comptroller General to convene a panel of experts to study the policies and procedures governing the transfer of commercial activities for the federal government from government to contractor personnel. The act also required the Comptroller General to appoint highly qualified and knowledgeable persons to serve on the panel and ensure that the following entities received fair representation on the panel: DOD, persons in private industry, federal labor organizations, and OMB. Appendix I lists the names of the Panel members. The legislation mandating the Panel’s creation required that the Panel complete its work and report the results of its study to the Congress no later than May 1, 2002. The Panel’s report was published on April 30, 2002. In establishing the Panel, a number of steps were taken to ensure representation from all major stakeholders as well as to ensure a fair and balanced process. This began with my selection of Panel members, which was then followed by the Panel’s establishment of a process to guide its work. 
To ensure a broad array of views on the Panel, we used a Federal Register notice to seek suggestions on the Panel’s composition. On the basis of the suggestions received in response to that notice, as well as the need to include the broad representation outlined in legislation, I personally interviewed potential panel members. I believe that we selected a group of outstanding individuals representative of diverse interest groups from the public and private sectors, labor unions, and academia with experience in dealing with sourcing decisions at both the federal and local government levels. Once convened, the Panel, as a group, took a number of steps at the outset to guide its deliberations and ensure a full and balanced consideration of the issues. The first step was the adoption of the following mission statement: “The mission of the Commercial Activities Panel is to improve the current sourcing framework and processes so that they reflect a balance among taxpayer interests, government needs, employee rights, and contractor concerns.” The Panel also agreed that all of its findings and recommendations would require the agreement of at least a two-thirds supermajority of the Panel in order to be adopted. The Panel further decided that each Panel member would have the option of having a brief statement included in the report explaining the member’s position on the matters considered by the Panel. In addition to the Federal Register notice soliciting input on issues to be considered by the Panel, the Panel held 11 meetings over the period of May 2001 to March 2002. Three of these were public hearings in Washington, D.C.; Indianapolis, Indiana; and San Antonio, Texas. In the public hearings, Panel members heard testimony from scores of representatives of the public and private sectors, state and local governments, unions, contractors, academia, and others. 
Panelists heard first-hand about the current process, primarily the cost comparison process conducted under OMB Circular A-76, as well as alternatives to that process. Appendix II provides more detail on the topics and concerns raised at the public hearings. The Panel also maintained an E-mail account to receive written comments from any source. After the completion of the field hearings, the Panel members met in executive session several times, augmented between meetings by the work of staff to help them (1) gather background information on sourcing trends and challenges, (2) identify sourcing principles and criteria, (3) consider A-76 and other sourcing processes to assess what works and what does not, and (4) assess alternatives to the current sourcing processes. As the Panel began its work, it recognized the need for a set of principles that would provide a framework for sourcing decisions. Those principles, as they were debated and fleshed out, provided an important vehicle for assessing what does or does not work in the current A-76 process, and provided a framework for identifying needed changes in the process. The Panel coalesced around a set of sourcing principles. The principles helped frame the Panel’s deliberations and became a reference point for the Panel’s work. Moreover, the principles were unanimously adopted by the Panel and included as part of the Panel’s recommendations. While each principle is important, no single principle stands alone, and several are interrelated. Therefore, the Panel adopted the principles and their accompanying narrative comments as a package and then used these principles to assess the government’s existing sourcing system and to develop additional Panel recommendations. The Panel believes that federal sourcing policy should:
1. Support agency missions, goals, and objectives.
2. Be consistent with human capital practices designed to attract, motivate, retain, and reward a high-performing federal workforce.
3. Recognize that inherently governmental and certain other functions should be performed by federal workers.
4. Create incentives and processes to foster high-performing, efficient, and effective organizations throughout the federal government.
5. Be based on a clear, transparent, and consistently applied process.
6. Avoid arbitrary full-time equivalent or other arbitrary numerical goals.
7. Establish a process that, for activities that may be performed by either the public or the private sector, would permit public and private sources to participate in competitions for work currently performed in-house, work currently contracted to the private sector, and new work, consistent with these guiding principles.
8. Ensure that, when competitions are held, they are conducted as fairly, effectively, and efficiently as possible.
9. Ensure that competitions involve a process that considers both quality and cost factors.
10. Provide for accountability in connection with all sourcing decisions.
The principles and their accompanying commentary are included in their entirety in appendix III. During our deliberations, the Panel noted that there are some advantages to the current A-76 system. First, A-76 cost comparisons are conducted under an established set of rules, the purpose of which is to ensure that sourcing decisions are based on uniform, transparent, and consistently applied criteria. Second, the A-76 process has enabled federal managers to make cost comparisons between sectors that have vastly different approaches to cost accounting. Third, the current A-76 process has been used to achieve significant savings and efficiencies for the government. Savings result regardless of whether the public or the private sector wins the cost comparison. This is because competitive pressures have served to promote efficiency and improve the performance of the activity studied. 
Despite these advantages, the Panel also heard frequent criticisms of the A-76 process. The Panel’s report noted that both federal employees and private firms complain that the A-76 competition process does not meet the principles’ standard of a clear, transparent, and consistently applied process. For example, some federal employees have complained that A-76 cost comparisons have included functions that were inherently governmental and should not have been subject to a cost comparison at all. While OMB guidance exists to help define what functions should be considered inherently governmental, the Panel's third principle recognized that making such determinations remains difficult. Also, others have expressed concern that some government officials in a position to affect contracting decisions may subsequently take positions with winning contractors. In this regard, various legislative provisions exist that place restrictions on former government employees taking positions with winning contractors. Time did not permit the Panel to explore the extent to which additional legislation may be needed in this area. Since January 1999, GAO has issued 25 decisions on protests involving A-76 cost comparisons. Of these decisions, GAO sustained 11 and denied 14. “Sustaining” a protest means that GAO found that the agency had violated procurement statutes or regulations in a way that prejudiced the protester. Protests involving A-76 represent a very small percentage of the many hundreds of bid protest decisions that GAO issued in the past 3 years. They do, however, indicate an unusually high percentage of sustained protests. In protest decisions covering all procurements, GAO has sustained about one-fifth of the protests, while in A-76 protests GAO has sustained almost half. (It should be kept in mind, though, that most A-76 decisions are not protested, just as most contract award decisions are not protested.) 
These sustained protests generally reflect only the errors made in favor of the government’s most efficient organization since only the private-sector offeror has the right to protest to GAO. In addition, while any public-private competition is, by nature, challenging and open to some of the concerns that have been raised regarding the A-76 process, the high rate of successful A-76 protests suggests that agencies have a more difficult time applying the A-76 rules than they do applying the normal (i.e., Federal Acquisition Regulation) acquisition rules. At least in part, this may be because the Federal Acquisition Regulation (FAR) rules are so much better known. While training could help overcome this lack of familiarity (and many agencies, particularly those in DOD, have been working on A-76 training), the Panel noted that the FAR acquisition and source selection processes are already better known and better understood; they, in a sense, serve as a “common language” for procurements and source selections. In the Panel’s view, the most serious shortcoming of the A-76 process is that it has been stretched beyond its original purpose, which was to determine the low-cost provider of a defined set of services. Circular A-76 has not worked well as the basis for competitions that seek to identify the best provider in terms of quality, innovation, flexibility, and reliability. This is particularly true in today’s environment, where solutions are increasingly driven by technology and may focus on more critical, complex, and interrelated services than previously studied under A-76. In the federal procurement system today, there is common recognition that a cost-only focus does not necessarily deliver the best quality or performance for the government or the taxpayers. Thus, while cost is always a factor, and often the most important factor, it is not the only factor that may need to be considered. 
In this sense, the A-76 process may no longer be as effective a tool, since its principal focus is on cost. During its year-long study, the Panel identified several key characteristics of a successful sourcing policy. First, the Panel heard repeatedly about the importance of competition and its central role in fostering economy, efficiency, high performance, and continuous performance improvement. The means by which the government utilizes competition for sourcing its commercial functions was at the center of the Panel’s discussions and work. The Panel strongly supported a continued emphasis on competition as a means to improve economy, efficiency, and effectiveness of the government. The Panel also believed that whenever the government is considering converting work from one sector to another, public-private competitions should be the norm. Direct conversions generally should occur only where the number of affected positions is so small that the costs of conducting a public-private competition clearly would outweigh any expected savings. Moreover, there should be adequate safeguards to ensure that activities, entities, or functions are not improperly separated to reduce the number of affected positions and avoid competition. A second theme identified by the Panel and consistently cited at the public hearings was the need for a broader approach to sourcing decisions, rather than an approach that relies on the use of arbitrary quotas or that is unduly constrained by personnel ceilings. Critical to adopting a broader perspective is having an enterprisewide perspective on service contract expenditures, yet the federal government lacks timely and reliable information about exactly how, where, and for what purposes, in the aggregate, taxpayer dollars are spent for both in-house and contracted services. 
The Panel was consistently reminded about, and fully agreed with, the importance of ensuring accountability throughout the sourcing process, providing the workforce with adequate training and technical support in developing proposals for improving performance, and assisting those workers who may be adversely affected by sourcing decisions. Improved accountability extends to better monitoring of performance and results after competitions are completed—regardless of the winner. The Panel heard about several successful undertakings involving other approaches to sourcing decisions. Some involved business process reengineering and public-private partnerships, and emphasized labor-management cooperation in accomplishing agency missions. For example, in Indianapolis, Indiana, on August 8, 2001, the Panel heard from representatives from several organizations that had taken different approaches to the sourcing issue. Among them were the Naval Surface Warfare Center in Crane, Indiana, which reengineered its business processes to reduce costs and gain workshare, and the city of Indianapolis, which effectively used competition to greatly improve the delivery of essential services. In doing so, the city also provided certain technical and financial assistance to help city workers successfully compete for work. These entities endeavored to become “most efficient organizations.” It was from these examples and others that the Panel decided that all federal agencies should strive to become “high-performing organizations.” Third, sourcing policy is inextricably linked to the government’s human capital policies. This linkage has many levels, each of which is important. It is particularly important that sourcing strategies support, not inhibit, the government’s efforts to attract, motivate, and retain a high-performing in-house workforce, as well as support its efforts to access and collaborate with high-performance, private-sector providers. 
Properly addressed, these policies should be complementary, not conflicting. In addition to the principles discussed earlier, the Panel adopted a package of additional recommendations it believed would improve significantly the government’s policies and procedures for making sourcing decisions. It is important to emphasize that the Panel decided to consider and adopt these latter recommendations as a package, recognizing the diverse interests represented on the Panel and the give and take required to reach agreement among a supermajority of the Panelists. As a result, a supermajority of the Panel members recommended the adoption of the following actions: Conduct public-private competitions under the framework of an integrated FAR-based process. The government already has an established mechanism that has been shown to work as a means to identify high-value service providers: the negotiated procurement process of the Federal Acquisition Regulation. The Panel believed that in order to promote a more level playing field on which to conduct public-private competitions, the government needed to shift, as rapidly as possible, to a FAR-type process under which all parties would compete under the same set of rules. Although some changes in the process would be necessary to accommodate the public-sector proposal, the same basic rights and responsibilities would apply to both the private and the public sectors, including accountability for performance and the right to protest. This and perhaps other aspects of the integrated competition process could require changes to current law or regulation (e.g., requirements in title 10 of the U.S. Code that DOD competitive sourcing decisions be based on low cost). Make limited changes to the existing A-76 process. The development of an integrated FAR-type process will require some time to be implemented. 
In the meantime, the Panel expected current A-76 activities would continue, and therefore believed some modifications to the existing process could and should be made. Accordingly, the Panel recommended a number of limited changes to OMB Circular A-76. These changes would, among other things, strengthen conflict-of-interest rules, improve auditing and cost accounting, and provide for binding performance agreements. Encourage the development of high-performing organizations (HPOs). The Panel recommended that the government take steps to encourage HPOs and continuous improvement throughout the federal government, independent of the use of public-private competitions. In particular, the Panel recommended that the Administration develop a process to select a limited number of functions currently performed by federal employees to become HPOs, and then evaluate their performance. Then, the authorized HPOs would be exempt from competitive sourcing studies for a designated period of time. Overall, however, the HPO process is intended to be used in conjunction with, not in lieu of, public-private competitions. The successful implementation of the HPO concept will require a high degree of cooperation between labor and management, as well as a firm commitment by agencies to provide sufficient resources for training and technical assistance. In addition, a portion of any savings realized by the HPO should be available to reinvest in continuing reengineering efforts and for further training or incentive purposes. Let me speak specifically to the creation of HPOs. Many organizations in the past, for various reasons, have found it difficult to become high-performing organizations. 
Moreover, the federal government continues to face new challenges in making spending decisions for both the long and near term because of federal budget constraints, rapid advances in technology, the impending human capital crisis, and new security challenges brought on by the events of September 11, 2001. Such a transformation will require that each organization reverse decades of underinvestment and lack of sustained attention to maintaining and enhancing its capacity to perform effectively. The Panel recognized that incentives are necessary to encourage both management and employees to promote the creation of HPOs. It envisioned that agencies would have access to a range of financial and consulting resources to develop their plans, with the costs offset by the savings realized. The Panel’s report focused primarily on HPOs in the context of commercial activities, given its legislative charter. However, there is no reason why the concept could not be applied to all functions, since much of the government’s work will never be subject to competition. HPOs may require some additional flexibility coupled with appropriate safeguards to prevent abuse. The Panel also envisioned the use of performance agreements and periodic performance reviews to ensure appropriate transparency and accountability. Although a minority of the Panel did not support the package with the three additional recommendations noted above, some of them indicated that they supported one or more elements of the package. Importantly, there was a good faith effort, even at the last minute of the report’s preparation, to maximize agreement and minimize differences among Panelists. In fact, changes were made even when it was clear that some Panelists seeking changes were highly unlikely to vote for the supplemental package of recommendations. 
As a result, on the basis of Panel meetings and my personal discussions with Panel members at the end of our deliberative process, the major differences among Panelists were few in number and philosophical in nature. Specifically, disagreement centered primarily on (1) the recommendation related to the role of cost in the new FAR-type process and (2) the number of times the Congress should be required to act on the new integrated process, including whether the Congress should specifically authorize a pilot program that tests that process for a specific time period. Many of the Panel’s recommendations can be accomplished administratively under existing law, and the Panel recommended that they be implemented as soon as practical. The Panel also recognized that some of its recommendations could require changes in statutes or regulations and that making the necessary changes would take some time. Any legislative changes should be approached in a comprehensive and considered manner rather than a piecemeal fashion in order for a reasonable balance to be achieved. Like the guiding principles, the other recommendations were the result of much discussion and compromise and should be considered as a whole. Moreover, although the Panel viewed the use of a FAR-type process for conducting public-private competitions as the end state, the Panel also recognized that some elements of its recommendations represent a shift in current procedures for the federal government. Therefore, the Panel’s report outlined the following phased implementation strategy that would allow the federal government to demonstrate and then refine its sourcing policy on the basis of experience: A-76 studies currently under way or initiated during the near term should continue under the current framework. Subsequent studies should be conducted in accordance with the improvements listed in the report. OMB should develop and oversee the implementation of a FAR-type, integrated competition process. 
In order to permit this to move forward expeditiously, it may be advisable to limit the new process initially to civilian agencies, where its use would not require legislation. Statutory provisions applying only to DOD agencies may require repeal or amendment before the new process could be used effectively at DOD, and the Panel recommended that any legislation needed to accommodate the integrated process in DOD be enacted as soon as possible. As part of a phased implementation and evaluation process, the Panel recommended that the integrated competition process be used in a variety of agencies and in meaningful numbers across a broad range of activities, including work currently performed by federal employees, work currently performed by contractors, and new work. Within 1 year of initial implementation of the new process, and again 1 year later, the Director of OMB should submit a detailed report to the Congress identifying the costs of implementing the new process, any savings expected to be achieved, the expected gains in efficiency or effectiveness of agency programs, the impact on affected federal employees, and any lessons learned as a result of the use of this process together with any recommendations for appropriate legislation. GAO would review each of these OMB reports and provide its independent assessment to the Congress. The Panel anticipated that OMB would use the results of its reviews to make any needed “mid-course corrections.” On the basis of the results generated during the demonstration period, and on the reports submitted by OMB and GAO, the Congress will then be in a position to determine the need for any additional legislation. The federal government is in a time of transition, and we face a range of challenges in the 21st century. This will require the federal government to transform what it does, the way that it does business, and who does the government’s business. 
This may require changes in many areas, including human capital and sourcing strategies. On the basis of the statutory mandate, the Commercial Activities Panel primarily focused on the sourcing aspects of this needed transformation. I supported the adoption of the set of principles as well as the package of additional recommendations contained in the Panel’s report. Overall, I believe that the findings and recommendations contained in the Panel’s report represent a reasoned, reasonable, fair, and balanced approach to addressing this important, complex, and controversial area. I hope that the Congress and the Administration will continue to consider and act on this report and its recommendations. I particularly want to urge the Congress and the Administration to consider the importance of encouraging agencies to become high-performing organizations on an ongoing basis. Agencies should not wait until faced with the challenge of public-private competitions to seek efficiencies to retain work in-house. In addition, most of the government’s workers will never be subject to competitions. As a result, I believe that the Panel’s recommendation pertaining to high-performing organizations could be an important vehicle for fostering much needed attention to how we enhance the economy, efficiency, and effectiveness of the federal government in ways other than through competition. Finally and most importantly, in considering the Panel’s package of recommendations or any other changes that may be considered by the Congress and the Administration, the guiding principles, developed and unanimously agreed upon by the Panel, should be the foundation for any future action. Let me also add that I appreciate the hard work of my fellow Panelists and their willingness to engage one another on such a tough issue—one where we found much common ground despite a range of divergent views. I also want to thank the GAO staff and the other support staff who contributed to this effort. Mr. 
Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other members of the subcommittee may have.

David M. Walker, Chairman, Comptroller General of the United States
E. C. “Pete” Aldridge, Jr., Under Secretary of Defense for Acquisition, Technology and Logistics
Frank A. Camm, Jr., Senior Analyst, RAND
Mark C. Filteau, President, Johnson Controls World Services, Inc.

Washington, D.C., June 11, 2001, “Outsourcing Principles and Criteria”:
Status quo is not acceptable to anyone.
Sourcing decisions require a strategic approach.
Federal workers should perform core government functions.
Need for MEOs throughout the government.
Government needs clear, transparent, and consistently applied sourcing criteria.
Avoid arbitrary FTE goals.
Objective should be to provide quality services at reasonable cost.
Provide for fair and efficient competition between the public and private sectors.
Sourcing decisions require appropriate accountability.

Indianapolis, Indiana, August 8, 2001, “Alternatives to A-76”:
Crane Naval Surface Warfare Center’s reengineering process led to significant efficiencies and reduced workforce trauma.
Employees must be involved with any reform effort; secrecy is counterproductive.
Committed leadership, effective implementation, and well-planned workforce transition strategies are key to any reform effort.
Privatization-in-place was used effectively at Indianapolis Naval Air Warfare Center to avert a traditional Base Realignment and Closure action.
The city of Indianapolis provided certain technical and financial assistance to help workers successfully compete for the work.
Certain technology upgrades in Monterey, California, via a public-private partnership led to efficiencies and increased effectiveness.
Measuring performance is critical.
A-76 is only one of many efficiency tools available to federal managers. 
Other tools include: Bid to goal, which helps units become efficient and thus avoid A-76; Transitional Benefit Corporation, a concept that promotes the transfer of government assets to the private sector and provides transition strategies for employees; and ESOP, under which employees own a piece of the organization that employs them. ESOPs have been established in a few federal organizations.

San Antonio, Texas, August 15, 2001, “A-76: What’s Working and What’s Not”:
A-76 process is too long and too costly.
Cost of studies can greatly reduce government savings.
Cost to industry in both dollars and uncertainty.
Demoralized workers quit, but successful contractors need these workers.
Larger A-76 studies can yield greater savings, but these studies become much more complex.
Lack of impetus for savings without competition.
One-step bidding process should be used.
MEO and contractors should compete together in one procurement action, be evaluated against the same solicitation requirements using the same standards, and be awarded contracts based on best value.
Provide more training for MEO and A-76 officials.
MEOs should have legal status to protest and appeal awards and obtain bid information.
A-76 rules should be more clear and applied consistently through a centralized management structure.
For bid and monitoring purposes, government costs should be collected and allocated consistent with industry (e.g., activity-based costing).
Need to eliminate any suggestion of conflicts of interest.
Need incentives for agencies and workers (e.g., share-in-savings).
Provide soft landings for workers.
Allow workers to form public-sector organizations for bidding.

Based on public input, a review of previous studies and other relevant literature, and many hours of deliberation, the Panel developed and unanimously adopted a set of principles that it believes should guide sourcing policy for the federal government. While each principle is important, no single principle stands alone. 
As such, the Panel adopted the principles as a package. (The sourcing principles were taken in their entirety from Commercial Activities Panel, Improving the Sourcing Decisions of Government: Final Report (Washington, D.C.: April 2002).) The Panel believes that federal sourcing policy should: 1. Support agency missions, goals, and objectives. Commentary: This principle highlights the need for a link between the missions, goals, and objectives of federal agencies and related sourcing policies. 2. Be consistent with human capital practices designed to attract, motivate, retain, and reward a high-performing federal workforce. Commentary: This principle underscores the importance of considering human capital concerns in connection with the sourcing process. While it does not mean that agencies should refrain from outsourcing due to its impact on the affected employees, it does mean that the federal government’s sourcing policies and practices should consider the potential impact on the government’s ability to attract, motivate, retain, and reward a high-performing workforce both now and in the future. Regardless of the result of specific sourcing decisions, it is important for the workforce to know and believe that they will be viewed and treated as valuable assets. It is also important that the workforce receive adequate training to be effective in their current jobs and to be a valuable resource in the future. 3. Recognize that inherently governmental and certain other functions should be performed by federal workers. Commentary: The Federal Activities Inventory Reform (FAIR) Act has helped to identify commercial work currently being performed by the government. It is clear that government workers need to perform certain warfighting, judicial, enforcement, regulatory, and policymaking functions, and the government may need to retain an in-house capability even in functions that are largely outsourced. 
Certain other capabilities, such as adequate acquisition skills to manage costs, quality, and performance and to be smart buyers of products and services, or other competencies such as those directly linked to national security, also must be retained in-house to help ensure effective mission execution. 4. Create incentives and processes to foster high-performing, efficient, and effective organizations throughout the federal government. Commentary: This principle recognizes that, historically, it has primarily been when a government entity goes through a public-private competition that the government creates a “most efficient organization” (MEO). Since such efforts can lead to significant savings and improved performance, they should not be limited to public-private competitions. Instead, the federal government needs to provide incentives for its employees, its managers, and its contractors to constantly seek to improve the economy, efficiency, and effectiveness of the delivery of government services through a variety of means, including competition, public-private partnerships, and enhanced worker-management cooperation. 5. Be based on a clear, transparent, and consistently applied process. Commentary: The use of a clear, transparent, and consistently applied process is key to ensuring the integrity of the process as well as to creating trust in the process on the part of those it most affects: federal managers, users of the services, federal employees, the private sector, and the taxpayers. 6. Avoid arbitrary full-time equivalent (FTE) or other arbitrary numerical goals. Commentary: This principle reflects an overall concern about arbitrary numbers driving sourcing policy or specific sourcing decisions. The success of government programs should be measured by the results achieved in terms of providing value to the taxpayer, not the size of the in- house or contractor workforce. Any FTE or other numerical goals should be based on considered research and analysis. 
The use of arbitrary percentage or numerical targets can be counterproductive. 7. Establish a process that, for activities that may be performed by either the public or the private sector, would permit public and private sources to participate in competitions for work currently performed in- house, work currently contracted to the private sector, and new work, consistent with these guiding principles. Commentary: Competitions, including public-private competitions, have been shown to produce significant cost savings for the government, regardless of whether a public or a private entity is selected. Competition also may encourage innovation and is key to improving the quality of service delivery. While the government should not be required to conduct a competition open to both sectors merely because a service could be performed by either public or private sources, federal sourcing policies should reflect the potential benefits of competition, including competition between and within sectors. Criteria would need to be developed, consistent with these principles, to determine when sources in either sector will participate in competitions. 8. Ensure that, when competitions are held, they are conducted as fairly, effectively, and efficiently as possible. Commentary: This principle addresses key criteria for conducting competitions. Ineffective or inefficient competitions can undermine trust in the process. The result may be, for private firms (especially smaller businesses), an unwillingness to participate in expensive, drawn-out competitions; for federal workers, harm to morale from overly long competitions; for federal managers, reluctance to compete functions under their control; and for the users of services, lower performance levels and higher costs than necessary. Fairness is critical to protecting the integrity of the process and to creating and maintaining the trust of those most affected. 
Fairness requires that competing parties, both public and private, or their representatives, receive comparable treatment throughout the competition regarding, for example, access to relevant information and legal standing to challenge the way a competition has been conducted at all appropriate forums, including the General Accounting Office and the United States Court of Federal Claims. 9. Ensure that competitions involve a process that considers both quality and cost factors.
The Commercial Activities Panel is a congressionally mandated panel to study and make recommendations for improving the policies and procedures governing the transfer of commercial activities from government to contractor personnel. The growing controversy surrounding competitions under the Office of Management and Budget's Circular A-76, which are used to determine whether the government should obtain commercially available goods and services from the public or private sector, led to the establishment of this Panel. In establishing the Panel, several steps were taken to ensure representation from all major stakeholders as well as to ensure a fair and balanced process. To ensure a broad range of views on the Panel, a Federal Register notice was used to seek suggestions for the Panel's composition. As the Panel began its work, it recognized the need for a set of principles for sourcing decisions. These principles provide for an assessment of what does or does not work in the current A-76 process and provide a framework for identifying needed changes. Many of the Panel's recommendations can be accomplished administratively under existing law, and the Panel recommends that they be implemented as soon as practical. The Panel also recognizes that some of the recommendations would require changes in statutes or regulations that could take some time. Any legislative changes should be approached in a comprehensive and considered manner in order to achieve a reasonable balance.
Security clearances are required for access to certain national security information, which is classified at one of three levels: top secret, secret, or confidential. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably cause to national security. Executive Order 10450, which was originally issued in 1953, makes the heads of departments or agencies responsible for establishing and maintaining effective programs for ensuring that civilian employment and retention is clearly consistent with the interests of the national security. Agency heads are also responsible for designating positions within their respective agencies as sensitive if the occupant of that position could, by virtue of the nature of the position, bring about a material adverse effect on national security. In addition, Executive Order 12968, issued in 1995, is relevant to position designation because the order also makes the heads of agencies—including executive branch agencies and the military departments—responsible for establishing and maintaining an effective program to ensure that access to classified information by each employee is clearly consistent with the interests of national security. This order also states that, subject to certain exceptions, eligibility for access to classified information shall only be requested and granted on the basis of a demonstrated, foreseeable need for access. Further, part 732 of Title 5 of the Code of Federal Regulations provides requirements and procedures for the designation of national security positions, which include positions that (1) involve activities of the government that are concerned with the protection of the nation from foreign aggression or espionage, and (2) require regular use of or access to classified national security information. 
In addition, part 732 states that most federal government positions that could bring about, by virtue of the nature of the position, a material adverse effect on national security must be designated as a sensitive position and require a sensitivity level designation. The sensitivity level designation determines the type of background investigation required, with positions designated at a greater sensitivity level requiring a more extensive background investigation. Part 732 establishes three sensitivity levels—special-sensitive, critical-sensitive, and noncritical-sensitive—which are described in figure 1. According to OPM, positions that an agency designates as special-sensitive and critical-sensitive require a background investigation that typically results in a top secret clearance. Noncritical-sensitive positions typically require an investigation that supports a secret or confidential clearance. OPM also defines non-sensitive positions that do not have a national security element, but still require a designation of risk for suitability purposes. That risk level determines the type of investigation required for those positions. Those investigations include aspects of an individual’s character or conduct that may have an effect on the integrity or efficiency of his or her service. The personnel security clearance process begins when a human resources or security professional determines a position’s level of sensitivity, which includes consideration of whether or not a position requires access to classified information and, if required, the level of access. DHS and DOD follow a general process for determining whether a federal civilian position requires access to classified information, which informs whether a position requires a security clearance. This process is described in figure 1 below and is based on our review of the corresponding guidance and testimonial evidence gathered during interviews with DHS and DOD officials. 
In addition, a more thorough description of DHS and DOD component-level policies appears in appendix II. The personnel security clearance process is further described in appendix III. The increased demand for personnel with security clearances following the events of September 11, 2001, led GAO and others to identify delays and incomplete documentation in the security clearance process. In light of these concerns, Congress passed the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA), which set objectives and established requirements for improving the clearance process, including improving the timeliness of the clearance process, achieving interagency reciprocity, establishing an integrated database to track investigative and adjudicative information, and evaluating available technology for investigations and adjudications. Executive Order 13467 calls for investigations of suitability and security to be aligned using consistent standards, to the extent practicable. The entities leading the reform effort have detailed reform-related plans in a series of reports, including a February 2010 strategic framework that established goals, performance measures, roles and responsibilities, and proposed metrics for determining the quality of security clearance investigations and adjudications. Those reports contained a reform plan that outlined a new seven-step process for end-to-end suitability and security clearance reform (see figure 2 below). According to ODNI officials, the first step, “validate need,” focuses on ensuring that the sensitivity level of positions is designated appropriately on the basis of mission needs, among other things. Separate from, but related to, security clearances are determinations of suitability that the executive branch uses to ensure individuals are suitable, based on character and conduct, for federal employment in their agency or position. Suitability requirements sometimes overlap with national security requirements. 
For example, the Department of Justice checks suitability to ensure that applicants for jobs with the Drug Enforcement Administration have never used illegal drugs. In addition, the Department of Health and Human Services checks the suitability of applicants for jobs working with children. Similarly, the Intelligence Community requires polygraph evaluations, among other things, to determine suitability for most intelligence positions. OPM was involved in many aspects of the suitability investigation process under part 731 of Title 5 of the Code of Federal Regulations prior to the issuance of Executive Order 13467, and, as the Suitability Executive Agent, the Director continues to be responsible for developing and implementing uniform and consistent policies and procedures to ensure the effective, efficient, and timely completion of background investigations and adjudications relating to determinations of suitability. In contrast, the DNI was assigned a new role. Executive Order 13467 states that the DNI, as the Security Executive Agent, is responsible for, among other things, developing uniform and consistent policies and procedures to ensure the effective, efficient, and timely completion of background investigations and adjudications relating to determinations of eligibility for access to classified information or eligibility to hold a sensitive position. In addition to these responsibilities, the Executive Order also provides the DNI the authority to issue guidelines and instructions to the heads of agencies to ensure appropriate uniformity, centralization, efficiency, effectiveness, and timeliness in processes relating to determinations by agencies of eligibility for access to classified information or eligibility to hold a sensitive position. The order also states that the Performance Accountability Council is responsible for ensuring that the Executive Agents align their respective processes. 
Finally, the order states that agency heads should implement any policy or procedure developed by either the Performance Accountability Council or Executive Agents under the order. The DNI, in its capacity as the Security Executive Agent responsible for developing uniform and consistent policies related to the security clearance process, has expressed its intent to issue guidance relating to national security positions. However, the DNI has not provided agencies with clearly defined policy through regulation or other guidance to help ensure that executive branch agencies use appropriate and consistent criteria when determining if positions require a security clearance. Instead, executive branch agencies are using a position designation tool developed by OPM. This tool is designed to determine the sensitivity level of civilian positions, which, in turn, informs the type of background investigation needed if a clearance is warranted. The DNI, however, did not have a role in its development even though the two Executive Agents are to align their respective processes. As a result, agency officials we met with expressed mixed views on the effectiveness of the tool for national security positions. According to Executive Order 13467, issued in June 2008, the DNI, as the Security Executive Agent, is responsible for developing uniform and consistent policies and procedures for determinations of eligibility for access to classified information or to hold a sensitive position. Further, the executive order states that agency heads shall assist the Performance Accountability Council and Executive Agents in carrying out any function under the order, which includes implementing any policies or procedures developed pursuant to the order. 
Although agency heads retain the flexibility to make determinations regarding which positions in their agency require a security clearance, the DNI is well positioned, by virtue of its role as the Security Executive Agent, to provide guidance to help align the process from agency to agency. The DNI, however, has not provided agencies with clearly defined policy or instructions. To assist with position designation, the Director of OPM—the Executive Agent for Suitability—has developed a process that includes a position designation system and corresponding automated tool to guide agencies in determining the proper sensitivity level for the majority of federal positions. This tool—namely, the Position Designation of National Security and Public Trust Positions—enables a user to evaluate a position's national security and suitability requirements so as to determine a position's sensitivity and risk levels, which in turn dictate the type of background investigation that will be required for the individual who will occupy that position. In most agencies outside the Intelligence Community, OPM conducts the background investigations for both suitability and security clearance purposes. The tool does not directly determine whether a position requires a clearance, but rather helps determine the sensitivity level of the position. The determination to grant a clearance is based on whether a position requires access to classified information or other relevant factors, and, if access is required, the responsible official will designate the position to require a clearance. OPM developed the position designation system and automated tool for multiple reasons. First, OPM determined through a 2007 initiative that its existing regulations and guidance for position designation were complex and difficult to apply, resulting in inconsistent designations. As a result of a recommendation from the initiative, OPM created a simplified position designation process in 2008. 
Additionally, OPM officials noted that the tool is to support the goals of the security and suitability reform efforts, which require proper designation of national security and suitability positions. OPM first introduced the automated tool in November 2008, and issued an update of the tool in 2010. In August 2010, OPM issued guidance (1) recommending that all agencies that request OPM background investigations use the tool and (2) requiring agencies to use the tool for all positions in the competitive service, positions in the excepted service where the incumbent can be noncompetitively converted to the competitive service, and career appointments in the Senior Executive Service. Both DHS and DOD components use the tool. A DHS instruction—DHS Management Instruction 121-01-007, Department of Homeland Security Personnel Suitability and Security Program (June 2009)—requires personnel to designate all DHS positions by using OPM's position sensitivity designation guidance, which is the basis of the tool. In addition, DOD issued guidance in September 2011 requiring its personnel to use OPM's tool to determine the proper position sensitivity designation for new or vacant positions, including the establishment and reclassification of positions. ODNI officials told us that they believe OPM's tool is useful for determining a position's sensitivity level. However, despite the DNI's responsibility for policy related to ensuring uniformity in the security clearance process, ODNI officials noted that the DNI did not have input into recent revisions of OPM's position designation tool. The roles of the Director of OPM and the DNI as Executive Agents are still evolving, although Executive Order 13467 defines responsibilities for each Executive Agent. Accordingly, we found that the Director of OPM and the DNI have not fully collaborated in executing their respective roles in the process for determining position designations. 
For example, OPM has had long-standing responsibility for establishing standards with respect to suitability for most federal government positions. Accordingly, the sections of the tool to be used for evaluating a position's suitability risk level are significantly more detailed than the sections designed to aid in designating the national security sensitivity level of the position. While most of OPM's position designation system, which is the basis of the tool, is devoted to suitability issues, only two pages are devoted to national security issues, despite the reference to national security in its title. Moreover, OPM did not seek to collaborate with the DNI when updating the tool in 2010. Similarly, in 2010, OPM initiated revisions to the part of the Code of Federal Regulations that pertains to national security positions. According to OPM and ODNI officials, the revision is expected to clarify the standards for designating whether federal positions are national security sensitive, which will help agencies more accurately assess the sensitivity of a position. The sensitivity level includes consideration of whether a position is eligible for access to classified information and the level of access. Further, the revision is currently expected to update the definition of national security positions to include positions that could have a material impact on national security, but might not clearly fall within the current definition in part 732 of Title 5 of the Code of Federal Regulations. For example, such positions include those with duties that involve the protection of borders, ports, and critical infrastructure, as well as those with responsibilities related to public safety, law enforcement, and the protection of government information systems. During our review, human capital and security officials from DHS and DOD and the selected components affirmed that they were using the existing tool to determine the sensitivity level required by a position. 
However, in the absence of clearly defined policy from the DNI and the lack of collaborative input into the tool’s design, officials explained that they sometimes had difficulty in using the tool to designate the sensitivity level of national security positions. OPM regularly conducts audits of its executive branch customer agency personnel security and suitability programs, which include a review of position designation to assess the agencies’ alignment with OPM’s position designation guidance. In the audit reports we obtained, OPM found examples of inconsistency between agency position designation and OPM guidance, both before and after the implementation of OPM’s tool. For instance, prior to the implementation of the tool, in a 2006 audit of an executive branch agency, OPM found that its sensitivity designations differed from the agency’s designation in 13 of 23 positions. Specifically, OPM concluded that 11 positions were underdesignated, 1 position was overdesignated, and 1 position was adjusted. More recently, after the implementation of the tool, in an April 2012 audit of a DOD agency, OPM assessed the sensitivity levels of 39 positions, and OPM’s designations differed from the agency’s designations in 26 of those positions. In the April 2012 report, the DOD agency agreed with OPM’s recommendations related to position designation, and the audit report confirmed that the agency had submitted evidence of corrective action in response to the position designation recommendations. OPM provided us with the results of 10 audits that it had conducted between 2005 and 2012, and 9 of those audit reports reflected inconsistencies between OPM position designation guidance and determinations of position sensitivity conducted by the agency. 
OPM officials noted, however, that they do not have the authority to direct agencies to make different designations because Executive Order 10450 provides agency heads with the ultimate responsibility for designating which positions are sensitive positions. As of May 2012, the Naval Audit Service was finalizing its own internal audit on its top secret requirements determination process for civilian positions. While the results were not complete at the time of our review, officials explained to us that they began this audit to validate their top secret requirements and ensure that they have effective internal controls over their designation process. DHS and DOD officials expressed varying opinions regarding the tool. For instance, some of the officials we met with raised concerns regarding the guidance provided through the tool and said that they had difficulty implementing it. Specifically, officials from DHS's U.S. Immigration and Customs Enforcement stated that the use of the tool occasionally resulted in inconsistency, such as over- or underdesignating a position, and expressed a need for additional clear, easily interpreted guidance on designating national security positions. DOD officials stated that they have had difficulty implementing the tool because it focuses more on suitability than security, and the national security aspects of DOD's positions are of more concern to them than the suitability aspects. Further, an official from DOD's Office of the Under Secretary of Defense for Personnel and Readiness stated that the tool and DOD policy do not always align and that the tool does not cover the requirements for some DOD positions. For example, DOD's implementing guidance on using the tool states that terms differ between DOD's personnel security policy and the tool, and the tool might suggest different position sensitivity levels than DOD policy requires. 
Also, officials from the Air Force Personnel Security Office told us that they had challenges using the tool to classify civilian positions, including difficulty in linking the tool with Air Force practices for position designation. Moreover, an Air Force official stated a concern that the definition for national security positions is broadly written and could be considered to include all federal positions. Further, individuals responsible for making position designation determinations can easily reach different conclusions. For instance, officials from DHS’s U.S. Immigration and Customs Enforcement stated that the tool is not necessarily intuitive and users of the tool need to understand its nuances in order to avoid overdesignating a position. Conversely, officials from the U.S. Coast Guard stated that they found the tool to be intuitive, and that it helps to ensure consistency in designation. Finally, officials from the Transportation Security Administration noted that the tool is user friendly and provides consistency for managers. Recently, we have seen indications that the Executive Agents are working to align their respective processes. According to OPM’s website, OPM has conferred with the Office of Management and Budget (OMB) concerning the possibility of reissuing pertinent sections of the Code of Federal Regulations jointly with ODNI, with a targeted issuance before the end of the 2012 calendar year. ODNI officials also stated their intention to work with OPM on the revision effort. ODNI officials further acknowledged that they are collaborating with OPM to reach agreement on their respective roles as Executive Agents. Our prior work has found that two or more agencies with related goals can benefit from enhancing their collaboration in various areas to achieve common outcomes. 
According to Executive Order 12968, the number of employees that each agency determines is eligible for access to classified information shall be kept to the minimum required, and, subject to certain exceptions, eligibility shall be requested or granted only on the basis of a demonstrated, foreseeable need for access. Additionally, Executive Order 12968 states that access to classified information shall be terminated when an employee no longer has a need for access, and that requesting or approving eligibility for access in excess of the actual requirements is prohibited. Also, Executive Order 13467 authorizes the DNI to issue guidelines or instructions to the heads of agencies regarding, among other things, uniformity in determining eligibility for access to classified information. However, the DNI has not issued policies and procedures for agencies to review and revise or validate the existing clearance requirements for their federal civilian positions to ensure that clearances are kept to a minimum and reserved only for those positions with security clearance requirements that are in accordance with the national security needs of the time. As previously noted, OPM published a December 2010 notice in the Federal Register of a proposed revision to the Code of Federal Regulations to clarify the policy for designating national security positions. Again, as we previously noted, OPM’s website states that OPM has conferred with OMB concerning the possibility of reissuing pertinent sections of the Code of Federal Regulations jointly with ODNI. One feature of the proposed revision would require all federal agencies to conduct a onetime review of position descriptions and requirements over a period of 2 years to ensure that all positions are properly designated using the revision’s updated definition for national security positions. 
Position descriptions not only identify the major duties and responsibilities of the position, but they also play a critical role in recruitment, training, and performance management, among other things. Position descriptions may change over time, and, as previously observed, so can the national security environment. During our review of several DHS and DOD components, we found that officials were aware of the need to keep the number of security clearances to a minimum but were not always subject to a requirement to review and validate the security clearance needs of existing positions on a periodic basis. We found, instead, that agencies' policies provide for a variety of practices for reviewing the clearance needs of federal civilian positions. According to DHS guidance, supervisors are responsible for ensuring that (1) position designations are updated when a position undergoes major changes (e.g., changes in missions and functions, job responsibilities, work assignments, legislation, or classification standards), and (2) position security designations are assigned as new positions are created. Some components have additional requirements to review position designations more regularly to cover positions other than those newly created or vacant. For example, U.S. Coast Guard guidance states that hiring officials and supervisors should review position descriptions even when there is no vacancy and, as appropriate, revise or validate them. According to officials in U.S. Immigration and Customs Enforcement, supervisors are supposed to review position descriptions annually during the performance review process to ensure that the duties and responsibilities on the position description are up-to-date and accurate. However, officials stated that U.S. Immigration and Customs Enforcement does not have policies or requirements in place to ensure any particular level of detail in that review. 
DOD's personnel security regulation and other guidance provide DOD components with criteria to consider when determining whether a position is sensitive or requires access to classified information, and some of the components also have developed their own guidance. An Air Force Instruction requires commanders to review all military and civilian position designations annually to ensure the proper level of access to classified information. The Army issued a memorandum in 2006 that required an immediate review of position sensitivity designations for all Army civilian positions by the end of the calendar year and requires subsequent reviews biennially. That memorandum further states that if a review warrants a change in position sensitivity affecting an individual's access to classified information, then access should be administratively adjusted and the periodic reinvestigation submitted accordingly. However, officials explained that improper position sensitivity designations continue to occur in the Army because they have a limited number of personnel in the security office relative to workload, and they only spot check clearance requests to ensure that they match the level of clearance required. Officials from DOD's Washington Headquarters Services told us that they have an informal practice of reviewing position descriptions and security designations for vacant or new positions, but they do not have a schedule for conducting periodic reviews of personnel security designations for already-filled positions. These various policies notwithstanding, agency officials told us that they are implemented inconsistently. Some of the components we met with were in the process of conducting a onetime review of position designations during our review. 
For example, Transportation Security Administration officials stated that they reevaluated all of their position descriptions over the last 2 years because the agency determined that the reevaluation of its position designations would improve operational efficiency by ensuring that positions were appropriately designated by using OPM's updated position designation tool. Further, those officials told us that they review position descriptions as positions become vacant or are created. Between fiscal years 2010 and 2011, while the Transportation Security Administration's overall workforce increased from 61,586 to 66,023, the number of investigations for top secret clearances decreased from 1,483 to 1,127. In March 2011, the Naval Audit Service began an audit of its top secret requirements determination process for civilian positions at selected activities to verify that civilian top secret clearances are based on valid requirements and that effective internal controls over the top secret requirements determination process are in place. According to a Navy official, the results of the audit were still undergoing the Navy's internal review process as of May 2012. There is a cost to conducting background investigations, and a potential for dollar savings when overdesignated positions are identified. DHS and DOD officials acknowledged to us that overdesignating a position can result in expenses for unnecessary investigations. When a position is overdesignated, additional resources are unnecessarily spent conducting the investigation and adjudication of a background investigation that exceeds agency requirements. As stated earlier in this report, the investigative workload for a top secret clearance is about 20 times greater than that of a secret clearance because a top secret clearance must be reinvestigated twice as often as a secret clearance (every 5 years versus every 10 years) and requires 10 times as many investigative staff hours. 
The fiscal year 2012 base price for a top secret clearance investigation conducted by OPM is $4,005 and the periodic reinvestigation is $2,711, while the base price of an investigation for a secret clearance is $260. Further, the base price of a Moderate Risk Background Investigation—most commonly used by DHS, according to officials—is $752. However, we did not find policies in which position designation reviews were linked to the position holders’ periodic reinvestigations. In contrast, underdesignating a position carries security risks, such as the potential release of classified information or the placement of a person in a position for which they have not been properly cleared. Agencies employ varying practices because the DNI has not established a requirement that executive branch agencies consistently review and revise or validate existing position designations on a recurring basis. Such a recurring basis could include reviewing position designations during the periodic reinvestigation process. Without a requirement to consistently review, revise, or validate existing security clearance position designations, executive branch agencies—such as DHS and DOD—may be hiring and budgeting for both initial and periodic security clearance investigations using position descriptions and security clearance requirements that no longer reflect national security needs. Finally, since reviews are not being done consistently, DHS and DOD and other executive branch agencies cannot have reasonable assurances that they are keeping to a minimum the number of positions that require security clearances on the basis of a demonstrated and foreseeable need for access. Executive Order 13467, issued in June 2008, established a Suitability and Security Clearance Performance Accountability Council and appointed the DNI as the Security Executive Agent and the Director of OPM as the Suitability Executive Agent. 
However, while the order gives the Executive Agents the authority to issue policy, the DNI has not provided executive branch agencies with clearly defined policy and procedures for determining whether federal civilian positions require a security clearance. Until the DNI articulates such policy and procedures, executive branch agencies, such as DHS and DOD, will not have a foundation on which to build consistent and uniform policies. Further, Executive Order 13467 indicates that executive branch policies and procedures relating to, among other things, suitability and eligibility for access to classified information shall be aligned using consistent standards to the extent possible. However, OPM updated its position designation tool in 2010 without input from the DNI. Without collaborative input from both OPM and DNI in future revisions to the tool, executive branch agencies will continue to risk making security clearance determinations that are inconsistent or at improper levels. Finally, while Executive Order 12968 says that clearances should, subject to certain exceptions, be granted only on the basis of a demonstrated need for access and kept to a minimum, the DNI has not issued guidance that requires agencies to review and revise or validate their existing federal civilian position designations. Until the DNI does so, DHS and DOD, along with other executive branch agencies, cannot have reasonable assurances that all security clearance designations are correct, which could compromise national security if positions are underdesignated, or create unnecessary and costly investigative coverage if positions are overdesignated. We recommend that the DNI, in coordination with the Director of OPM and other executive branch agencies as appropriate, issue clearly defined policy and procedures for federal agencies to follow when determining if federal civilian positions require a security clearance. 
In addition, we recommend that, once the policy and procedures are issued, the DNI and the Director of OPM collaborate in their respective roles as Executive Agents to revise the position designation tool to reflect that guidance. Finally, we recommend that the DNI, in coordination with the Director of OPM and other executive branch agencies as appropriate, issue guidance to require executive branch agencies to periodically review and revise or validate the designation of all federal civilian positions. We provided a draft of this report to ODNI, OPM, DHS, and DOD for comment. Written comments from ODNI, OPM, and DHS are reprinted in their entirety in appendices IV, V, and VI respectively. Technical comments were provided separately by ODNI, OPM, and DHS, and were incorporated as appropriate. DOD concurred with the report without written comment. We also provided a draft of the report to OMB for information purposes. In commenting on this report, ODNI stated that the report is a fair assessment of existing executive branch policies for determining security clearance requirements for federal civilian positions. The DNI has a lead or collaborative role in our recommendations, and ODNI concurred with all three. First, ODNI concurred with our recommendation that the DNI, in coordination with the Director of OPM and other executive branch agencies as appropriate, issue clearly defined policy and procedures for federal agencies to follow when determining if federal civilian positions require a security clearance. ODNI agreed that executive branch agencies require simplified and uniform policy guidance to assist in determining appropriate sensitivity designations, and cited steps it is taking in coordination with OPM, DOD, and OMB. Specifically, ODNI acknowledged its work with OMB and OPM to jointly issue revisions to part 732 of Title 5 of the Code of Federal Regulations by the end of 2012. 
Second, ODNI concurred with our recommendation that, once the policy and procedures are issued, the DNI coordinate with the Director of OPM to revise the position designation tool to reflect that guidance. ODNI stated that it plans to work with OPM and other executive branch agencies through the Security Executive Agent Advisory Committee to develop a position designation tool that provides detailed descriptions of the types of positions where the occupant could bring about a material adverse impact to national security due to the duties and responsibilities of the position. ODNI stated its belief that a tool that provides agencies with detailed descriptions of this type will bring about greater uniformity across the government in agency position designations. Third, ODNI concurred with our recommendation that the DNI, in coordination with the Director of OPM and other executive branch agencies as appropriate, issue guidance to require executive branch agencies to periodically review and revise or validate the designation of all federal civilian positions. ODNI agreed with our assessment that the duties and responsibilities of federal positions may be subject to change, and stated that it plans to work with OPM and other executive branch agencies through the Security Executive Agent Advisory Committee to ensure that position designation policies and procedures include a provision for periodic reviews. While ODNI recognized that the emphasis of this report is on civilian positions that require access to classified information, it wished to emphasize that the DNI’s role as Security Executive Agent under Executive Order 13467 applies to all sensitive positions, and that positions that require access to classified information are a subset of all sensitive positions. ODNI stated that any guidance issued by the Security Executive Agent will cover all sensitive positions and associated investigative standards and adjudicative guidelines. 
OPM also commented on all three of the recommendations in this report in its written comments. OPM concurred with our second recommendation, which is addressed more directly to OPM, that its Director collaborate with the DNI in their respective roles as executive agents to revise the position designation tool to reflect updated federal position designation guidance. OPM stated that it committed to doing so in a February 2010 strategic framework document which was executed by officials within OMB, OPM, DOD, and ODNI. OPM also acknowledged that any revisions to the tool need to await final action with respect to proposed position designation regulations, which is consistent with our recommendation. In addition, OPM summarized executive orders that describe its authority. OPM also supported our third recommendation that the DNI, in coordination with the Director of OPM and other executive branch agencies as appropriate, issue guidance to require executive branch agencies to periodically review and revise or validate the designation of all federal civilian positions. OPM stated that it would be pleased to work with the DNI on guidance concerning periodic reviews of existing designations. While ODNI concurred with our first recommendation—that the DNI, in coordination with the Director of OPM and other executive branch agencies as appropriate, issue clearly defined policy and procedures for federal agencies to follow when determining whether federal civilian positions require a security clearance—OPM stated that it is not clear to OPM that it has a significant role in prescribing the policy and procedures for federal agencies to follow when determining if a federal civilian position requires a security clearance. 
The basis for OPM's statement is Executive Order 12968 (as amended by Executive Order 13467), which gives agency heads the ultimate responsibility to grant or deny security clearances, subject to investigative standards and adjudicative guidelines prescribed by the DNI. In this report, we acknowledge that authority to grant or deny a security clearance resides with agency heads under Executive Order 12968. However, as we also state in our report, Executive Order 13467 provides the DNI the authority to issue guidelines and instructions to the heads of agencies to ensure appropriate uniformity, centralization, efficiency, effectiveness, and timeliness in processes relating to determinations by agencies of eligibility for access to classified information or eligibility to hold a sensitive position. Further, as we state in our report, this Executive Order established a Suitability and Security Clearance Performance Accountability Council to be the government-wide governance structure responsible for driving implementation and overseeing security and suitability reform efforts. This order appointed the DNI as the Security Executive Agent and the Director of OPM as the Suitability Executive Agent, and calls for investigations of suitability and security to be aligned using consistent standards, to the extent practicable. Therefore, we continue to believe that additional guidance from the Security Executive Agent—the DNI—would help align processes across multiple executive branch agencies, and note that ODNI agreed with this assessment. Further, we included OPM in our recommendation as a consulting agency in its role as the Suitability Executive Agent and because, according to OPM, it is the investigative service provider for much of the executive branch. Finally, we recommended that the DNI work with other agencies as necessary in an acknowledgement of the joint nature of the reform effort and its oversight structure through the Performance Accountability Council. 
OPM’s response to this report discussed other points for consideration, which are summarized below.

Relationship between the existing position designation tool and security clearances: OPM stated in its comments that one of the premises upon which this report is based is not accurate. Specifically, OPM asserted that we repeatedly posited that agencies must perform the national security designation in order to know whether the occupant will require a security clearance when, in fact, whether the occupant of a particular position will need access to classified information or eligibility for such access (i.e., a security clearance) is one of the factors that help determine whether a position is sensitive. Accordingly, OPM wrote that there is no basis for GAO to conclude that OPM’s position designation tool affects how agencies determine whether the occupant of a position requires access to classified information or eligibility for such access. We state in our report that to assist with position designation, the Director of OPM has developed a process that includes a position designation system and corresponding tool. We continue by stating that the tool does not directly determine whether a position requires a clearance, but rather helps determine the sensitivity of the position, which informs the type of investigation needed. We believe that these statements are consistent with OPM’s explanations and, therefore, do not believe that one of the premises upon which this report is based is inaccurate. However, we have reviewed and made revisions to other statements in our final report to ensure consistency with this point.

Additional need for guidance to support the position designation tool: OPM noted that it provided us with copies of audits that OPM had performed on agencies that employ competitive service civilian personnel, where it observed inconsistencies in agency application of the tool. In its comments, OPM cited several reasons why this might happen. 
We believe this is consistent with our findings that OPM found examples of inconsistency between agency position designation and OPM guidance, and also that officials from executive branch departments expressed varying opinions to us regarding the tool. In response to other discussion in our report about the tool, OPM stated that its proposed revision to part 732 of Title 5 of the Code of Federal Regulations was intended to establish a basis for more detailed guidance. We also note, as previously discussed, that OPM concurred with our recommendation to collaborate with the DNI to revise the tool. In its written comments, DHS noted GAO’s positive acknowledgement of DHS’ efforts to ensure that only those who need a security clearance are authorized one. Although the report does not contain any recommendations specifically directed to DHS, the Department stated that it remains committed to being an active member of the government-wide Suitability and Security Clearance Performance Accountability Council. We are sending copies of this report to the House Committee on Homeland Security. We are also sending copies to the Director of National Intelligence, the Director of the Office of Personnel Management, the Secretary of Homeland Security, the Secretary of Defense, and the Office of Management and Budget. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. This report reviewed government policies and practices for identifying federal civilian positions that require security clearances, and analyzed whether a uniform, consistent, and effective security clearance requirements determination process is in place. 
Our work focused on the Office of the Director of National Intelligence (ODNI), on the basis of its role to develop personnel security clearance policy and guidance for the federal government. Further, the scope of our work focused more specifically on the security clearance requirements of federal civilian positions from selected components within the Department of Homeland Security (DHS) and the Department of Defense (DOD), because of the volume of clearances that these two agencies process. Within DHS, selected components include the U.S. Coast Guard, U.S. Immigration and Customs Enforcement, and the Transportation Security Administration. Within DOD, selected components include the headquarters-level elements of the Departments of the Army, the Navy, the Air Force, and the Washington Headquarters Services. We also included the Office of Personnel Management (OPM) in our review on the basis of its role implementing security clearance reform and as the primary investigative service provider of the federal government. See table 1 for a complete list of the agencies and departments interviewed for our review. To determine the extent to which the executive branch has established policies and procedures for agencies to use when first determining whether federal civilian positions require a security clearance, we interviewed key federal officials from the above-mentioned federal agencies and selected components, as well as OPM and ODNI. We reviewed relevant Executive Orders (including 10450, 12968, and 13467), Joint Reform Team reports, OPM and ODNI audits, and part 732 of Title 5 of the Code of Federal Regulations. We also reviewed OPM’s proposed revision to the Code of Federal Regulations, which aims to clarify the policy for designating national security positions and was published in the Federal Register in December 2010. 
We obtained and analyzed personnel security clearance policies within DHS, DOD, and the selected components within these departments to identify the extent to which they have outlined processes for individuals responsible for determining if federal civilian positions require a security clearance. In addition, we obtained and analyzed OPM’s position designation system and tool because agencies we visited use the tool in the position designation process. To determine the extent to which the executive branch has established policies and procedures for agencies to review and revise or validate existing federal civilian position security clearance requirements, we interviewed knowledgeable officials from the federal agencies and selected components in table 1. We reviewed part 732 of Title 5 of the Code of Federal Regulations to identify the extent to which it delineates processes and responsibilities for federal agencies to review and revise or validate whether federal civilian positions require a security clearance. We also analyzed DHS’s and DOD’s personnel security policies, and the applicable policies of selected components within these departments to identify the extent to which each department and selected component has established processes for reviewing, revising, and validating existing federal civilian position security clearance requirements. We conducted this performance audit from July 2011 through July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our objectives. 
The Department of Homeland Security (DHS), the Department of Defense (DOD), and their respective components have developed policies and procedures that relate to position designation. Both DHS and DOD policies provide criteria, in addition to those outlined in the Office of Personnel Management’s (OPM) tool, for position designating officials to use in determining the sensitivity level of a position. Table 2 below provides a descriptive comparison of DHS- and DOD-specific position designation guidance. DHS’s management instruction regarding the personnel security and suitability program (DHS Management Instruction 121-01-007) defines sensitivity levels and instructs the DHS components to follow OPM’s position sensitivity designation guidance when determining the proper sensitivity level for civilian positions. Further, the supervising official with sufficient knowledge of duty assignments is responsible for collaborating with Human Resources and assigning position sensitivity designations; those designations are then subject to final approval by the component’s respective Personnel Security Office. Immigration and Customs Enforcement: In addition to DHS’s management directive, Immigration and Customs Enforcement officials confirmed that they are using OPM’s position sensitivity designation guidance and position designation tool to ensure that their civilian positions have the proper sensitivity level. According to these officials, Immigration and Customs Enforcement’s Office of Professional Responsibility and Office of Human Capital work with the program offices to establish and validate position security designations. Transportation Security Administration: In addition to DHS’s management directive, the Transportation Security Administration developed informal guidance on the position designation process and uses OPM’s position designation tool to determine the sensitivity level for its positions. 
The Transportation Security Administration Personnel Security Section requires the manager to confirm that access to classified information is required to perform the duties of the position. In addition, the Transportation Security Administration’s Personnel Security Section does a final review of all position and risk designations. U.S. Coast Guard: According to U.S. Coast Guard officials, the U.S. Coast Guard follows the criteria for position designation laid out in the Commandant of the Coast Guard Instruction 5520.12C, Personnel Security and Suitability Program. In addition, those officials indicated that the U.S. Coast Guard uses OPM’s position designation tool for determining the sensitivity level for civilian positions. As part of a standard hiring practice, supervisors engage Human Resources with a request for personnel action. This initiates the prerecruitment phase of the process where the need of the position is validated, the position description is reviewed and updated, the job analysis is confirmed, and the recruitment strategy is executed. DOD’s personnel security regulation and other guidance provide DOD components with detailed criteria to consider when determining whether a position requires access to classified information. Although DOD’s policy is also under revision, the current policy incorporates OPM’s definitions for critical-sensitive and noncritical-sensitive positions. 
Further, DOD’s regulation specifically states that personnel security clearances shall not normally be issued:
- to persons in nonsensitive positions;
- to persons whose regular duties do not require authorized access to classified information;
- for ease of movement of persons within a restricted area whose duties do not require access to classified information;
- to persons who may only have inadvertent access to sensitive information or areas, such as guards, emergency service personnel, firefighters, doctors, nurses, police, ambulance drivers, or similar personnel;
- to persons working in shipyards whose duties do not require access to classified information;
- to persons who can be prevented from accessing classified information by being escorted by cleared personnel;
- to food service personnel, vendors, and similar commercial sales or service personnel whose duties do not require access to classified information;
- to maintenance or cleaning personnel who may only have inadvertent access to classified information unless such access cannot be reasonably prevented;
- to persons who perform maintenance on office equipment, computers, typewriters, and similar equipment who can be denied classified access by physical security measures;
- to perimeter security personnel who have no access to classified information; and
- to drivers, chauffeurs, and food service personnel.
In addition, DOD’s Under Secretary of Defense for Personnel and Readiness issued a memorandum requiring the use of OPM’s position designation system and tool to determine the sensitivity level for civilian positions. Further, some of the DOD components that we visited have developed policies that extend beyond the DOD personnel security policy. Army: Army officials affirmed that they use OPM’s position designation tool to determine the sensitivity level of all civilian positions. 
In addition, Army Regulation 380-67 defines sensitive positions and gives heads of DOD components or their designees authority, subject to certain conditions, to delegate the designation of position sensitivity within their chain of command. Further, a 2006 Army memorandum called for sensitivity reviews of all Army civilian positions every 2 years, at a minimum. Navy: According to officials, the Department of the Navy follows guidance in the Secretary of the Navy Regulation M-5510.30 along with DOD’s personnel security regulation, which requires designators to set the clearance level for civilian personnel according to the risk the position poses. According to a Navy personnel security official, Human Resources offices and local commands have been revalidating positions according to the needs of the command in response to a 2011 memorandum from the Assistant Secretary of the Navy for Manpower and Reserve Affairs. According to Navy officials, Human Resources offices used the position designation tool provided by OPM to determine the sensitivity level for all civilian positions. Air Force: The Air Force uses Air Force Instruction 31-501 coupled with the DOD 5200.2-R to implement its personnel security program. According to the instruction, commanders with position designation authority determine the security sensitivity of civilian positions. Each position is coded with the appropriate security access requirement and identified in the unit manning document and the Defense Civilian Personnel Data System. If the security access requirement code requires a change, the unit commander submits an authorization change request to the servicing security activity. The commander also conducts an annual review of positions to determine the accuracy of position coding and adjust coding if necessary. Air Force officials confirmed that they are using OPM’s Position Designation System and Tool to determine the proper sensitivity level for all civilian positions. 
Also, according to Air Force officials, in situations where a commander wants to upgrade a particular position, it must be reviewed and approved by a 3-star general. Washington Headquarters Services: Washington Headquarters Services oversees position designation for certain DOD headquarters activities and defense agencies. According to Washington Headquarters Services officials, these agencies and activities follow DOD’s personnel security regulation for position designation and use OPM’s position designation system and tool in accordance with DOD policy. Since 1997, federal agencies have followed a common set of personnel security investigative standards and adjudicative guidelines for determining whether federal workers and others are eligible to receive security clearances. Once an applicant is selected for a position that requires a security clearance, government agencies rely on a multiphased personnel security clearance process that includes the application submission phase, investigation phase, and adjudication phase, among others. Different departments and agencies may have slightly different security clearance processes—the steps outlined below are intended to be illustrative of a typical process.

The application submission phase. A security officer from an executive branch agency (1) requests an investigation of an individual requiring a clearance; (2) forwards a personnel security questionnaire (Standard Form 86) using the Office of Personnel Management’s (OPM) e-QIP system or a paper copy of the Standard Form 86 to the individual to complete; (3) reviews the completed questionnaire; and (4) sends the questionnaire and supporting documentation, such as fingerprints and signed waivers, to OPM or the investigation service provider.

The investigation phase. Federal investigative standards and OPM’s internal guidance are typically used to conduct and document the investigation of the applicant. 
The scope of information gathered in an investigation depends on the level of clearance needed and whether the investigation is for an initial clearance or a reinvestigation for a clearance renewal. For example, in an investigation for a top secret clearance, investigators gather additional information through more time-consuming efforts, such as traveling to conduct in-person interviews to corroborate information about an applicant’s employment and education. After the investigation is complete, the resulting investigative report is provided to the agency.

The adjudication phase. Adjudicators from an agency use the information from the investigative report to determine whether an applicant is eligible for a security clearance. To make clearance eligibility decisions, the adjudication guidelines specify that adjudicators consider 13 specific areas that elicit information about (1) conduct that could raise security concerns and (2) factors that could allay those security concerns and permit granting a clearance. In addition, once the background investigation and adjudication for a security clearance are complete, the requesting agency determines whether the individual is eligible for access to classified information. However, often the security clearance—either at the secret or top secret level—does not become effective until an individual needs to work with classified information. At that point, the individual would sign a nondisclosure agreement and receive a briefing in order for the clearance to become effective. DOD commonly employs this practice and, in some cases, the individual ultimately never requires access to classified information. Therefore, not all security clearance investigations result in an active security clearance. Finally, once an individual is in a position that requires access to classified national security information, that individual is reinvestigated periodically at intervals that are dependent on the level of security clearance. 
For example, top secret clearance holders are reinvestigated every 5 years, and secret clearance holders are reinvestigated every 10 years. In addition to the contact named above, David Moser (Assistant Director), Sara Cradic, Cynthia Grant, Nicole Harris, Jeffrey Heit, Kimberly Mayo, Richard Powelson, Jason Wildhagen, Michael Willems, and Elizabeth Wood made key contributions to this report. Personnel Security Clearances: Continuing Leadership and Attention Can Enhance Momentum Gained from Reform Effort. GAO-12-815T. Washington, D.C.: June 21, 2012. Background Investigations: Office of Personnel Management Needs to Improve Transparency of Its Pricing and Seek Cost Savings. GAO-12-197. Washington, D.C.: February 28, 2012. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Personnel Security Clearances: Overall Progress Has Been Made to Reform the Governmentwide Security Clearance Process. GAO-11-232T. Washington, D.C.: December 1, 2010. Personnel Security Clearances: Progress Has Been Made to Improve Timeliness but Continued Oversight Is Needed to Sustain Momentum. GAO-11-65. Washington, D.C.: November 19, 2010. DOD Personnel Security Clearance Reform: Preliminary Observations on Timeliness and Quality. GAO-11-185T. Washington, D.C.: November 16, 2010. Privacy: OPM Should Better Monitor Implementation of Privacy-Related Policies and Procedures for Background Investigations. GAO-10-849. Washington, D.C.: September 7, 2010. Personnel Security Clearances: An Outcome-Focused Strategy and Comprehensive Reporting of Timeliness and Quality Would Provide Greater Visibility over the Clearance Process. GAO-10-117T. Washington, D.C.: October 1, 2009. Personnel Security Clearances: Progress Has Been Made to Reduce Delays but Further Actions Are Needed to Enhance Quality and Sustain Reform Efforts. GAO-09-684T. Washington, D.C.: September 15, 2009. 
Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488. Washington, D.C.: May 19, 2009. DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. Washington, D.C.: May 19, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 22, 2009. DOD Personnel Clearances: Preliminary Observations about Timeliness and Quality. GAO-09-261R. Washington, D.C.: December 19, 2008. Personnel Security Clearances: Preliminary Observations on Joint Reform Efforts to Improve the Governmentwide Clearance Eligibility Process. GAO-08-1050T. Washington, D.C.: July 30, 2008. Personnel Clearances: Questions for the Record Regarding Security Clearance Reform. GAO-08-965R. Washington, D.C.: July 14, 2008. Personnel Clearances: Key Factors for Reforming the Security Clearance Process. GAO-08-776T. Washington, D.C.: May 22, 2008. Employee Security: Implementation of Identification Cards and DOD’s Personnel Security Clearance Program Need Improvement. GAO-08-551T. Washington, D.C.: April 9, 2008. DOD Personnel Clearances: Questions for the Record Related to the Quality and Timeliness of Clearances. GAO-08-580R. Washington, D.C.: March 25, 2008. DOD Personnel Clearances: DOD Faces Multiple Challenges in Its Efforts to Improve Clearance Processes for Industry Personnel. GAO-08-470T. Washington, D.C.: February 12, 2008. Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008. DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. High-Risk Series: An Update. 
GAO-07-310. Washington, D.C.: January 2007. DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006. DOD Personnel Clearances: Questions and Answers for the Record Following the Second in a Series of Hearings on Fixing the Security Clearance Process. GAO-06-693R. Washington, D.C.: June 14, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006. Questions for the Record Related to DOD’s Personnel Security Clearance Program and the Government Plan for Improving the Clearance Process. GAO-06-323R. Washington, D.C.: January 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long- standing Problems with DOD’s Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO’s High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.
Security clearances allow personnel access to classified information that, through unauthorized disclosure, can, in some cases, cause exceptionally grave damage to U.S. national security. In 2011, the DNI reported that over 4.8 million federal government and contractor employees held or were eligible for a clearance. To safeguard classified data and manage costs, agencies need an effective process to determine whether civilian positions require a clearance. GAO was asked to examine the extent to which the executive branch has established policies and procedures for agencies to use when (1) first determining if federal civilian positions require a security clearance and (2) reviewing and revising or validating existing federal civilian position security clearance requirements. GAO reviewed executive orders and the Code of Federal Regulations and met with officials from ODNI and OPM because of their Directors’ assigned roles as Security and Suitability executive agents, as well as DHS and DOD based on the volume of clearances they process. The Director of National Intelligence (DNI), as Security Executive Agent, has not provided agencies clearly defined policy and procedures to consistently determine if a position requires a security clearance. Executive Order 13467 assigns DNI responsibility for, among other things, developing uniform and consistent policies to determine eligibility for access to classified information, and gives the DNI authority to issue guidance to agency heads to ensure uniformity in processes relating to those determinations. In the absence of this guidance, agencies are using an Office of Personnel Management (OPM) tool that OPM designed to determine the sensitivity and risk levels of civilian positions which, in turn, inform the type of investigation needed. OPM audits, however, found inconsistency in these position designations, and some agencies described problems in implementing OPM’s tool. 
In an April 2012 audit, OPM reviewed the sensitivity levels of 39 positions in an agency within the Department of Defense (DOD) and reached different conclusions than the agency for 26 of them. Problems exist, in part, because OPM and the Office of the Director of National Intelligence (ODNI) did not collaborate on the development of the position designation tool, and because their roles for suitability—consideration of character and conduct for federal employment—and security clearance reform are still evolving. Without guidance from the DNI, and without collaboration between the DNI and OPM in future revisions to the tool, executive branch agencies will continue to risk making security clearance determinations that are inconsistent or at improper levels. The DNI also has not established guidance to require agencies to review and revise or validate existing federal civilian position designations. Executive Order 12968 provides that each agency shall request or grant clearance determinations, subject to certain exceptions, based on a demonstrated need for access, and keep to a minimum the number of employees that it determines are eligible for access to classified information. The order also states that access to classified information shall be terminated when an employee no longer has a need for access, and prohibits agencies from requesting or approving eligibility in excess of actual requirements for access. During this review of Department of Homeland Security (DHS) and DOD components, GAO found that agency officials were aware of the need to keep the number of security clearances to a minimum, but were not always required to conduct periodic reviews and validations of the security clearance needs of existing positions. Overdesignating positions has significant cost implications, given that the fiscal year 2012 base price for a top secret clearance investigation conducted by OPM is $4,005, while the base price of a secret clearance investigation is $260. 
Conversely, underdesignating positions could lead to security risks. GAO found that the agencies follow varying practices because the DNI has not established guidance that requires executive branch agencies to review and revise or validate position designations on a recurring basis. Without such a requirement, executive branch agencies may be hiring and budgeting for initial and periodic security clearance investigations using position descriptions and security clearance requirements that no longer reflect national security needs. Further, since reviews are not done consistently, DHS and DOD and other executive branch agencies cannot have assurances that they are keeping the number of positions that require security clearances to a minimum. GAO recommends that the DNI issue clearly defined policy for agencies to follow when determining if federal civilian positions require a security clearance, and that the DNI and Director of OPM collaborate to revise the existing position designation tool. GAO further recommends that the DNI issue guidance to require agencies to periodically review and revise or validate the designation of their existing federal civilian positions. ODNI and OPM concurred, although OPM raised concerns with which GAO disagrees and addresses in the report.
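The cost implications of overdesignation lend themselves to a simple back-of-the-envelope calculation. The short sketch below is illustrative only: the function name and the position count in the example are ours, and the only figures taken from the report are the fiscal year 2012 base prices of $4,005 (top secret investigation) and $260 (secret investigation).

```python
# Illustrative sketch, not an official GAO or OPM calculation.
# The base prices are the FY2012 figures cited in the report; the
# function and the 100-position example are hypothetical.
TOP_SECRET_BASE_PRICE = 4005  # dollars, FY2012 OPM base price
SECRET_BASE_PRICE = 260       # dollars, FY2012 OPM base price


def overdesignation_cost(num_positions: int) -> int:
    """Extra initial-investigation cost if positions needing only a
    secret clearance are designated at the top secret level."""
    return num_positions * (TOP_SECRET_BASE_PRICE - SECRET_BASE_PRICE)


# Overdesignating 100 positions adds $374,500 in initial investigation
# costs alone, before the more frequent top secret reinvestigations
# (every 5 years rather than every 10) are taken into account.
print(overdesignation_cost(100))  # → 374500
```

The per-position difference of $3,745 compounds over time, since top secret clearance holders are reinvestigated twice as often as secret clearance holders.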
Academic and government research has highlighted the crucial importance of a skilled workforce to the nation’s economy. This research has also suggested that the training being provided to current and future workers may not be sufficient to ensure a workforce with the skills necessary for fostering economic growth and improved living standards. Partly in response to these concerns, over the last several years, the administration, the Congress, and the private sector have suggested or implemented several initiatives to enhance the provision and effectiveness of training. Some of these initiatives call for a greater role for employers in determining training needs or conducting training. Although many federal training programs focus mainly on the needs of particular types of worker populations (such as dislocated workers), other training programs focus on employers’ training needs. For example, in most states employers can take advantage of programs that provide funding for employment training as an integral part of the state’s economic development strategy. As of 1995, 47 states spent over $350 million for programs that helped employers develop customized formal training programs for their workers. The rationale for these state-funded programs is that worker retraining is critical to economic development. Most of these programs are funded from general state revenues; some states, however, require mandatory employer contributions to state training funds (similar to unemployment insurance payroll taxes). To be eligible for these state programs, employers often must meet financial or industry-specific requirements. To participate in these programs and receive funding for training, employers must prepare detailed training plans about the training they want to provide, be responsible for providing the training, and adhere to specific follow-up requirements. 
Employers may also fund their own training programs, individually, collectively, or in conjunction with unions, to train workers or new hires. Although a 1990 study estimated that employers spend billions each year on training activities, no comprehensive list exists of all privately funded training programs or how employers participate in them. Collaborations among employers and other entities—sometimes called consortia—are often developed mainly to provide member employers with trained workers at reasonable costs. Such consortia have historically existed in industries with technical trade or unionized occupations that have joined together for apprenticeship training. Apprenticeship training is a structured approach to formal training in which employers send their workers to classroom training while providing them with supplemental on-the-job training (OJT). Although not required, employers often register their programs with the U.S. Department of Labor’s Bureau of Apprenticeship and Training (BAT) to take advantage of various benefits. In 1995, Labor had about 34,000 registered apprenticeship programs with over 355,000 apprentices. Small employers have special training needs because of the workers they tend to employ, according to experts. Establishments of fewer than 100 workers employ over 50 percent of all workers, and, according to SBA, small employers provide the first work experience for two out of every three workers. Small employers also often hire workers with fewer skills and less education because they tend to pay lower wages than larger employers in their industries or geographic areas. Small employers are less likely to use available training programs and resources—especially federally sponsored training programs—than are larger employers, according to the extremely limited quantitative data available. 
For example, a recent employer establishment survey conducted by EQW found that 22 percent of work places with over 100 employees used government-sponsored training programs, while only about 12 percent of work places with fewer than 100 employees used them. A 1993 University of Kentucky survey also found that, while 44 percent of establishments with 500 or more employees used government training programs, only about 20 percent of those with 100 or fewer employees used them. An employer establishment survey conducted by BLS in 1994 found that although over 50 percent of establishments with 250 or more employees used apprenticeship programs, less than 18 percent of establishments with 50 or fewer employees used them. These data are supported by the views of many experts, who believe that the amount and quality of training being provided by small employers may not be sufficient to ensure a workforce with the skills necessary for fostering economic growth. No consensus exists, however, on the type or level of small employer participation appropriate to ensure that present and future workers of small employers are adequately skilled. In addition, several reasons may explain why small employers may not train or participate in available training programs. Finally, use of available training programs may be limited for employers of all sizes. Nonetheless, the significant difference between the use of available programs by small employers compared with larger ones raises questions about why small employers participate less in available training programs. Small employers may face several barriers when participating in available training programs. As shown in table 1, we categorized these barriers as economic, institutional, or informational. Appendixes II, III, and IV provide detailed information on the presence of these barriers in each of the programs we visited. 
Economic costs may affect small employers’ participation in training programs because small employers typically have fewer financial and human resources than larger employers, making training costs prohibitive. This lack of resources makes it more difficult for small employers to pay tuition costs or allow workers to take training during normal work hours because employers would have to absorb the cost of wages paid to workers who are training rather than working. In addition, because small employers are less likely to have staff devoted to training or personnel matters, it may be relatively more costly and time consuming for them to divert resources from production to completing administrative requirements for program participation. Finally, because turnover rates for small employers tend to be higher than for larger employers in the same industry and geographic area, small employers may be less willing or able to absorb the lost costs and benefits of training if the newly trained employee quits or is hired by another employer. Economic barriers to training were present to some degree in all the programs we visited—federal, state, and private. In the two EDWAA sites we visited, the costs associated with paperwork requirements somewhat discouraged employers from participating. For example, if an employer wants to provide OJT to a dislocated worker, the employer must, among other requirements, complete a detailed OJT training agreement, specifying what training is to be provided, its duration, the number of participants, wage rates to be paid, the rate of reimbursement, and a description of what participants will learn. The employer must also maintain time and attendance reports and other records and make them available to program officials to support amounts reimbursed by the government under these contracts during the training period (which can last 6 months or longer). 
The employer also must allow program officials to inspect the program site to ensure that the training is actually being provided. Officials at the program sites we visited said that because of the significant requirements, employers rarely provide OJT. State-funded training programs also had economic barriers that may discourage small employer participation. First, to participate in these programs, employers must comply with extensive paperwork requirements, providing detailed information about their operations, why they need training, and the parameters of the training. Then, they must develop a detailed training plan. Employers told us that, because of the amount of information required and the detailed nature of the information on the application, completing the application process is time consuming and burdensome. Program officials said the application process takes 5 to 6 months or possibly longer; one employer said it took over a year to complete the application process. In addition, to receive all of the requested funding throughout the training period (which can last up to 2 years), employers must meet with program staff and supply updated training information. Finally, when the training is completed, employers must provide information on its impact. The small employers we interviewed who were participating in the program said the monitoring requirements were also burdensome and time consuming. These employers participated despite the application and monitoring requirements because they recouped their costs when they received the training funds. However, they noted that other small employers may not be willing to endure the process to receive the training funds. 
In addition, a few of the employers participating in one of these programs said that the program’s requirement that employers allow workers to receive training during normal work hours was difficult and sometimes costly. They said that employees did not always want to attend training during normal work hours and maintaining consistent productivity levels while employees were in training was difficult. Finally, one of the apprenticeship programs posed economic barriers to employers. Employers were hesitant to hire and train apprentices from the bricklayer apprenticeship program because they believed it was too costly. Officials overseeing this program said they often had difficulty placing all apprentices because employers said that, although apprentices received reduced wages during the training period, the reduced wages did not offset the apprentices’ lower productivity—a cost they could not afford. A broader economic concern facing employers in this apprenticeship program was the cyclical nature of the construction industry. They said that, even in the best of times, unionized journeymen sometimes cannot work all year because of weather or other factors and hiring apprentices when journeymen lack work would be unfair. One employer also told us it would not be fair to the apprentices to hire them and then lay them off soon after for lack of work. Other barriers limiting small employer participation in training programs are institutional, often resulting from a program’s organization or operations. Institutional barriers may limit small employer participation in programs because they may discourage or disqualify small employers from accessing the programs. For example, if a program targets its services primarily to workers rather than employers, it may discourage small employers because they do not have personnel dedicated to finding out about program procedures, which may be confusing or complicated. 
In addition, if a program limits its services to certain industries, locations, or training needs, employers of all sizes will not be able to take advantage of the program if they cannot fit into any of the targeted groups. Small employers may find it even more difficult if they do not have industry characteristics or training needs in common with other industry employers. The EDWAA and apprenticeship programs faced some of these institutional barriers. EDWAA’s traditional focus on workers may discourage small employer participation. First, most of the program’s resources are to be directed to workers’ needs rather than employers’ needs, which may mean that programs target large employers because they are better connected to established human resource networks and can hire more workers out of the program than small employers. One small employer told us that he believed training programs tended to focus on larger employers because they are easier to find. In that respect, programs may serve the needs of larger employers; one employer said that, being a small employer, he cannot afford to wait 6 months to a year until the required training is completed before he can get a trained worker (as a larger employer may be able to do). Moreover, the program allows much flexibility in how local sites involve employers, so that disparate views and approaches to employer participation may result. For example, one site we visited believed that OJT was very important and that employers should take advantage of it. The other site did not actively market this part of the program to employers. Circumstances such as this may be especially problematic for small employers, who have less time and fewer resources to navigate among various program structures to determine the participation requirements. The apprenticeship programs also presented some institutional barriers for small employers. 
Both of the programs we visited provided structured training for specific technical skills needed by similar types of employers in a single geographic area. The training these programs provided was not available to small employers with different training needs or those in a different location. Furthermore, because one of the programs provided training for employers with unionized employees only, employer participation would require an employer to become unionized. This could include signing a collective bargaining agreement and agreeing to other conditions that a small employer may not be willing or able to agree to. Another barrier that may limit small employer participation is informational—that is, small employers may lack appropriate knowledge of training needs or available training programs. Small employers, because they are less likely to dedicate staff to training and personnel matters, may have more difficulty than larger employers identifying training needs or determining what training programs are available to meet their needs. In addition, small employers may be less involved with established training networks or industry groups or trade associations than larger employers, so they often lack adequate and accurate information about what training programs are available to meet their needs. Informational barriers may also limit full participation for the EDWAA and the state training fund programs. For example, many small employers we interviewed that had hired workers through the EDWAA program were generally unaware of the program and did not know that the program would reimburse them for OJT they provided to a dislocated worker. Because small employers were generally unaware of the program, they said that they did not typically seek out these workers for their job vacancies. In addition, many of them had negative opinions about the program, which also influenced their decisions about whether to seek individuals from federal training programs for vacancies. 
For example, several employers said that they did not believe the quality of participants in federal training programs was very good. These employers, however, were pleased with the skills of the workers they had unknowingly hired through the program. Program officials at one of the sites we visited (NTPIC in Tennessee) said that current funding restrictions make it difficult to adequately serve individuals and small employers simultaneously. At the other site (TPIC in Oregon), program officials said they continued to believe that they achieved significant employer participation through a number of activities. They also said their encouragement of a “self-directed job search” for dislocated workers means it is quite possible that employers are unknowingly hiring workers who had received services through this program. Moreover, several of the small employers we spoke with who were participating in the state training fund programs said they were unaware of the program until contacted by a consortium. While these employers ultimately were able to participate, they believed that other small employers who were not involved with such groups may not be aware of the programs’ availability. Program officials at the sites we visited generally acknowledged that although small employer participation was important to the program’s success, it was often difficult to achieve. They said that they want to reduce many of the barriers to small employer participation, but some are more difficult to address than others. They said that institutional barriers—those inherent in the programs’ organization—may require amendments or other changes to traditional operations to be addressed. Programs that had been developed with a specific employer focus (the state training fund and apprenticeship programs), however, were using or encouraging consortia to reduce many of the economic and informational barriers to participation in these programs. 
In the state training fund and apprenticeship programs, consortia usually centered around local community colleges, trade associations, unions, or employers working on behalf of others. In addition, while their activities varied, the consortia typically linked small employers with the services offered by the training programs, including helping employers identify training needs and comply with program requirements. According to the small employers we spoke with who participated in these programs through consortia, consortia were a significant determinant in their participation. Appendixes III and IV highlight the consortia activities and how they helped small employers participate in the programs. Although the use of consortia is permitted in EDWAA, we did not find them being used at the sites we visited. Labor officials said, however, that incentives to encourage their development may be insufficient. In 1995, Labor provided about $5 million in demonstration grants to 11 sites to demonstrate the ability of organizations to broker linkages between employers and dislocated workers. Through these demonstrations, these organizations—similar to consortia—are to encourage employers to participate in identifying needed employee skills and developing training curricula designed especially to meet the employers’ needs. Several of the demonstration projects focus on individual clusters of small employers, while others use Employer Advisory Committees so that the workforce needs of employers are the major focus of training provided to dislocated workers. The use of consortia in the apprenticeship and state training fund programs reduced many of the economic barriers to participation. In the apprenticeship program, employers said the use of consortia reduced training costs. 
According to one employer participating in the apprenticeship program, because the costs of the program were subsidized by all consortia members, any costs he had to bear individually for training apprentices were lower than they would have been if he had tried to get similar services at local educational institutions. Other employers said that because this subsidization made the training cheaper, they were less concerned about the losses associated with trained workers quitting or being hired by another employer. Consortia also helped to reduce economic barriers in the state training fund programs, primarily because they either completed the administrative and monitoring paperwork for the employers or they provided significant help to the employers to complete it. Representatives of several of the major consortia in these programs that we interviewed said their key role was providing help with the paperwork and other program requirements. Employers participating through consortia said that the use of consortia saved them much time and effort in completing the paperwork and other program requirements. One employer participating through a consortium noted that the consortium’s services were worth far more than the fees charged. Consortia also played a major role in the apprenticeship and state-funded training programs in addressing informational barriers to small employers’ participation. In the apprenticeship programs, the consortia developed a structured training curriculum that could be accessed by all employer members of the consortia. Employers said this significantly reduced the difficulty of identifying training needs. One employer said that he did not have time to identify his training needs, contact local schools to determine who had the appropriate training courses, and put together a curriculum. 
In that respect, he said, the consortium “put it all together.” In addition, all employers we spoke with believed that, because the training was developed with their input, it was industry based and of a better quality than what was available from local institutions. Finally, because the consortia carried out several activities that promoted the industries and the availability of the training, employers participating in both apprenticeship programs believed the consortia made other small employers aware of the training. One employer who was not participating in one of the programs said he was well aware of the program. He believed it was effective and would use it if he had a need for apprentices. The consortia also helped to overcome informational barriers to the state training fund programs. Often, the consortia contacted employers and asked them about their training needs, then worked with them to determine the best way to address those needs. One of the small employers we spoke with said he would not be participating unless one of the consortia had contacted him because he did not know about the program before that. Moreover, another said that, without one of the consortia, he may not have been able to develop an effective training plan because doing so would have been very costly and time consuming. The availability of technical assistance at both of the state training fund programs we studied also helped to relieve economic barriers. Because every employer may not want to become involved with a consortium or none may be available, the state training fund programs also provided significant technical assistance to individual small employers interested in the program. Both programs had staff or used staff from other organizations to visit employers and walk them through the application requirements. When necessary, program staff also help small employers identify training needs to facilitate the application process. 
This helps employers who want to participate but may find it too costly or time consuming to provide the necessary information on their own. Program officials said that technical assistance is provided throughout the training period, which can last up to 2 years. During this time, program staff periodically visit the employer and assess the employer’s progress in providing the training. This assessment includes a review of the training curricula, interviews with instructors and students, and a determination that the training is providing necessary skills. Employers told us that this assistance was critical because it helped to reduce the time needed to complete the paperwork and comply with other administrative requirements. At the EDWAA sites we visited, the use of technical assistance to help employers reduce participation costs was limited because employers are not typically involved in determining workers’ training needs. Although local program officials said they offer technical assistance as needed when an employer chooses to provide training under the OJT part of the program, this happens infrequently because employers often do not take advantage of this part of the program. Small employers are much less likely to participate in training programs than are larger employers. This appears to stem from economic, institutional, or informational barriers that small employers face. The state training fund and apprenticeship programs help small employers overcome these barriers by using or promoting consortia. The state training fund programs also emphasize technical assistance to help individual small employers that were not involved with consortia reduce economic barriers to participation. Although permitted in EDWAA, consortia were not used at the sites we visited. We believe, however, that consortia may be useful in this program as well and that Labor’s demonstration grants may provide more information about the value of consortia. 
As reform of the federal employment training system continues to be debated, our work suggests that greater use of consortia or increased technical assistance to employers could make federal training programs more accessible to small employers. In commenting on a draft of this report (see app. V), the Department of Labor agreed that federal training programs need to work with both large and small employers. Because Labor’s focus, however, continues to be on assisting workers, Labor officials said that they are not in a position to evaluate the economic costs and benefits of special efforts on behalf of private-sector employers. Regarding consortia, Labor officials said that our analysis does not demonstrate that consortia would benefit the EDWAA program. We acknowledge that we did not observe consortia being used in the EDWAA program. We believe, however, that consortia might prove useful in such programs. Labor’s demonstration grants promoting better employer-dislocated worker links should be informative on this point. Labor officials also suggested that we note early in the report that our work was based on a sample of programs that was not statistically representative of the universe of EDWAA sites or other employment training programs. In response to Labor’s concern, we noted this point on page 2 of our report (in addition to app. I). We acknowledge that our findings are from a small number of case studies and, therefore, are not generalizable to all employers. Despite the limitations of case studies, we believe that the detailed, qualitative data collected from many interviews provide useful information on small employers’ experience as participants in selected training programs. Finally, Labor officials responded to the concerns raised about the paperwork requirements associated with the OJT part of the EDWAA program. 
They noted that some of the requirements that employers complained about were added in the recent JTPA amendments to improve accountability. Officials said they continue to believe that OJT is an effective training approach that can provide training opportunities that would not otherwise be available. We are sending copies of this report to the Secretary of Labor and other interested parties. GAO contacts and staff acknowledgments appear in appendix VI. If you have any questions about this report, please contact me at (202) 512-7014. To provide information on the extent of small employers’ participation in employment training programs and identify barriers to this participation, we reviewed available literature, including studies and surveys performed by public- and private-sector experts. For example, we obtained and reviewed the most recent employer surveys of training, including (1) a 1995 employer establishment survey developed by the National Center for the Educational Quality of the Workforce (EQW) (located at the University of Pennsylvania) in conjunction with the Bureau of the Census, (2) a 1994 employer establishment survey by BLS on training, and (3) a 1993 employer survey by the University of Kentucky performed through a grant from SBA. It was beyond our scope to determine the benefits employers derive from training or reasons for employers to train. We also interviewed officials from employer and other associations, such as the American Society for Training and Development; the Society for Human Resources Management; the National Alliance of Business; the United States Chamber of Commerce; and the National Federation of Independent Business about training for small employers. We interviewed cognizant officials from two of the major federal agencies responsible for training and small business operations (the Department of Labor and SBA) to obtain information on their efforts to help small employers with training. 
Finally, we reviewed alternative types of training activities undertaken by various organizations to learn how these organizations are trying to overcome barriers to training, including those for small employers. This review of the literature, available studies, and surveys highlighted several major definitional issues, discussed later in this appendix. To identify ways to help address these barriers, we conducted six case studies of experiences of small employers in several different types of worker training programs. The case studies focused on (1) program goals and operations, (2) characteristics of the specific barriers faced by small employers for participation, and (3) whether any methods had been developed to address the barriers and foster small employer participation. Our scope included training programs for existing workers as well as federal training programs, which are predominantly targeted to unemployed or underemployed workers. Our case studies are not generalizable to all employers. We performed our work in accordance with generally accepted government auditing standards from June 1995 to March 1996. Because no comprehensive data exist on the number and characteristics of U.S. training programs, accounting for every training program that exists is impossible. However, the literature revealed three broad contexts of training programs by funding source: federally funded, state-funded, and privately funded (by an employer, a group of employers, or an employer in collaboration with a union). Typically, federally funded programs target specific populations of unemployed or underemployed workers; state- and privately funded programs target employers and their existing workforce (and in some cases, new hires). We narrowed down the selection of programs and sites within each of these different types and selected six sites, which are listed in table I.1. 
Federally funded (JTPA title III) We reported in 1995 that about 163 federal employment training programs are operated by 15 different agencies. The largest single federally funded training program was operated by Labor under JTPA. At a cost of almost $5 billion in fiscal year 1995, this program targeted specific groups of unemployed workers for training and other assistance they needed to obtain stable employment. About $1.2 billion of this funding was targeted to dislocated workers—those who become unemployed due to plant closings or permanent layoffs. This is title III of JTPA or the Economic Dislocation and Worker Adjustment Assistance (EDWAA) program. These workers, although unemployed, have many characteristics similar to employed workers, since they generally have significant attachment to the labor force and in some cases long job tenure. Because of this population’s similarities with the populations served in the state- and privately funded programs, we decided to use this program for the case study site visits. To select two sites from the over 600 that carry out EDWAA services on the local level, we obtained from Labor a list of the 25 EDWAA program sites that were Enterprise Council members, according to scores for continuous improvement. We reviewed those programs and excluded those that (1) terminated fewer than 100 individuals, since these programs may be too small for analysis, and (2) were too difficult geographically for us to access. We also considered whether the sites performed all of their services in house or contracted them out. This narrowed the selection to five sites; we then contacted officials at the sites and obtained general information about the programs and their views on the importance of small employer participation. 
On the basis of their willingness to participate and their interest in obtaining small employer participation, we selected the North Tennessee Private Industry Council (NTPIC) in Clarksville, Tennessee, and The Private Industry Council (TPIC) in Portland, Oregon. Virtually every state has a program or a set of programs for economic development. Many states include employment training assistance as a primary part of their economic development strategy. In 1995, at least seven states funded their programs through mandatory levies on employers similar to employer payroll unemployment insurance taxes. These states were Alaska, California, Delaware, Hawaii, New Jersey, Rhode Island, and Texas. We obtained general information about these programs and their efforts to serve employers of all sizes. California operates the oldest and largest of these programs (which was developed in 1982 with a fiscal year 1995 budget of $76 million). New Jersey’s program is fairly new (created in 1992) and has a significantly smaller budget (about $20 million in fiscal year 1995). We selected California’s Employment Training Panel (ETP) and New Jersey’s Office of Customized Training (OCT). Because no central information source exists for all privately funded training programs, we used as a proxy those employer apprenticeship programs registered with the Department of Labor’s Bureau of Apprenticeship and Training (BAT). Although an employer is not required to register an apprenticeship program with BAT, in 1995, 355,000 apprentices were in training in 34,000 registered apprenticeship programs. To narrow down the selection of programs, we asked BAT officials to eliminate from that list any inactive, military, and single-employer programs, and programs in states where we had already selected federally funded and state-funded programs. 
We asked BAT to focus on the apprenticeship programs for those occupations that were among the top 25 apprenticeable occupations in 1995 (accounting for two-thirds of all registered apprentices). We also asked that the list identify union and nonunion programs. We reviewed the resulting list and contacted a judgmental selection of programs to obtain program operation information, such as industry and geographic locations, number of apprentices, and the extent of small employer participation. Using these criteria, we selected a nonunion program operated by the Tooling and Manufacturing Association (TMA) in Park Ridge, Illinois, and a union program operated by the Employing Bricklayers Association of Delaware Valley (EBA) in Plymouth Meeting, Pennsylvania. At each of these six sites, we obtained historical program funding and operation information and requested information from local program officials on the characteristics of employers and workers served and available outcome measures. During the visits to these six sites, we interviewed program administrators, those responsible for funding and overseeing the program; a sample of the service providers, those who actually provided the training; and a sample of small employers that used the program in 1995. We also requested information and referrals for employers that did not participate in the program; only TMA provided that information. For the EDWAA and apprenticeship programs, we also obtained program information from Labor officials in Washington, D.C. Although our main objective was to obtain information on the extent of small employer participation in training programs, a related issue is the amount of training small employers provide. To obtain information on this, we reviewed many studies and interviewed experts and employers about their training activities. 
We found a consensus that small employers train with less frequency than larger employers and that the training provided by small employers is often not adequate for their needs. In addition, when small employers did train, they often obtained training from an outside source, such as a vendor or consultant, or through an apprenticeship program. However, detailed analysis of these results is complicated because of several methodological differences. For example, while most of the experts and literature agreed that small employers had a more difficult time training than did large employers, no consensus existed on what criteria defined a small employer. SBA reported that depending upon the industry or the issue being studied, definitions of small employer vary from those having fewer than 100 employees to those having as many as 500. The many studies we reviewed and our interviews with experts confirmed this. We decided to use employers with 100 or fewer employees for our definition of small employer for several reasons. First, much of the literature used 100 employees as a cutoff point for small employers. Second, experts noted that if an employer has fewer than 100 employees, it is less likely to have a separate human resources function and would therefore have more difficulties with training. Third, local program officials often considered small employers those with 100 or fewer employees. Although this lack of consensus makes it difficult to establish a definitive size threshold, the absence of a precise definition is secondary to the broad agreement that employers of fewer than 100 workers have particular problems and needs regarding training. Another term that lacked a standardized definition was training. Again, although experts generally agreed that training occurred more frequently in larger work places than in smaller ones, studies and experts referenced different kinds of training.
Descriptions of training included formal (such as classroom training provided by an educational institution), informal (supervisory instruction at the work site), structured (classroom training at the work site), unstructured (ad hoc training for immediate needs), and OJT (training during the workday), which could be formal or informal, structured or unstructured. In many cases, the type of training was not defined in the particular sources, or, if defined, pertained only to that particular source. Furthermore, other sources defined training as related to specific goals, such as total quality management, workplace education, or safety and health regulations. The three employer surveys illustrate many of these definitional differences. Each survey asked employers about training and generally found that larger work places provide more training than smaller work places. Differences in survey sampling frames and methodology, format, and scope, however, make a detailed comparative analysis difficult. Regarding sampling frame and methodology, all three of the recent employer surveys were a sample of business establishments, which BLS defines as economic units that produce goods or services and are engaged predominantly in one type of economic activity. The use of establishments could be problematic because a small establishment could actually be part of a large employer or a fairly large establishment could actually be part of a relatively small employer. Finally, an establishment and an employer could be one and the same. Moreover, the universes from which the samples of establishments were drawn differed, as did the sampling methodology. The sample for the EQW survey, conducted in 1994 and released in 1995, was drawn from Census’ business establishment listing and included over 4,600 establishments with at least 20 employees. 
The survey omitted establishments with fewer than 20 employees and oversampled establishments in the manufacturing sector and those with 100 or more employees. The sample for the BLS establishment survey, conducted and released in 1994 but covering 1993 data, was drawn from BLS’ business establishment list; it included almost 12,000 nonagricultural establishments. BLS included in its sample any establishment with at least one employee. Finally, the sample for the University of Kentucky study, performed under a grant from SBA in 1992 and released in 1993, was drawn from a database of about 9 million establishments held by a private firm in Fairfield, Connecticut. The sample included about 3,600 establishments, with an oversampling of larger employers. Only about one-third, or 1,300 establishments, provided complete responses to the survey. The surveys also had significant differences in their scope and format. The EQW survey was a telephone survey conducted on EQW’s behalf by the Census Bureau. It asked respondents about formal and informal training and precise types of training (such as literacy, basic education, or executive development training). It also asked about the use of outside sources of training, whether training has increased or decreased in the past several years, and the reasons for any such increase or decrease. BLS’ survey was a mail survey sponsored by Labor’s Employment and Training Administration. This survey asked questions primarily on formal training, which BLS defined as training with a structured format and a defined curriculum, including OJT if it met this definition. The questions focused primarily on the incidence and frequency of six specific types of formal training: (1) orientation training, (2) safety and health training, (3) apprenticeship training, (4) basic skills training, (5) workplace-related training, and (6) job skills training. The questions generally did not cover the use of outside sources for training.
The University of Kentucky study was also a telephone survey conducted by the University’s Survey Research Center. The purpose of the survey was to obtain information on training experiences of workers hired in the previous 3 months only. The questions focused on five categories of training activities: (1) on-site formal training, (2) off-site formal training, (3) informal management training, (4) informal coworker training, and (5) watching others perform. In fiscal year 1995, the federal government provided about $20 billion for 163 programs that involved some aspect of training. The largest of these programs, operated by the Department of Labor, was funded under JTPA. JTPA targets different populations of unemployed or underemployed workers, such as dislocated workers, who need assistance to return to the workforce. Although the JTPA program primarily targets workers, Labor officials said active employer involvement and participation in the programs are critical to their success. This appendix describes how two local organizations provide services to dislocated workers, their efforts to involve employers in the program, and how they measure program effectiveness. Through JTPA, the Congress provides funding to assist workers who need help in finding stable employment. Title III of JTPA was designed to address the employment and training needs of dislocated workers—those workers who have permanently lost or would lose their jobs due to plant closures or layoffs. In 1988, the Congress enacted the Economic Dislocation and Worker Adjustment Assistance Act (EDWAA), which restructured the original title III to improve the quality and efficiency of the services provided to dislocated workers.
The EDWAA program provides dislocated workers a variety of services based on their particular needs, including retraining, support services (such as paying for day care services or transportation so individuals can attend training), or readjustment (job placement services). The goal of the program is to help eligible workers become reemployed as quickly as possible by considering their individual needs and circumstances. As such, in some instances, only readjustment assistance is required; in others, retraining is also required. States receive EDWAA funding from Labor according to a funding formula based on local unemployment trends and other factors. Governors then distribute the funding, according to several factors specified in the legislation, to local organizations in the state’s substate areas that have been authorized to provide EDWAA services (called substate grantees). Governors may also consider factors beyond those specified in the legislation when allocating funds to local areas. In fiscal year 1995, the Congress appropriated about $1.2 billion for dislocated workers under EDWAA. JTPA established private industry councils (PIC) to oversee JTPA activities in substate areas, though a great deal of flexibility exists in how services can be carried out at the local level. Of the 616 substate grantees, over half are state or local government agencies, about one-fourth are incorporated PICs, and the remaining grantees are other organizations such as educational institutions or nonprofit associations. The program allows grantees to provide services in house, contract them out to a third party, or do a combination of both. In addition, the grantees have the authority to determine whether they will provide certain services. Regarding funding, however, the program does restrict local organizations’ allocation of funds.
According to the law, grantees must spend at least 50 percent of the funding they receive on retraining, although substate grantees can request a waiver from the governor to reduce the funding directed to retraining. No more than 25 percent can be spent on needs-related payments and other support services, and no more than 15 percent can be spent on program administration. Depending on the actual allocation decisions, the remaining funds are then spent on basic readjustment services (which include job placement activities). Beginning with the program year that started on July 1, 1995, states can completely waive the retraining and support services limitations in special situations. In program year 1993 (the most recent year for which data are available), about 300,000 workers participated in the EDWAA program. About 164,000 terminated from the program (left the program for any reason), and about 112,000 obtained employment. This resulted in an entered employment rate (the percentage of program terminees who were placed in jobs) of 68 percent. About three-quarters of those terminated were white and between 30 and 54 years old. About half were men, and most had at least a high school diploma or its equivalent. Almost 60 percent of those terminated received retraining before leaving the program. Most of this training was occupational skills training done in a classroom; very little was through OJT. According to federal and local program officials, employers are generally not required to provide paperwork or meet other requirements to hire a dislocated worker who has gone through the program. Local grantees may establish criteria for employers to meet, such as certain wage levels or minimum levels of benefits. The only mandated exception would be if an employer wanted to take advantage of the OJT part of the program.
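The entered employment rate cited above is a straightforward ratio; the short sketch below (illustrative only, and not part of any official EDWAA reporting system) shows the calculation using the program year 1993 figures from the text:

```python
# Illustrative calculation of the EDWAA entered employment rate:
# the percentage of program terminees who obtained employment.

def entered_employment_rate(placed, terminated):
    """Return the entered employment rate as a percentage."""
    if terminated == 0:
        raise ValueError("no terminations recorded")
    return 100.0 * placed / terminated

# Program year 1993 national figures cited in the text: about 112,000
# obtained employment of the roughly 164,000 who terminated.
rate = entered_employment_rate(112_000, 164_000)
print(f"Entered employment rate: {rate:.0f} percent")  # about 68 percent
```

The same ratio underlies the site-level rates reported later in this appendix (for example, 135 placements of 147 terminations at NTPIC yields about 92 percent).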
In these cases, employers must comply with detailed requirements and procedures to be reimbursed for part of the costs they incur during the training period. For the case study sites, we visited two grantees that a group of federal and state government agencies and local entities had identified as providing superior services to dislocated workers. The grantees selected were the North Tennessee Private Industry Council (NTPIC) in Clarksville, Tennessee, and The Private Industry Council (TPIC) in Portland, Oregon. (App. I contains detailed information on how these two sites were selected.) NTPIC is one of 14 substate grantees in Tennessee. Incorporated as a nonprofit, private organization in 1992, NTPIC has a staff of 55 responsible for providing services for EDWAA, other JTPA programs, and selected federal programs for 11 counties in the state. Because of the geographic distance covered, NTPIC staff work out of 11 centers, one located in each of the 11 counties. Each center has at least one caseworker; a center may have several caseworkers depending on the size of the county and the number of participants. NTPIC provides the majority of the EDWAA services and uses local educational institutions to provide classroom training. NTPIC officials said they believed that providing most of the services in house was most efficient because it gave them a greater sense of program ownership and responsibility for the participants and allowed them to stay abreast of local labor market needs. According to Tennessee state Labor and NTPIC officials, the Tennessee economy has been fairly healthy for the last several years, with unemployment about 5 percent as of September 1995. It has pockets of high unemployment, but officials noted that Nashville and Memphis are especially strong in the services industry; other parts of the state are strong in manufacturing.
Although some major employers have shut down operations or left the state, Tennessee’s manufacturing sector is now expanding with the building of several automobile plants. In program year 1994, NTPIC received about $946,000 for the EDWAA program. It served about 388 participants—primarily white females, aged 21 to 39, who had some high school education. Program officials said that this was because Tennessee had a number of “cut and sew operations” that closed. During the year, NTPIC terminated about 147 participants, placing 135 for an entered employment rate of about 92 percent. TPIC is one of six substate grantees in Oregon. Formed in 1987 from two other incorporated PICs, TPIC is a private, nonprofit organization with a staff of 80 to administer EDWAA and other JTPA services for two counties in the state. TPIC contracts out most of the participant-related services to a local community college, which oversees and is part of the Dislocated Worker Project (DWP). DWP is a partnership of five community agencies (two community colleges, the Urban League, Labor’s Community Services Agency, and the local employment service) and TPIC. TPIC administers the DWP contract, monitoring the funds and collecting and reporting program data. TPIC officials said that they believe contracting out these services is efficient because it allows for greater leveraging of resources and economies of scale for training costs. It also allows TPIC to take care of the administrative requirements associated with the program and let the community agencies concentrate on providing the services. Like Tennessee, Oregon also has had a fairly low unemployment rate for the last several years—about 3.4 percent as of November 1995—with some pockets of high unemployment. Officials noted that Oregon is now starting to experience growth in the health care and computer-related industries. In program year 1994, TPIC provided about $1.7 million in EDWAA funding for the DWP contract.
Through this contract, TPIC served about 724 participants, divided fairly evenly between white males and females, aged 22 to 54, with a high school diploma or equivalent. TPIC officials said this population was dislocated from a number of industries. During the year, TPIC terminated about 510 participants and placed 388 for an entered employment rate of 76 percent. TPIC officials said that the composition of dislocated workers is now changing. Not only has TPIC recently started to focus on the long-term unemployed, but it is also finding that workers with low skill levels and language barriers are making up a greater portion of the dislocated population. Federal Labor and program officials at the two program sites said that dislocated workers generally follow similar procedures to receive services, although the particular procedures and services offered vary by location. As previously discussed, however, local organizations providing services under EDWAA are generally mandated to devote most of their funding and available resources to worker retraining and support services payments. Readjustment, which includes job placement and other activities involving employers, often receives relatively less funding. Officials at both program sites said they would like greater flexibility in spending so they could appropriately serve each participant rather than be limited by spending restrictions. A worker may access EDWAA in several ways, either through an employer, a local Unemployment Insurance (UI) office (if the person is currently receiving UI), or other sources. TPIC officials said that since they are trying to reach the long-term unemployed, the DWP partners are also supposed to obtain names of individuals on UI or other public assistance rolls and contact them to determine their eligibility for EDWAA.
In addition, TPIC officials said their program has established relationships with local community agencies so that these organizations can refer potentially eligible individuals to TPIC. To be eligible for EDWAA, a worker must have received a termination notice, currently receive UI, or have exhausted his or her UI benefits. At both of the sites that we visited, PIC staff make eligibility certifications. At NTPIC, the staff use an automated certification process, which determines immediate eligibility; this is available to workers at any of NTPIC’s 11 county centers. At TPIC, staff can immediately certify someone’s eligibility at one of DWP’s three centers. Once certified as eligible, program staff test a worker’s skills and interests, and a caseworker determines what services are needed to find the worker employment. At both of the program sites we visited, workers are tested for math, basic language skills, and career preferences. At NTPIC, these services are provided by caseworkers at 3 of the 11 centers. These tests take about 4 hours and are part of a 2-week transition class. EDWAA participants are not required, however, to participate in the entire class. At TPIC, assessment and related services are provided by the DWP partners at any of the three centers as a part of an 8-day transition class. TPIC officials said the class provides stress reduction techniques and other assistance to the dislocated workers; however, officials are considering shortening the length of the class. On the basis of these test results, the participant, in consultation with a caseworker, decides upon a strategy for getting work. At NTPIC, program participants can work with any of 20 caseworkers located among the 11 centers. The participant and the caseworker develop an Individual Service Strategy (ISS), which lists the participant’s current and desired skills and states how those skills will be developed (for example, whether significant retraining is required). 
Both the participant and the caseworker sign the ISS; according to NTPIC officials, this helps the participant take the program and training seriously. TPIC’s DWP caseworkers work out of the three centers discussed above. The participant and the DWP caseworker develop a training action plan, which is similar to the ISS. TPIC also requires that, before deciding which skills to pursue, the participant conduct informational interviews with three employers to determine whether those skills will make him or her more employable. If the participant requires classroom training to find employment, he or she will consult with the caseworker on training options. At both of these sites, most classroom training is provided by local educational institutions such as community colleges or vocational schools. These educational institutions are supposed to offer training for high-demand occupations, the determination of which should be based on analysis of available labor market information and, in some cases, business advisory committees. At NTPIC, a participant may attend any 1 of 18 local educational institutions. At TPIC, the participant may attend any institution, as long as TPIC approves the institution and it offers training in high-demand occupations. Program officials at both sites said that, although the caseworker provides input into this decision, which school to attend is ultimately the participant’s decision. If necessary, NTPIC staff teach classes for achieving a high school diploma, basic skills, or office skills training. NTPIC officials said that they had to offer this training because it was either not available to their participants from other local institutions or not affordable. In cases where training may not be available immediately, NTPIC also tries to enroll participants in “pretraining” classes for computer or office skills.
Often these classes are taught at NTPIC’s main office by NTPIC staff who are taken off their regular duties to teach them. NTPIC program officials said that this is very important to do so that participants do not drop out of the program before training begins. NTPIC officials also noted that, because of the geographic area covered by NTPIC, it operates a van to transport participants to and from the assessment and, if possible, training. In addition, TPIC’s DWP partners offer basic computer classes to provide immediate access to training. Throughout the classroom training, caseworkers monitor the participants’ progress through grade or other reports submitted by the educational institutions. The service providers we met with did not believe that these monitoring requirements were onerous. The EDWAA program allows employers to provide OJT to a dislocated worker instead of or in addition to classroom training. If OJT is determined to be appropriate for the participant in his or her ISS or similar assessment, the caseworkers may contact an employer and ask that employer to hire the participant and provide training. In this case, the employer could be reimbursed for up to 50 percent of the participant’s wages during the training period (which is limited to 499 hours or 6 months). To participate, the employer must complete an OJT training agreement, which specifies, among other things, what training is to be provided, the duration of the training, the number of participants to be trained, wage rates to be paid, the rate of reimbursement, and a description of what the participant will learn. The EDWAA regulations stipulate that the employer must maintain and make available time and attendance reports and other records to support amounts reimbursed under these contracts.
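The reimbursement arithmetic described above can be sketched as follows. This is an illustrative calculation only; the $6.00 hourly wage and the hours used are hypothetical examples, not figures from an actual OJT contract:

```python
# Illustrative sketch of the OJT wage reimbursement described in the text:
# an employer may be reimbursed for up to 50 percent of a participant's
# wages during a training period limited to 499 hours or 6 months.

MAX_TRAINING_HOURS = 499
REIMBURSEMENT_SHARE = 0.50

def ojt_reimbursement(hourly_wage, training_hours):
    """Return the employer's maximum reimbursement for an OJT period."""
    eligible_hours = min(training_hours, MAX_TRAINING_HOURS)
    return REIMBURSEMENT_SHARE * hourly_wage * eligible_hours

# A hypothetical trainee paid $6.00 an hour for the full 499 training hours:
print(f"${ojt_reimbursement(6.00, 499):,.2f}")  # $1,497.00
```

The cap means that training beyond 499 hours earns no additional reimbursement, which is one reason, as discussed below, that some sites question whether the paperwork is worth the benefit to employers.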
If an employer wanted to hire a specific individual and train him or her through OJT, the employer would have to contact the site providing EDWAA services and request this option; in that case, the site would conduct an assessment to determine whether OJT was appropriate. The employer could not hire the individual until this assessment was completed and it was determined that OJT was appropriate. Federal Labor officials said it is up to the local site to determine whether OJT is beneficial for the program participants. They said OJT provision depends upon the local labor market and whether OJT is the best way to get dislocated workers back to work. NTPIC officials said they believe OJT is one of the best ways to do training, since the employer is training the employee as needed. Last year, however, NTPIC only had two OJT contracts under EDWAA; officials said the significant amount of information the employers must provide and the detailed procedures they must follow discourage employers from participating in this aspect of EDWAA. On the other hand, TPIC officials believed that OJT was not necessary for dislocated workers. Although TPIC allows OJT under EDWAA, TPIC staff do not actively market it to employers, and no EDWAA OJT contracts were let last year. According to documentation provided by NTPIC officials, to participate in OJT, employers must provide significant amounts of information about their operations, such as the services they provide, the number of employees on board, their layoff status, their workers’ compensation insurance policy information, any previous experience with OJT under this program, a written job description of the position for which the training will be provided, and the grievance procedure available to workers. The employer also must be willing to allow a tour by NTPIC officials to assess the training site.
The employer must also meet other criteria to be allowed to provide the training, such as a minimum wage for the trainee (between $5 and $7 an hour). Federal Labor officials said that accountability is very important in this aspect of EDWAA because traditionally employers have misused programs that provided them funding to train specific populations of workers. They said that states and local programs, however, often develop more requirements for OJT than are federally mandated. We spoke with several employers who had hired EDWAA participants in the last year. None of them were aware that they could be eligible for wage reimbursement under the OJT part of the program. Before completion of the training (and after assessment if retraining was not required), caseworkers start to work with the participants to prepare them for reemployment. The actual activities the caseworker conducts vary by participant and location but may include assistance with resume writing and interviewing, providing participants with job leads, or setting up interviews for them. NTPIC and TPIC officials said that, typically, dislocated workers are very assertive in looking for jobs; as a result, they may need less job placement assistance than other unemployed individuals. The participants we spoke with who had received training through these programs agreed; they said that they had found their current jobs through classified ads and not through direct referrals from caseworkers. Once a participant has been placed, the NTPIC and DWP caseworkers continuously monitor the participant for up to 6 months after termination. Thirteen weeks after a participant has been hired, TPIC staff send the participant a survey to obtain job and wage information and ask about the quality of services provided. At NTPIC, the caseworkers contact the individual and request similar information. In Tennessee’s program, the University of Memphis follows up with the participant after 26 weeks. 
TPIC officials said they are considering instituting a 1-year follow-up because a 13-week follow-up is often not a good indicator of long-term employment. Normally, the staff do not contact the employer who hired the participant. The small employers we interviewed who had hired EDWAA participants confirmed that program staff had not contacted them. Federal and state Labor officials as well as local program officials we interviewed said a strong relationship with employers was critical to the program’s success. In addition, the establishment of PICs to provide policy direction and oversee the program on the local level and the requirement that business representatives make up the majority of each PIC clearly point to the importance of the employer for a successful program. Other than PIC representation, however, the law says nothing about how the local organizations should work with employers in this program and places no emphasis on small employers. Moreover, the EDWAA regulations mandate that most of the resources be spent on workers’ rather than employers’ needs. This, along with the significant flexibility allowed at the local level, makes it difficult to determine whether the local efforts to involve employers are effectively reaching small employers. Data are not routinely collected at the national or local level on the characteristics of the employers served through EDWAA. For the sites we visited, we requested a review of program records for program year 1994 to provide information on the employers who had hired EDWAA participants. According to officials at both sites, they do not specifically target small employers because most of the employers in these geographic locations are small. Employer data provided by NTPIC indicated that about 44 percent of EDWAA placements in program year 1994 were to employers with fewer than 100 employees. Program staff at TPIC could not provide comparable data but believed that most of TPIC’s placements were to small employers.
Officials at both of the local program sites emphasized the activities they undertook to inform employers about EDWAA services, such as meeting with local Chambers of Commerce or other business groups. At NTPIC, the director noted that the decentralized organization of the caseworkers allows it to maintain close relationships with local businesses. In addition, NTPIC tries to hire local individuals to staff the county centers, believing that these individuals better understand local labor market needs. NTPIC also offers various services, such as stress management and drug testing, for a fee to employers. These services provide funding for additional services (such as the van), and they also help NTPIC inform employers of its job placement services. At TPIC, officials stressed the employer involvement they achieved through the use of the DWP partners. For example, the Urban League has strong ties to local employers, and the use of the Urban League for EDWAA services takes advantage of that relationship. Officials also said that they have a Dislocated Worker Committee consisting of business representatives; this committee monitors the program and ensures that it meets the needs of local employers. TPIC also encourages its DWP partners to establish Business Advisory Committees, which review the training curriculum and help ensure that the appropriate skills are provided. The program sites also carry out individual job placement activities, which vary depending upon the needs of the participants and the views of the local organization. At both sites we visited, caseworkers said that they review newspaper ads for possible job leads for participants. Sometimes they call the employers to see what kind of skills the employer is looking for. Or, employers may call the caseworkers to request possible candidates. At NTPIC, caseworkers maintain requests from local employers for job applicants. 
One caseworker said NTPIC wanted employers to view it as they would any other employment agency. At TPIC, the DWP partners are required to hold job fairs and contact employers, but the participants are expected to do the majority of the job search. Program officials said they emphasize a “self-directed job search” approach that focuses on providing participants the tools they need to find jobs on their own rather than finding jobs for them. According to the TPIC director, this is because the goal of the program is to help these individuals obtain self-sufficiency and skills for gaining long-term employment. TPIC officials believed such an approach was the most efficient for most dislocated workers. Despite the sites’ efforts to involve employers in the program, the small employers we interviewed who had hired EDWAA participants lacked knowledge about the program or held negative opinions about federal training programs in general, either of which appeared to limit their participation. Most of the employers we spoke with were generally not aware of the EDWAA program and, as a result, did not actively seek EDWAA participants for job openings. In one case, the employer knew about EDWAA because it provides classroom training at its location in which EDWAA participants occasionally take part. This employer said that the EDWAA participant hired was in one of these classes; the employer does not routinely contact the EDWAA program for job candidates. In several other instances, the employers knew that the workers they had hired had received training at a local educational institution, but they did not know the federal government provided funding for the training. As a result, they also did not actively seek EDWAA participants through these programs. Most of the employers we spoke with, however, knew nothing about the program before hiring a participant and, in most cases, learned nothing about it even afterward.
Generally, the employers said they had hired the participants after the individuals responded to a newspaper ad. In a few cases, the employers did find out about the program after hiring the participant. One employer said he wished he knew of more qualified candidates from this program because he would hire them. He had not contacted program staff, however, nor had they contacted him about additional participants. The other employers, however, did not know that the person they had hired had received training through this program. In fact, one employer who had been trying to work with the local community college to set up a training program for new workers said he wished he knew more about the program. Many of these employers had strong negative opinions about the quality of federal training programs. These opinions prevailed even though all of the employers said they were very pleased with the quality of the EDWAA participants they had hired. One employer noted that the training provided in federal programs takes too long. His staffing needs are much more immediate, and he cannot wait 6 months to a year for a trained worker. Another employer said he thought larger employers, who do not need as much help with training, seem to get all the attention. In addition, other employers said they believed that participation in these programs required applications of various kinds or other information that they did not want to provide. Finally, employers questioned the effectiveness of the training provided in these programs and the adequacy of program participants’ skills. NTPIC officials said they believed they should do whatever they can to inform employers of the program. However, the current spending limitations make it difficult to provide adequate services to participants and employers simultaneously. Most of NTPIC’s current EDWAA participants need a great deal of basic support services. 
This means that relatively fewer resources can be devoted to readjustment (job search and placement-related activities) because funding restrictions in the law require that a certain amount of the funding be spent on retraining. NTPIC officials said they would like additional flexibility not only to provide participants with the particular services they need, but also to carry out additional activities to inform small employers of the program. TPIC officials said small employers’ low awareness of the program attested to the strength of the self-directed job search they advocate for the EDWAA participants. Furthermore, they said participants often do not want prospective employers to know that they received training through a federal training program, and that this preference should be respected. Officials said that they did not believe reaching small employers was a problem; if the employers were pleased with the EDWAA participants, then the program was successful. They noted that they perhaps would do more overall marketing of the program if they had additional funding but were not sure how these activities would be funded under the existing spending limitations. Labor has few mandated program performance requirements that local sites must meet to continue to provide EDWAA services. Both of the program sites we visited used, or were in the process of developing, additional outcome requirements that they believed were more accurate indications of program effectiveness than federally required measures. Most of these indicators were participant focused rather than employer focused. In addition, Labor’s direct monitoring of the program is minimal; instead, it depends on the states to evaluate program effectiveness at the local level. State JTPA officials said they also allow local organizations great latitude in operating EDWAA. 
JTPA requires that each state submit a job training plan every 2 years that lays out EDWAA and other JTPA program goals and the activities to be done to meet those goals. Local and state program officials that we spoke with questioned the effectiveness of these plans because events change dramatically in a 2-year period and ensuring that these plans reflect those changes is difficult. The only performance standard for EDWAA is an entered employment rate, which Labor has set at 67 percent. At the national level, this rate has been achieved for the last several years. According to Labor officials, the lack of program performance measures is due to the emphasis on performance measures and program activity for Labor programs that target other types of unemployed workers, who are more difficult to reemploy. For national reporting purposes, Labor also collects program information. These program and budget data are routinely collected by the state agencies for transmittal to the federal Department of Labor. The program sites in our study, as well as the states, were trying to use additional measures to assess program performance. Tennessee officials said that the state is developing outcome measures for all state Labor programs. Meanwhile, NTPIC sets placement wage goals and continually increasing placement rate goals for caseworkers to meet; salary increases for NTPIC caseworkers are tied to these placement rate and wage goals. Furthermore, NTPIC tries to ensure that participants reach at least 75 percent of their prior wages within 2 years of being reemployed. NTPIC also does a customer survey, which, along with the follow-up by the University of Memphis, helps NTPIC track program participants’ success. NTPIC does not currently survey employers served through the program. Oregon has also instituted additional measures to monitor the success of EDWAA. 
The state recently instituted a goal that dislocated workers reach 90 percent of their prior wages within 2 years of reemployment. In addition, TPIC’s DWP contract specifies particular characteristics of individuals to be targeted, served, and placed and the wage level acceptable for placement. The contract also requires an 80-percent entered employment rate, which is higher than the Labor requirement. The DWP contract also includes general objectives to be met, such as expanded recruitment, increased earnings recovery and high wage placement, expanded participant choice of services and training, reduced unemployment time, improved geographic access to services, increased employer involvement, and improved evaluation capacity. As previously stated, TPIC officials are also considering doing a 1-year follow-up with participants. TPIC does survey participants to assess their satisfaction with the services provided; although it has discussed surveying employers served through the program, it has not yet developed this survey. Most monitoring of the EDWAA program is done by the state, according to federal Labor officials. Typically, however, the state is not involved in local sites’ day-to-day operations and gives them great latitude. State JTPA officials said the states conduct yearly monitoring reviews. Tennessee JTPA officials said they provide technical assistance and get involved only if the site has problems. Otherwise, they believe the local organizations know how best to run the programs. Oregon JTPA officials agreed, saying that because they respect local control, they do not get involved in contract monitoring or service delivery issues. Officials at both of the local program sites agreed, saying that the activities of their state administrations were helpful when needed but, generally, the state did not interfere with local operations. Almost every state operated one or more economic development programs as of 1995. 
These programs often provided funding for employers to deliver customized training to their existing workforces, and in some cases to new hires, on the premise that a skilled workforce is a major part of job retention and overall economic development. As of 1995, 47 states provided over $350 million for training programs in which employers retrained their existing workers and in some cases trained new hires. Typically, these programs were funded from general revenues; seven states, however, funded their economic development programs through mandatory employer payroll taxes. To participate in these programs, employers often must meet specific criteria and comply with rigorous administrative and other program requirements. The funding received, however, can be quite significant and is a strong incentive for employers to comply with program requirements. In addition, some states have developed ways to make these programs more accessible to small employers. This appendix describes two state programs that serve small employers, the efforts they have made to foster small employer participation, and how they measure program effectiveness. These programs are California’s Employment Training Panel (ETP) and New Jersey’s Office of Customized Training (OCT). (App. I describes how we selected these two programs.) Both of these state programs aim to meet their objectives by working directly with the employer community; that is, they work with employers to encourage the training of existing workers and, in some cases, new hires. Although neither of these programs provides training instruction, they receive and review employer requests for training funds, work with employers to assess training needs, provide training funds, and ensure that worker training is appropriate. Both programs require a significant amount of employer involvement; employers typically define their own training needs, develop the type of training plans they feel are best, and select the training vendors. 
In addition, to participate in this program, employers or groups of employers working in conjunction with unions or other entities (called consortia) must adhere to significant administrative and other program requirements. Program officials noted that such requirements are mandated by the program legislation for accountability purposes. Employers or consortia that wish to receive ETP funding for training must comply with rigorous program requirements. They must apply directly to ETP and, in the application, provide information on the (1) main activity of the business or businesses involved in the training project; (2) reason for training funds; (3) type of training to be done and the number of trainees; (4) approximate cost of the training project, including administrative expenses; and (5) career potential and substantial likelihood of long-term job security offered by the employer(s) involved in the training project. Once the application is completed, applicants may have to appear before ETP’s eight-member governing board, which makes the funding decision on any proposed training project of $100,000 or more. Program officials said the application process takes about 5 months to complete. ETP allows the employers to provide training directly or to contract it out to another organization. It also requires employers to sponsor worker training during normal work hours. Allowable training may include a combination of classroom, laboratory, and structured on-site training of at least 40 hours. The training provided under ETP projects has included training in areas such as office automation, management skills, statistical process control, total quality management, customer service, and production technique courses. 
Small employers we interviewed who were participating in this program said these paperwork requirements were quite burdensome because of the level of detail required and the great deal of time required to complete the application. This did not discourage the employers from participating, however, because they recouped these costs in the training funds ultimately provided. They said, however, that other small employers may not be willing to endure the process to receive the training funds. Program officials acknowledged that the amount and detailed nature of the information required may discourage small employers from participating in the program because they may not want to spend the time necessary to apply for the funding, even though they may recoup their costs in the end. ETP officials said that about 37 employers terminated their ETP training projects in fiscal year 1995 because of the amount of information required to participate in the program and the length of the application process. Some employers also noted that the requirement to provide training during normal work hours was a problem: either they had trouble getting their workers to take time away from their duties to attend training, or having workers in training during normal work hours was too costly. Like ETP, OCT requires single employers or groups of employers acting as consortia to apply directly to OCT for training project funds. OCT’s mandate is to focus on projects that will (1) substantially enhance workers’ skills and earning power, (2) prevent job loss, (3) not replace or duplicate approved apprenticeship programs, and (4) not result in trainees’ displacing current workers. Program officials said, however, that the program is very flexible about decisions made on the types of training projects funded. According to OCT’s guidelines, training may be provided by the employers directly or be contracted out to another organization. 
Like ETP, OCT encourages employers to allow training to take place during normal work hours. Typically, OCT-sponsored training occurs at the work place with some classroom instruction. The training funded by OCT has included technical instruction, remedial education, and occupational health and safety. In general, participating small employers we spoke with did not believe the requirement to allow training during normal work hours was a problem. Furthermore, a consortium official said that training during work hours shows employer commitment, which is essential for program success. To apply, employers must prepare a two-part application. The first part requires that applicants describe (1) the training-related problems experienced by the employer(s) involved in the project; (2) how the proposed training will address the problems; (3) the impact if state financial assistance is not provided; (4) the type of training proposed, the trainer, and the number and type of trainees; and (5) an estimate of the total training cost. If the first part is approved by OCT staff, applicants then submit a second part, which requires a more detailed training plan, a line-item budget explaining all training-related costs, and the employer’s overall human resource objectives. OCT staff provide technical assistance to help applicants prepare the second part of the application. This part of the application is then reviewed by OCT staff; the director of the Workforce Development Partnership program; and the assistant commissioner, deputy commissioner, and commissioner of New Jersey’s Department of Labor. In addition, staff from the New Jersey Department of Education review the qualifications of the proposed trainer. Program officials said the entire application process can take 6 months or more. As in ETP, some small employers that participated in the program said the amount of information required and the time it took to complete the application were costly and burdensome. 
One employer said the entire process took over 2 years from start to finish and over a year for the application process alone. During the training projects (which usually last 2 years for ETP and 1 year for OCT), program staff routinely visit the employers to monitor project activities. For these visits, employers must provide information such as the following for program review: training schedules, curricula, record-keeping procedures used, daily documentation of training, wage invoices, and subagreements. In addition, program staff may interview the trainers and trainees; observe a training session; and ensure that budgeted and required training staff, equipment, supplies, and materials are available. OCT also requires employers to prepare a close-out report and an impact statement. The close-out report compares what was planned for the training project—such as enrollments, job creation, and job retention—with the actual figures. The impact statement is prepared 6 months after training and describes the training project’s impact on overall operations. ETP requires that a trainee complete training and a 90-day employment retention period before it reimburses an employer. Employers, however, may elect to receive progress payments. For example, employers are reimbursed 25 percent of the training cost when trainees are officially enrolled in the training courses, another 50 percent when trainees complete the training, and the final 25 percent when trainees have been retained in the jobs for which they were trained and at the agreed-upon wage for at least 90 days. OCT reimburses employers for their costs throughout the training project, although these payments have no set schedule. Generally, the employers are reimbursed when they submit invoices to OCT, but employers do not receive their final reimbursement until they complete the close-out report. These programs were designed to work with employers for economic development purposes. 
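ETP's optional progress-payment schedule amounts to simple milestone arithmetic. The sketch below illustrates it; the function and variable names are our own, not part of the actual program:

```python
# Sketch of ETP's optional progress-payment schedule: 25 percent at
# enrollment, 50 percent at training completion, and the final 25
# percent after the 90-day retention period at the agreed-upon wage.
# Names and structure are illustrative, not part of the actual program.

def etp_progress_payments(training_cost):
    """Return the three milestone payments for a given training cost."""
    return {
        "enrollment": 0.25 * training_cost,
        "completion": 0.50 * training_cost,
        "retention_90_days": 0.25 * training_cost,
    }

payments = etp_progress_payments(10_000)
print(payments["enrollment"])   # 2500.0
print(sum(payments.values()))   # 10000.0 -- the milestones cover the full cost
```

The alternative, as described above, is a single lump-sum reimbursement after training and the 90-day retention period are both complete.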
As such, officials at both programs acknowledged that small employers are an important part of the program. In 1994, ETP’s mandate was amended to focus on employers with 250 or fewer employees. In fiscal year 1995, 72 percent of the employers served through ETP projects employed 100 or fewer employees. In fiscal year 1995, about 48 percent of the employers served through OCT projects had 100 or fewer employees. Program officials at both programs acknowledged that small employers may have a more difficult time participating in their training programs because of the administrative and program requirements. Program officials said they realize that small employers often lack human resources personnel who can spend time completing paperwork and other requirements. They also noted that small employers often lack the expertise and resources to identify their training needs and develop curricula, which are often needed before they can participate in the programs. These programs, however, do not have the additional resources needed to address the special needs of all small employers. So both programs actively encourage the use of consortia not only to help small employers identify their training needs so they can access the program, but also to reduce participation costs by helping with the application and other administrative requirements. In fiscal year 1995, 95 percent of all employers served by ETP were served by consortia, and most had fewer than 100 employees. Table III.2 shows that 74 percent of the employers receiving ETP training projects through consortia had fewer than 100 employees. In ETP, a variety of public and private groups may serve as consortia, including (1) employer associations, (2) private training agencies, (3) joint apprenticeship training committees, (4) educational institutions, (5) private industry councils, and (6) primary employers acting on behalf of other employers. 
Of the different types of consortia used by small employers that received ETP training projects, most were private training agencies and educational institutions, such as community colleges. We visited two consortia in these categories—the Foundation for Educational Achievement (FEA) (a private training agency) and Glendale Community College (GCC). GCC is ETP’s largest consortium. Together with FEA, it served 23 percent of all of ETP’s consortia employers. Almost half of the employers served by FEA and GCC had 50 or fewer workers. FEA officials said they targeted employers with fewer than 50 workers because they typically cannot access ETP funds directly. These employers, according to FEA, face inherent limitations, such as limited budgets and staffs, very small numbers of trainees, and the lack of time and resources to develop their own training projects. In fiscal year 1995, OCT granted training projects to five consortia, which served about half of all the employers participating in OCT. In addition, as shown in table III.3, 73 percent of those employers served through consortia in OCT had fewer than 100 employees. Provided they are located in New Jersey, consortia in OCT can be formed by one or more (1) educational institutions, (2) individual employers, (3) labor or employer groups, (4) community-based organizations, or (5) training organizations. We interviewed one of OCT’s consortia, National Training Associates (NTA), which provides or procures training for employers who need assistance accessing training. The participating employers we spoke with who were using consortia said the activities performed by the consortia significantly minimized the various costs of program participation. For example, in ETP, the consortia provided employers with help in completing the application and other paperwork and, in some cases, did the paperwork for the employers. 
Other services performed by the consortia included job placement for new-hire trainees and follow-up activities required after training. Employers we spoke with who were participating through these consortia noted that they did not want to spend the amount of time necessary to fill out the paperwork and that using consortia significantly reduced that time. In OCT, consortia activities included identifying training needs, helping to prepare applications, developing curricula, administering the contract, and carrying out the follow-up activities after training is completed. In some cases, the consortia are paid for their efforts; for example, NTA receives an agreed-upon portion of the training funds provided to the employers. NTA said one of the key benefits it provides to small employers is doing the paperwork for them. NTA officials believed that the primary hurdles to participation in such programs are the application and monitoring procedures. They have seen employers walk away from a nearly completed application process, exasperated by rule changes and the need to continually revise proposals. They also said that protracted review and approval cycles greatly compound the problem. The participating employer we spoke with agreed, saying the use of this consortium significantly reduced the time spent completing the paperwork. OCT also has developed networks with a variety of organizations to enhance the services provided to small employers. For example, at OCT’s request, the New Jersey Institute of Technology performs an overall operation assessment of employers applying for OCT funding. The purpose of the assessment is to identify potential problems—beyond training—that may hinder an employer’s operation. Such problems could include inadequate marketing strategies or management procedures. 
Because the consortia also contact employers about participation in these programs, often help them develop training curricula, and, in some cases, conduct or procure the training for the employer, the consortia reduce many of the problems small employers have in identifying training needs and available resources. One employer said he did not know of the program until he was contacted by NTA, one of OCT’s consortia. Furthermore, he said he participated only because of the consortium’s help in complying with program requirements. He noted that other employers that have not been contacted for participation in this program may not know of its existence. Several employers said that, without consortia assistance, they might not have been able to develop as good a training plan because it would have taken too much time and been too costly. Although these programs strongly encourage consortia to help small employers with the application and monitoring procedures, program officials realize that not every employer can be involved with consortia. Because of this, the programs provide significant technical assistance to applicants at several points during the training project. First, program staff visit employers and walk them through the application. Second, during the periodic visits to employers during the training period, program staff provide any necessary assistance to ensure that the training is adequate. Finally, the programs also provide any technical assistance needed to comply with any final requirements to receive the remaining training funds. According to several employers we spoke with who were participating in the programs, this technical assistance reduced the time they spent on the paperwork. In addition, ETP officials said a 1995 survey of 114 participating employers showed that employers favorably evaluated the technical assistance provided by ETP staff. 
Program officials said the greatest benefit of these programs is that they achieve economic development by creating new jobs and maintaining existing employment. They also noted that critical to their programs’ success was the significant employer involvement in the determination, conduct, and impact of training. Both programs have been evaluated by outside sources and been found to have contributed to local economic development. (We did not independently verify the program results found by these researchers.) Several studies conducted on the long-term effects of ETP’s training program are the bases for judging its success, according to ETP officials. One of the most recent studies concluded that ETP’s training had a positive impact on trainees and the California economy. Researchers found that workers who completed ETP training were more likely to remain in the California labor market than either trainees who dropped out or randomly selected workers from similar industries. In addition, ETP trainees who completed training had larger earnings increases than workers who dropped out or comparable workers. For example, the earnings of ETP “retrainees,” trained in 1991, increased by $330, and the earnings of new-hire trainees increased by $2,650 a year after training. Comparable workers’ earnings decreased by $500 during the same time period. Researchers also found that ETP training contributed to reduced state unemployment and increased worker productivity, both of which positively affected the California economy. The number of jobs that OCT has created or saved is the basis for judging its success, according to OCT officials. They said that training funds awarded in fiscal year 1995 involved eight employers who either relocated to New Jersey or were new businesses in the state; together they created 3,231 new jobs. In addition, OCT provided funds to three employers that expanded their New Jersey facilities significantly, creating 532 jobs. 
In addition, a recent study that evaluated the Workforce Development Partnership Program also examined OCT’s operations. Researchers concluded that employer interest in OCT was increasing and that OCT’s training funds were increasingly becoming a part of incentive packages provided to businesses. Apprenticeship combines theoretical instruction with structured OJT, leading to certification of workers as journeymen. In the United States, apprenticeship is primarily a private-sector program operated by employers, employer associations, or jointly by employers and labor unions. In this way, most apprenticeship programs are operated like consortia, where groups of employers or employers and unions join together to provide training. Apprenticeship programs can be registered with the federal or state government but many nonregistered programs also exist. This appendix describes how two federally registered apprenticeship programs provide training to apprentices, how the programs serve small employers, and how their consortium-like structures benefit these employers. Federal participation in apprenticeship training began in 1934, when the Secretary of Labor established the Federal Committee on Apprenticeship to serve as the national policy-recommending body on apprenticeship. Three years later, the National Apprenticeship Act (also known as the Fitzgerald Act) was passed to protect the welfare of apprentices, promote the establishment of apprenticeship programs, bring together employers and labor to create apprenticeship programs, and cooperate with state agencies in formulating apprenticeship standards. The Department of Labor’s Bureau of Apprenticeship and Training (BAT) is responsible for carrying out these goals. With a budget of almost $16 million in fiscal year 1995, BAT staff provide support services to program sponsors, promote apprenticeship, enforce equal opportunity standards, and register apprentices in 23 states. 
The Secretary of Labor also recognizes the authority of State Apprenticeship Councils to register local apprenticeship programs in conformance with federal standards in 27 states, the District of Columbia, Puerto Rico, and the Virgin Islands. Under the federal Davis-Bacon Act, employers in the construction industry may pay registered apprentices less than the prevailing wage rate on federally funded construction projects. As we reported in 1992, this is a major incentive for registering an apprenticeship program with the federal government. Some states and localities have similar regulations permitting reduced apprenticeship wage rates. In fiscal year 1995, over 355,000 apprentices participated in 34,000 registered apprenticeship programs. Apprenticeable occupations numbered over 800, but two-thirds of all apprentices participated in 25 occupations—mostly in the construction and manufacturing industries. Although some apprentices were female or minority, the majority were white males between the ages of 22 and 29. BAT officials said most apprentices were registered in large programs sponsored jointly by employers and labor unions, although many small employers also sponsored apprenticeship programs. To become a registered program, employer sponsors must design a program that meets BAT requirements. For example, apprenticeship training must include at least 144 hours of job-related classroom instruction a year and at least 2,000 hours of supervised OJT. Apprentice wages must be increased at least every 6 months until the apprentice reaches 85 to 90 percent of the rate paid a journey worker in the occupation. An apprentice who completes the program receives a certificate of completion—a portable credential showing that the person has attained certain competencies that employers understand. Employers do not have to retain the apprentice upon completion of the training. 
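The registration thresholds just described (at least 144 hours of job-related classroom instruction a year, at least 2,000 hours of supervised OJT, and a wage schedule rising to 85 to 90 percent of the journey-worker rate) can be expressed as a simple check. This is our illustrative sketch, not BAT's actual review procedure:

```python
# Sketch of the minimum federal registration thresholds described above.
# Illustrative only; this is not BAT's actual review procedure, and the
# function name is our own.

def meets_minimum_standards(classroom_hours_per_year, ojt_hours,
                            final_wage_pct_of_journey_rate):
    """Check a proposed program against the minimum thresholds:
    144+ classroom hours a year, 2,000+ supervised OJT hours, and a
    wage schedule ending at 85-90 percent of the journey-worker rate."""
    return (classroom_hours_per_year >= 144
            and ojt_hours >= 2000
            and 85 <= final_wage_pct_of_journey_rate <= 90)

print(meets_minimum_standards(150, 2000, 85))   # True
print(meets_minimum_standards(100, 2000, 85))   # False: too little classroom time
```

In practice, of course, BAT review covers far more than these numeric thresholds, including the equal employment opportunity, supervision, and evaluation requirements discussed below.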
Employer sponsors must also comply with equal employment opportunity rules to prevent discrimination. Other requirements cover, for example, age, supervision, and evaluation. To register an apprentice, employer sponsors must develop a written agreement containing basic information on the apprentice and the program, such as the wage schedule, the terms of the apprenticeship, and classroom training hours. This apprenticeship agreement is signed by the employer and the apprentice and filed with BAT. Other than this agreement, program sponsors have no statistical reporting requirements. We visited two federally registered apprenticeship programs that trained apprentices in several of the top 25 apprenticeable occupations. One program was a nonunion operation administered by the Tooling and Manufacturing Association (TMA) in Park Ridge, Illinois. The other was a union program administered by the Employing Bricklayers Association of Delaware Valley (EBA) in Plymouth Meeting, Pennsylvania. (See app. I for how we selected these sites.) Officials at both programs said the major goal for the programs was to ensure a pool of qualified workers for their industry and geographic areas because younger workers were needed to replace an aging workforce and new and existing workers had to be trained to keep up with the latest technology. TMA, a trade association of 1,485 tool and die and machining employers in the greater Chicago area, established an apprenticeship program in 1934. In addition to apprenticeship training, TMA offers many services to its members, such as group medical insurance, payroll services, 401(k) plans, and UI administrative services. TMA also collects and provides local labor market information and offers various types of training and management seminars. According to TMA officials, one of the primary reasons members joined was the availability of the apprenticeship training. In 1995, TMA had 600 registered apprentices working for 290 of its member employers. 
Most of these apprentices were white men not older than 25. This number does not include apprentices who had finished the classroom portion of the training and were still accumulating OJT hours, which TMA does not track. TMA officials said one of the major goals of the apprenticeship program was to reduce the shortage of precision metalworkers. Getting new or young workers to enter the industry is difficult. The industry suffers from a negative image, despite the advanced skills required and the competitive wages offered. Employers in the area have difficulty finding entry-level and skilled workers. The apprenticeship program is thus an effective way for employers to obtain new workers and ensure that they are adequately trained. TMA’s employer membership is mostly nonunion, and TMA’s operating expenses are funded primarily through member dues. Employers who join TMA must pay a $75 initiation fee and annual dues based on a sliding scale, depending on the number of toolroom employees and total employees. Average dues are about $750 per employer. Employers also pay tuition fees of about $375 a year for each apprentice. Tuition fees do not cover program costs, however, and TMA loses about $400 per student; as a result, employer dues are used to subsidize apprenticeship activities. Officials and employers believe it is very important to maintain the program because it benefits all employers in the industry. EBA was created in 1984 by a group of union construction contractors. Today, EBA is an independent association of 60 union employers, most of whom are subcontractors hiring bricklayers and laborers in the greater Philadelphia area. EBA’s apprenticeship program was registered in 1947 and originally sponsored by the local union’s Joint Apprenticeship Committee (JAC). Besides overseeing the apprenticeship program, EBA also negotiates the collective bargaining agreements and manages all benefit trust plans.
As of 1996, EBA had 36 registered apprentices, 18 of whom were attending classes; the other half had completed the classroom training but were still accumulating OJT hours. The apprentices were all men with an average age of 26; 46 percent were minorities. EBA officials said the apprenticeship program serves employers’ needs for a skilled workforce. Because the construction industry is cyclical and depends on economic, seasonal, and other factors, EBA’s apprenticeship program also helps regulate the supply of incoming workers by accepting apprentices only when existing workers have enough work. Unlike TMA, EBA accepts apprentices who are not currently employed by a member employer. For example, only one-third of the current first-year apprentices are employed with EBA member employers. Finally, program officials and participating employers said another major benefit of participating in the EBA program is that under the Davis-Bacon Act, employers may pay registered apprentices in training less than local prevailing wages on federal construction projects. EBA’s funding is provided through employer contributions, which are set by the collective bargaining agreement. Under the agreement, employers are assessed an hourly contribution of 37 cents. Currently, a reserve account is funding apprenticeship, but when this account is depleted, a new fund will pay for apprenticeship training costs. This fund covers promotional efforts and apprenticeship costs, such as tuition, tools, classroom space, and teacher salaries. Apart from these contributions, employers pay no additional direct costs to train apprentices. Both of the programs we visited served employers and workers in similar ways—they screened and accepted apprentices, developed a structured training curriculum, and required employers to provide OJT to the apprentices.
Both programs managed all the administrative details and paperwork for screening, selecting, and registering individual apprentices, as well as retaining certification of the apprenticeship program. They were also responsible for ensuring that the programs complied with equal employment opportunity rules. All participating employers we spoke with said that TMA’s and EBA’s activities reduced the costs they would otherwise incur in finding and training employees on their own. Individuals may learn about apprenticeship in many ways. Program officials said applicants for the TMA and EBA programs generally hear about the programs from friends, family, or vocational education and trade schools. Both programs also advertise their apprenticeship training programs in newspapers and other venues, and employers may contact the associations in search of an apprentice. TMA apprentices must be employed by a member employer to apply for the program. In contrast, EBA accepts candidates who are not currently employed by a member employer. Once an apprentice is accepted, however, EBA tries to place the individual with a member employer. Applicants must have a high school diploma or equivalent, take reading and math tests, and demonstrate good work habits and communication skills. Candidates complete an application form; TMA requires a resume and EBA conducts an interview. EBA also requires that its apprentices have a car and driver’s license. When apprentices are accepted into the EBA program, they must join the union. Once an applicant is approved, the applicant and the employer sign the apprenticeship agreement, which is sent to BAT. EBA also sends a copy to the union and keeps BAT informed of cancellations, dropouts, and completions. TMA employers must complete a simple one-page form with basic information on the apprentice. Program officials and participating employers said this selection process weeds out workers with poor skills or work habits and identifies qualified apprentices.
Participating employers added that relying on TMA and EBA to recruit and screen job applicants saves them significant time and resources. EBA and TMA have developed industry-based curricula related to the needs of member employers and employees. TMA’s curricula cover related theory and design classes on tool and die making, moldmaking, and precision machining. They were developed by experienced employees involved with TMA and correspond to national industry skill standards. The curricula are updated by TMA’s apprenticeship committee, which is made up of experienced journeymen. EBA’s masonry curriculum was originally developed by the JAC and is revised as necessary to meet changing industry needs. In both programs, apprentices attend classes 2 nights a week for 3 years to accumulate the required number of classroom training hours each year. Classes are held at night on the employees’ own time at local community colleges or high schools, where the associations rent classroom space and equipment, and are taught by experienced journey workers who are hired and paid by TMA and EBA. TMA apprentices are simultaneously enrolled in the community college and can earn college credit for the classroom training and, in some cases, for OJT. Toward this end, TMA does all the paperwork and other activities associated with registration and grade distribution. Both programs have attendance policies and have expelled students when necessary. Both programs also provide grades to the apprentices. TMA apprentices who complete the coursework are recognized in a graduation ceremony and receive a certificate. Employers we interviewed believe that the classroom training is well structured and meets their needs because the curricula are industry based. They said that teaching the classes at night on the employees’ own time facilitates their participation for several reasons.
First, they could not afford to pay workers wages while they were training. Second, they could not afford to have workers away from the job during the day because the productivity costs would be prohibitive. Several employers said they would not be able to participate if training were held during the day. In addition, all of the participating employers we interviewed said the costs associated with the apprenticeship program are reasonable and affordable. TMA employers said tuition fees are lower than they would be for the same courses at the local community college. Participating employers in both programs said they lose money in the first years of training but believe that training apprentices is cost-effective in the long run if they retain the worker and that it is the best way to gain a skilled worker. Federal apprenticeship programs must provide apprentices with OJT. TMA’s program requires 8,000 to 10,000 hours of OJT, which takes 4 to 5 years; EBA’s program requires 4,000 hours, which takes 3 to 4 years. OJT is supervised by a journey worker at the job site in both programs. Program officials want apprentices to learn a broad range of appropriate skills. Yet they do not closely monitor OJT because officials at both TMA and EBA believe it is not their role to tell member employers how to train apprentices. TMA does, however, provide a guide to employers that outlines tasks and performance criteria, and EBA recently developed a report card for apprentices to record their activities and the time spent on various tasks, which provides some oversight of OJT. Program officials and all the employers we interviewed believe that OJT is essential to the success of apprenticeship and the education of an apprentice. Participating TMA employers said that providing OJT has some costs, such as lowered productivity of the apprentices and of the supervisors who oversee apprentices.
They noted, however, that they would be incurring these costs anyway, since they cannot find new workers with the necessary skills. They believe that conducting OJT through the apprenticeship program is cheaper and more effective in the long run than doing it on their own. Providing OJT is more difficult in EBA’s program than in TMA’s. As stated before, EBA apprentices need not be employed by a member employer to participate in the program. Often, EBA tries to find employment for apprentices after they have been accepted to the program; however, officials said they currently have apprentices receiving classroom training who are not employed with member employers. EBA officials said finding employers who are willing to take apprentices is sometimes difficult; EBA often places apprentices with the same few employers every year. They said employers may not be willing to incur the additional productivity losses of supervisors who must oversee the apprentices. They also said that employers often will not take apprentices, even at the reduced wages apprentices can be paid, because they believe apprentices’ productivity does not justify those wages. EBA officials said they believe that some of these employers were misusing apprentices (for instance, using them for lower-cost occupations such as laborers rather than bricklayers); however, EBA cannot force employers to take apprentices. EBA officials said the greatest obstacle to employers’ hiring apprentices, however, is the lack of steady work in the industry. Employers noted that, even in the best of times, experienced workers do not have enough work and it would not be fair to those workers to hire apprentices. Furthermore, hiring apprentices and firing them soon after for lack of work would be unfair. Once apprentices complete all the required OJT hours and coursework, they receive a certificate of completion from BAT. TMA also provides a journeyman’s card, which is a local credential.
Under both programs, graduate apprentices often continue working with the same employer. Program officials said employer participation is essential to these apprenticeship programs—not only because employers fund the programs but because the training is based on employer needs. According to TMA officials, one of the primary reasons members join is the availability of apprenticeship training. EBA officials said that apprenticeship training is one of the best ways to ensure a pool of skilled workers in the industry. Although program officials do not routinely collect data on the characteristics of employers served by the apprenticeship programs, both TMA and EBA are dominated by small employers. As a result, they tend to have high small employer participation rates in the apprenticeship program. Of TMA’s 1,485 members, about 80 percent have fewer than 100 employees; median member size is 12 employees. Similarly, most of the 290 employers participating in TMA’s apprenticeship program have fewer than 100 employees. In addition to the 290 employers, many others have employed apprentices who have already finished classroom training and are continuing to earn OJT hours. Furthermore, TMA officials estimate that about 900 of the 1,485 members have toolrooms that require trained employees and that most of these 900 employers have enrolled apprentices at some point but, given their small size, do not need to train apprentices every year. In this manner, a large portion of TMA’s members are currently participating or have participated in the past. According to TMA officials, the lack of interested, qualified workers often prevents employers from participating. We contacted several employers, most of whom were small, who were not using the TMA apprenticeship training. They noted that they did not need to train workers now because all of their workers were trained and skilled. 
They noted, however, that they knew about the TMA program and would use it if they needed to train apprentices. EBA officials estimated that most of the member employers had fewer than 150 employees. Determining their size, however, was difficult because employer size fluctuates greatly in the construction industry: employers maintain a core staff and hire additional employees as needed, laying them off when a specific contract is completed. The largest EBA employer had 200 employees at its peak. EBA said the participation rate in the apprenticeship program has remained fairly constant at 15 to 20 percent of EBA employers. Both small and large employers have participated—usually the same employers every year. EBA officials also noted that changes in technology have affected the rate of participation; the substitution of other building materials for brick has reduced the need for skilled and apprentice masons. Both TMA and EBA are forms of consortia—groups of employers and/or unions working together to obtain training and other benefits to reduce the costs for all involved. The EBA and TMA programs are located in areas with a critical mass of employers that share similar needs in the same industry. This enables TMA and EBA to provide affordable, accessible services to employers who may not otherwise be willing or able to train on their own. The cost and other advantages of these consortia are clear to program officials and participating employers, especially small employers. Participating employers we interviewed said that TMA and EBA reduced many of the economic costs associated with finding qualified employees or training employees. One employer, the owner of a small tool and die operation without a human resources department, told us that when he needs new workers, he has to run a newspaper ad, answer telephone inquiries, and interview candidates. He does not have time for that, since he needs to work on the shop floor along with his employees.
By relying on TMA to identify and screen workers, he can get qualified workers cheaply and easily. TMA employers also reported that being in an association with other employers helps them network and learn about new developments in the industry. Participating employers in both programs said that training apprentices gives them a competitive advantage over other employers and that training costs are outweighed by the long-run benefits of gaining skilled, loyal workers. First, training costs are subsidized by employer dues or other contributions. For example, tuition fees at the community college were higher than TMA’s tuition fee. Second, classroom training is scheduled on the employees’ own time. One employer noted that even though the local community college had a program similar to TMA’s, he would not use it because training was offered only during the day and he could not afford to lose employees’ productivity. Third, programs provide services, such as marketing and outreach to find workers and job screening, that save employers time and money. Finally, EBA officials said small employers benefited by being able to pay apprentices in training lower wages than other employees. Participating employers said they would still provide OJT without these programs, but it would be more expensive and they would not teach apprentices what they learn in class. In addition, program officials and participating employers said the consortium arrangement protected employers who trained from losing workers to other employers. Since all employer members are investing in training, the incentive for an employer to steal trained workers from other employers is reduced. Also, because training costs are minimized, employers had less fear of losing their investment if they did train. Participating employers noted that the consortia eased several barriers common to small employers trying to identify general training needs and sources.
For example, one employer noted that he does not have time to call all the local schools and find out what classes they offer, pore over course catalogs, and fill out registration forms and other paperwork for his employees. He also noted that the community college training included classroom training as well as OJT, which he really did not need. He said that TMA “put it all together,” not only by providing him the training he needed when he needed it, but by saving him countless hours of determining what to teach. He noted that if TMA weren’t there, he probably would not enroll his employees in structured classroom training. Participating employers we spoke with noted that they would not participate in any program that requires lots of paperwork. Having the associations handle all tasks associated with registering apprentices, developing the curricula, and monitoring apprentices’ progress relieves them of having to do this work and makes participation easy. Federally registered apprenticeship programs have no performance requirements, although BAT conducts many activities related to apprenticeship. It maintains apprenticeship agreements and provides national data on apprenticeship training. It reviews compliance with equal opportunity rules, occasionally visits apprenticeship programs, and seeks to identify programs using apprentices incorrectly. However, it has very few ways to enforce standards. We did not assess the effectiveness of BAT’s efforts. TMA does not have performance measures or track apprentices, but officials said enrollment and completion data indicate program performance. Officials said employers continue to enroll apprentices only if they reap benefits, and they pointed to the 600 apprentices as proof of program success. Enrollment of first-year apprentices declined from 259 to 158 between 1989 and 1995, while completion rates increased from 55 to 77 percent during that time.
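Taken together, the enrollment and completion figures above imply smaller but more successful entering classes. The short sketch below is illustrative only; the assumption that each completion rate applies to that year's entering class is ours, not the report's.

```python
# TMA first-year enrollment and completion rates from the text.
# Assumption (not stated in the report): each completion rate
# applies to that year's entering class.
COHORTS = {1989: (259, 0.55), 1995: (158, 0.77)}

for year, (enrolled, completion_rate) in COHORTS.items():
    completers = enrolled * completion_rate
    print(f"{year}: {enrolled} enrolled x {completion_rate:.0%} -> ~{completers:.0f} completers")
```

Under this assumption, the 1995 class would produce roughly as many completers as the much larger 1989 class, consistent with officials' explanation that higher admission standards yielded stronger candidates.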
Program officials explained that fewer people were accepted into the program because admission standards rose, resulting in stronger candidates with a greater probability of finishing the program. TMA monitors classroom training through student evaluation of instructors, employer feedback, and visits to classrooms twice a year. Instructors have been fired in the past on the basis of these monitoring activities. Although EBA has no specific performance measures, it relies on feedback from employers and apprentices about the program and takes corrective action as necessary. Although only one-third of EBA’s first-year apprentices are employed with a member employer, apprentices who have finished the classroom training and are earning OJT hours are all employed by a member employer. EBA officials said the difficulty they have placing apprentices is not due to the quality of the program but the lack of steady work in the industry. OJT is not generally monitored by BAT, TMA, or EBA. TMA and EBA program officials said they cannot tell employers how to train workers. TMA and EBA officials said some employers use apprentices as cheap labor and others might not teach them all aspects of the trade. Program officials said that in some cases apprentices have finished their OJT hours without learning all the skills they should have. EBA’s report card, which apprentices use to track their activities, helps ensure that they are receiving adequate training.
Revae Moran, Issue Area Manager
Dianne Murphy-Blank, Senior Methodologist
Nancy Kawahara, Senior Evaluator
Tom Jessor, Senior Evaluator
Pursuant to a congressional request, GAO reviewed small business employers' participation in federal training programs, focusing on: (1) the extent of their participation; (2) barriers that limit their participation; and (3) options for overcoming those barriers. GAO found that: (1) small businesses are less likely to use training programs than larger employers because of the prohibitive costs associated with training programs, the time needed to comply with administrative requirements, the reduced productivity of workers during training, and high attrition rates associated with newly trained employees; (2) training programs' institutional requirements may discourage or disqualify small employers from participation; (3) small employers' limited knowledge of the training programs and their needs may further prevent their participation; (4) while a reduction of institutional barriers may require fundamental changes to training programs, alliances among small employers have helped to reduce economic and informational barriers to their participation; and (5) technical assistance from training programs helped reduce economic barriers to participation for small employers that did not want to become involved with alliances or had no access to alliances.
Both the UN and United States have a long history of peace support operations upon which to base cost estimates. The UN has carried out 60 peacekeeping missions worldwide since 1948. For each mission, the UN Department of Peacekeeping Operations (DPKO) prepares a budget, which is reviewed in detail at high levels of UN management and is ultimately approved by the General Assembly. The UN assesses each member for its allocated portion of this amount based on the country’s per capita gross national income and its membership status on the Security Council. The United States currently pays about 27 percent of the total for each mission, and in fiscal year 2005 directly contributed about $1.3 billion in support of UN peacekeeping operations overall. The United States has also led and participated in a variety of peacekeeping operations since World War II, most recently in Somalia, Haiti, Bosnia, and Kosovo. U.S. military operations are funded largely by DOD appropriations, and, under DOD regulations, the budgets are based primarily on cost estimates generated with the department’s Contingency Operations Support Tool. This computer model uses financial formulas that draw upon a database of historical costs from past military operations and other regularly updated cost information. In addition, the State Department has extensive experience posting foreign service officers in conflict areas and funding U.S. police officers to support UN peacekeeping missions, for which it maintains cost estimation formulas and historical cost databases. The UN Security Council has authorized five peacekeeping missions in Haiti since 1993, of which the United States has led two between 1994 and 2004. The primary task of the ongoing MINUSTAH operation is to provide a secure and stable environment through its military and police presence and operational support to the Haitian National Police. 
MINUSTAH assists the transitional government in police reform and institutional strengthening; disarmament, demobilization, and reintegration; elections monitoring; and promotion and protection of human rights and the political process. The initial authorized force strength was 6,700 troops, 1,622 civilian police officers, and 1,697 civilian administrators and staff. Although initially authorized for 6 months, the UN Security Council has renewed the mission’s authorization and funding through June 2006. Criticism and controversy, including allegations of sexual misconduct by peacekeepers, have brought calls for reform of UN peacekeeping operations within the UN and from U.S. observers. In 2000, the UN Secretary General convened a high-level panel to review UN peace and security; the panel recommended a variety of reforms. The 2005 bipartisan Task Force on the United Nations highlighted the need for more rapid deployment and more clearly defined mandates. Proposed legislation, the Henry J. Hyde United Nations Reform Act of 2005, calls for greater oversight and investigation of UN operations and mandates that the UN adopt and enforce a code of conduct for all peacekeeping personnel. We estimate that it would cost the United States twice as much as the UN to conduct an operation similar to MINUSTAH. The higher U.S. costs of civilian police, military pay and support, and facilities account for virtually the entire difference between our estimate and the MINUSTAH budget and reflect the additional cost of ensuring high U.S. standards for training, troop welfare, and personnel security. From May 1, 2004, to June 30, 2005—the first 14 months of MINUSTAH—the UN budgeted mission costs totaled $428 million. This budget assumed a phased deployment of 6,700 military personnel, 750 personnel in formed police units, 872 civilian police officers, and 1,184 civilian administrators and staff.
It included the cost of personnel, operational support, equipment, facilities, and transportation. Using the same basic parameters of troop and staff deployment in Haiti for 14 months, we estimated that the United States would likely budget about $876 million, nearly twice the UN estimate, for a comparable U.S. peacekeeping operation. (This cost estimate is based on a variety of assumptions, described in detail in app. I.) The United States was financially responsible for $116 million of the budgeted cost of MINUSTAH, based on the U.S. assessed contribution of 27.1 percent of the DPKO regular budget. Hence, we estimate that conducting a U.S. operation similar to MINUSTAH would cost the United States about 7.5 times as much as its official contribution to the UN for that mission ($876 million versus $116 million). Major disparities in the cost for civilian police, military pay and support, and facilities account for virtually all of the difference between the UN budget and our cost estimate. Our estimate reflects the additional expense of paying salaries for personnel that would otherwise be donated by other countries as well as the cost of ensuring U.S. standards for police training, the equipment and welfare of military personnel, and the security of staff posted overseas. (See table 1 for a detailed comparison of the UN budget and our estimate by major cost category.) Civilian police. The UN budgeted $25 million to deploy 872 civilian officers for MINUSTAH, while we estimate that it would cost the United States $217 million to deploy the same number of U.S. officers. The UN does not reimburse countries contributing police for the officers’ salaries and only pays for living expenses, clothing allowance, and death and disability compensation. U.S. costs, however, include salaries, special pay, benefits, equipment, and special training. Furthermore, U.S. 
officers deployed in Haiti under MINUSTAH are required to meet standards for training, experience, and skills significantly beyond those applied by the UN. For instance, U.S. officers deployed to Haiti must be proficient in French or Haitian Creole and have a minimum of 8 years’ work experience, with 5 years as a sworn civilian law enforcement officer. Candidates must pass several tests that measure physical capabilities and weapons proficiency. UN-sponsored officers deployed to Haiti are required by the UN to demonstrate only the ability to operate a firearm and drive a vehicle; the ability to communicate in French is preferred but not required. Military pay and support. The UN budgeted $131 million for pay and support of military troops, while we estimate it would cost the United States $260 million for the same number of soldiers. The UN costs are based primarily on a per-soldier payment to contributing nations of up to $1,400 monthly for basic pay and allowances, clothing, gear, equipment, and ammunition. U.S. costs include pay and allowances for reservists and active duty personnel as well as clothing, arms, protective gear, and rations. The higher U.S. costs help ensure a basic standard of living for U.S. soldiers and their families and relatively high standards of welfare in the field in terms of equipment, nutrition, health, and morale. For example, estimated costs for food and water for U.S. military personnel total $85 million, compared to $20 million in the UN budget. Medical support for the military and civilian personnel on a U.S. operation would cost an estimated $22 million, over four times the UN budgeted cost of $5 million. According to officials of the Joint Chiefs of Staff, UN multinational forces in Haiti prior to MINUSTAH had difficulty providing adequate troop support and relied on accompanying U.S. forces for supplementary rations and health care.
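As a quick arithmetic check on the overall comparison, the headline figures stated earlier (the $428 million UN budget, the 27.1 percent U.S. assessment, and the $876 million U.S. estimate) can be combined in a minimal Python sketch:

```python
# All amounts in millions of dollars, taken from the figures in the text.
UN_BUDGET = 428        # UN budgeted cost of MINUSTAH, May 2004 - June 2005
US_SHARE_RATE = 0.271  # U.S. assessed contribution to the UN peacekeeping budget
US_ESTIMATE = 876      # estimated cost of a comparable U.S.-run operation

# U.S. assessed share of the MINUSTAH budget (about $116 million).
us_contribution = UN_BUDGET * US_SHARE_RATE

print(f"U.S. assessed share:          ${us_contribution:.0f} million")
print(f"U.S. estimate vs. UN budget:  {US_ESTIMATE / UN_BUDGET:.2f}x")       # "nearly twice"
print(f"U.S. estimate vs. U.S. share: {US_ESTIMATE / us_contribution:.2f}x")  # "about 7.5 times"
```

The computed ratios match the report's characterizations of "nearly twice" the UN budget and "about 7.5 times" the U.S. assessed contribution.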
A RAND Corporation study of the multinational force cooperation in Haiti in 1994 indicates that the U.S. forces provided UN forces with intelligence and training, as well as logistical and communications support, including housing, food, transportation, and vehicle maintenance. Facilities. The UN budgeted $100 million for facilities-related costs, while we estimate that the cost to the United States would be $208 million. The UN budget includes acquisition and construction of troop and civilian housing and other facility-related equipment and supplies. While MINUSTAH staff offices are donated by the Government of Haiti, U.S. facilities must meet State Department security standards, which include posting civilian staff within secure U.S. embassy or consulate compounds. In addition to administrative and security expenses, U.S. government agencies with staff in these compounds would be required to contribute a total of about $12 million to the State Department’s Capital Security Cost-Sharing Program, which funds the construction of secure embassies worldwide. Estimated costs in other categories are likely to be similar for the UN and the United States. For example, we estimate that the transport of U.S. troops, civilian personnel, and equipment would cost about $100 million; the UN budgeted $94 million for these costs. Various military and nonmilitary factors can influence the composition of a peacekeeping operation and thus affect the estimated cost. We identified three different military scenarios that could substantially affect the estimated costs of a U.S. peacekeeping operation. Greater concentration of reserve troops could almost double the military costs, while a quicker deployment of forces and higher operational tempo would also increase costs. Further, the addition of nation-building and development assistance activities to the scope of an operation in Haiti would increase the estimated cost substantially. According to U.S.
experts in military operations and cost estimation we consulted, various factors could significantly influence the cost estimate for a U.S. peacekeeping operation. These factors include the number of troops and types of military units deployed, the pace of deployment, the intensity or operational tempo, the modes of transportation for deployment, and the mix of active duty and reserve troops. These factors depend heavily on the needs of the operation and the demands of other military commitments; decisions about such factors involve complex military, political, and financial considerations that can change rapidly. We analyzed the potential impact of three principal cost factors by altering the assumptions of our cost estimate to reflect (1) military forces composed entirely of reserve soldiers, (2) deployment of military forces within the first 60 days of the operation rather than 180 days, and (3) a higher operational tempo (more intensive use of vehicles and equipment). Figure 2 illustrates how altering the assumptions for these factors affects the cost estimate.

Deployment of all-reserve forces. Our base cost estimate assumes that the military contingent of a U.S. operation would consist primarily of active duty forces (85 percent). Officials from the Joint Chiefs of Staff confirmed that this is one of a number of possible scenarios, depending on the availability of active duty and reserve troops, ongoing military commitments, specific operational needs, and other factors. A change in this fundamental assumption can have a significant impact on the estimated cost of the operation, as pay for troops is one of the largest components of the estimate. We altered this assumption to reflect an operation composed entirely of reserve forces, which increased the cost estimate by $477 million.
This difference has such a significant impact because DOD does not include regular pay for active duty troops in its cost estimates; the department would incur these costs regardless of whether the troops were deployed in Haiti, the United States, or elsewhere. In contrast, pay for reserve troops is considered a direct cost of the operation, since DOD would pay reservists full salaries only when activated for the operation.

More rapid deployment. Although the UN Security Council Resolution establishing MINUSTAH calls for an immediate deployment of peacekeeping forces, the MINUSTAH budget reflects full military deployment within 180 days of mission authorization. Thus, similar to the UN budget, our base cost estimate assumes a military force strength below authorized levels during the first six months of the operation. We altered this assumption to reflect full deployment within the first 60 days. We estimate that this would increase U.S. costs by about $60 million, consisting primarily of military pay and support for additional troops deployed during the operation’s initial months.

Higher operational tempo. DOD measures the intensity of a military operation, or operational tempo, on a scale from 1 to 3, with normal operations being level 1. The higher the operational tempo, the more heavily the forces use equipment and vehicles and the higher the cost for fuel, operations, and maintenance. Military experts we consulted at the Institute for Defense Analyses and the Joint Chiefs of Staff indicated that a peacekeeping operational tempo would normally be considered to be at level 1.5, as reflected in our base cost estimate. We altered this assumption to reflect a slightly higher operational tempo, level 2, which increased the estimated military costs by $23 million due to increased equipment, maintenance, and other support costs.
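The three sensitivity scenarios can be tabulated in a short sketch. All dollar figures are the report's own estimates, in millions; treating each delta as an increase to the $876 million base estimate is a simplification, since the tempo delta applies specifically to military costs:

```python
# Tabulate the report's three sensitivity scenarios (illustrative only).
# Dollar figures, in millions, are taken directly from the report.
BASE_ESTIMATE = 876  # estimated total cost of a U.S. operation in Haiti

scenario_deltas = {
    "all-reserve force": 477,              # reservist pay is a direct cost
    "full deployment within 60 days": 60,  # added pay/support early on
    "operational tempo level 2": 23,       # more fuel and maintenance
}

for scenario, delta in scenario_deltas.items():
    total = BASE_ESTIMATE + delta
    print(f"{scenario}: ${total} million (+{delta / BASE_ESTIMATE:.0%})")
```

The all-reserve scenario dominates: its $477 million delta raises the estimate by more than half, while faster deployment and higher tempo add roughly 7 and 3 percent, respectively.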
Our estimate does not include costs for complementary nation-building and development activities, which would be needed to support the economic and political goals of a peacekeeping operation. In 2004, to bolster MINUSTAH, official donors agreed with the Government of Haiti on an Interim Cooperation Framework, under which they pledged a total of $1.3 billion for an array of activities to strengthen political governance and promote national dialogue, strengthen economic governance and contribute to institutional development, promote economic recovery, and improve access to basic services. From July 2004 to March 2005, bilateral and multilateral donors spent more than $382 million for such activities (see table 2). The United States directly funded about 27 percent of this total, or $102 million, through its bilateral aid programs in Haiti. The United States has made additional contributions to this aid effort through its financial support of UN agencies and multilateral financial organizations, including the World Bank and the Inter-American Development Bank. Table 2 shows the distribution of funding for these activities by donor and type of activity. Our cost estimate assumes that the United States and other donors would spend the same amount on these programs and activities regardless of whether the United States undertook a peacekeeping operation in Haiti. Historically, the United States has depended on other official donors and multilateral organizations to participate in reconstruction and rebuilding efforts following an armed conflict.

In addition to cost, other factors would be considered when determining the most appropriate role of the United States and the UN in conducting peacekeeping operations. The United States and the UN each have strengths that can affect the achievement of peacekeeping objectives in Haiti. Past U.S.
operations in Haiti have benefited from strong central communications, command, and control structures and a vast military infrastructure supporting those operations, particularly in terms of troop deployment, military intelligence, and public information. Among the strengths of a UN mission are its multinational participation, its extensive experience in peace operations, and a coordinated network of agencies to assist nation building.

U.S. peacekeeping operations have benefited from strong communications, command and control structures, direct access to well-trained military personnel and equipment, and other advantages of a large, well-established military infrastructure. U.S.-led peacekeeping efforts in Haiti have been widely recognized as operationally effective, having achieved their military objectives rapidly and with minimal loss of life. As we previously reported, U.S. leadership has enhanced the operational effectiveness of UN peacekeeping in Haiti. In the 1995 UN Mission to Haiti, the United States provided leadership to multinational forces that ensured adequate troops and resources were available to carry out assigned tasks, used its command and control structure for the operation, and applied its doctrine for “operations other than war” to help guide actions. Officials from the Joint Chiefs of Staff with experience in more recent multinational forces in Haiti also highlighted rigorous training, a reliable communications infrastructure, and a cohesive command structure as key factors that made U.S. forces operationally effective there. Furthermore, by virtue of the vast U.S. military infrastructure of DOD and other U.S. agencies, U.S. peacekeeping forces have many elements that UN peacekeeping studies have identified as critical for mission effectiveness, particularly in Haiti. In March 2000, the UN high-level panel reviewing UN peace and security identified elements critical to effective peacekeeping.
In May 2005, a UN Security Council evaluation of MINUSTAH emphasized the particular importance of three of these elements for operations in Haiti—rapid troop deployment, effective tactical intelligence, and a public information strategy—noting that MINUSTAH was hindered by weaknesses in these elements. Dedicated DOD organizations support U.S. military operations in these three elements and have contributed to military successes in past operations in Haiti. Funding for these organizations is not reflected in cost estimates in this report because they are part of the infrastructure that supports all DOD objectives and operations, and their costs are not readily attributable to specific contingency operations.

Rapid deployment. The 2000 UN report on peacekeeping indicated that it was important to fully deploy an operation within 30 to 90 days after the adoption of a Security Council Resolution establishing the mission. According to the report, the first 6 to 12 weeks following a ceasefire or peace accord are often the most critical for establishing a stable peace and a credible new operation; opportunities lost during that period are hard to regain. At DOD, the Deputy Under Secretary of Defense for Readiness is responsible for developing and overseeing policies and programs, including training, to ensure the readiness of U.S. forces for peacetime contingencies, crises, and warfighting. Military readiness of both personnel and equipment is a major objective throughout DOD. The department spends more than $17 billion annually for military schools that offer nearly 30,000 military training courses to almost 3 million military personnel and DOD civilians. With continued heavy military involvement in operations in Iraq and Afghanistan, DOD is also spending billions of dollars sustaining or replacing its inventory of key equipment items. The United States has historically deployed troops in Haiti relatively rapidly. (Fig. 3 illustrates deployment of U.S. marines in Haiti.)
For example, in 1994 the United States deployed an operation in Haiti within 60 days of the issuance of a UN Security Council Resolution authorizing the restoration of Haiti’s constitutionally elected leadership to power. The 20,000-member force quickly established itself in 500 locations throughout Haiti and achieved its primary goals within 76 days.

Intelligence apparatus. The 2000 UN report on peacekeeping indicated that missions should be afforded the necessary field intelligence and other capabilities to mount an effective deterrence against violent challengers. For its intelligence needs in an operation in Haiti, DOD can draw upon the extensive resources of the U.S. intelligence community, consisting of a wide array of agencies, departments, and offices throughout the U.S. government. The Defense Intelligence Agency, for example, with more than 7,500 military and civilian employees worldwide, produces and manages foreign military intelligence for warfighters, defense policymakers, and force planners in support of U.S. military planning and operations. The Central Intelligence Agency and the U.S. Navy, Army, Marine Corps, and Air Force, among other organizations, also provide intelligence support to U.S. military operations. U.S. forces had these resources at their disposal when they led multinational forces in Haiti in 1994-95, successfully disbanding the Haitian army and paramilitary groups and confiscating the weapons caches held by government opponents within 7 months.

Public information. The 2000 UN report indicated that an effective communications and public information capacity is an operational necessity for nearly all UN peacekeeping operations. According to the report, “effective communication helps to dispel rumor, to counter disinformation, and secure the cooperation of local populations.” Furthermore, it can provide leverage in dealing with leaders of local rival groups and enhance the security of UN personnel.
The report recommends that such strategies and the personnel to carry them out be included in the very first elements deployed to help start up a mission. At DOD, the Assistant Secretary of Defense for Public Affairs is responsible for developing programs and plans relating to DOD news media relations, public information, internal information, community relations, and public affairs in support of DOD objectives and operations. DOD developed a public affairs strategy that was a central element of the operation it led in Haiti in 2004; it included issuing regular press releases and briefing local and international media frequently on the progress and developments of the operation. U.S. military forces in Haiti were met with relatively little violent opposition, resulting in a minimal loss of life, either Haitian or American.

The UN’s strengths in peacekeeping in Haiti are rooted in the multinational character of its operation as well as its extensive experience with peacekeeping and related nation building. This experience has enabled the UN to develop a structure for coordinating international organizations involved in nation building and has given it access to a pool of experienced and skilled international civil servants, including personnel with diverse language capabilities.

Multinational participation. The multinational cooperation on UN peacekeeping missions, such as MINUSTAH, provides some notable advantages. According to a 2005 study sponsored by the Rand Corporation, the UN may have the ability to compensate for its relatively small military presence with its reputation for international legitimacy and local impartiality. Furthermore, its multinational character likely lends the UN a reputation for impartiality that a single nation may not enjoy. The study concluded that this has afforded the UN a degree of success with relatively small missions that include both security and nation-building components.
MINUSTAH represents a multinational effort that is not dominated by any single country. (Fig. 4 illustrates multinational peacekeeping operations under MINUSTAH.) During its first year of operation in Haiti, MINUSTAH comprised 7,624 military staff and police personnel from 41 countries. Unlike earlier U.S.-led operations, where U.S. troops represented up to 90 percent of military personnel, U.S. participation on the ground in MINUSTAH was limited to 29 U.S. military and police personnel—less than 1 percent of the total. As officials of the Joint Chiefs of Staff pointed out, development of coalition partners through multinational operations is important not only for strengthening ongoing and future operations in Haiti, but also for building strong international capacity for facing future military challenges globally. The advantages for the United States include a lower overall cost for peacekeeping and reduced exposure of U.S. personnel to the inherent dangers of operating in conflict zones. However, according to DOD and State Department officials, the multinational nature of a military force may also limit its operational effectiveness by introducing variations in training among the personnel from different nations and difficulties in communications, command, and control.

Experienced peacekeeping officials. The UN has developed a cadre of senior officials that has gained experience with peacekeeping and nation-building activities over many missions. While there are acknowledged deficiencies in UN peace operations, the UN established a best practices unit in the Department of Peacekeeping Operations (DPKO) in 1995 to study and adopt lessons learned. Senior MINUSTAH officials, including the Chilean UN Special Representative and his deputies, the Brazilian Force Commander, and the Canadian Police Commissioner, bring experience in peacekeeping and development activities from diverse geographic areas, and particularly from other countries in the region.
The international nature of the UN also provides access to a large pool of civil servants and security personnel with native language speaking abilities and translation skills. In Haiti, 11 French-speaking countries have provided peacekeeping troops and police officers for MINUSTAH.

Structure for coordinating international assistance. The UN has fostered a network of agencies and development banks. UN peacekeeping missions can draw directly upon this network in coordinating the extensive humanitarian and developmental activities that are related to operations with expansive, integrated mandates that include nation building. In Haiti, MINUSTAH has established a framework for coordination integral to the mission’s organization. With UN co-sponsorship, official donors in this network, including the World Bank and the Inter-American Development Bank, have pledged $1.3 billion in development assistance. The UN Development Program coordinates the efforts of nine agencies in Haiti, which, during the first year of MINUSTAH, disbursed $60 million in development assistance. To help ensure that these funds are well coordinated and support MINUSTAH’s objectives, these UN agencies operating in Haiti report directly to a senior MINUSTAH official, who also serves as the chief UN Development Program representative for Haiti.

While a U.S. peacekeeping operation in Haiti would be more expensive than the current UN mission, it would be subject to higher operational standards and supported by an extensive military infrastructure. Strong, well-trained, and quickly deployed U.S. forces have proven militarily effective in short-term operations in Haiti in the past. However, involving the international community extensively in peacekeeping operations such as MINUSTAH has notable advantages for leveraging development funding, experience, and other resources of nations and organizations.
The situation in other peacekeeping missions may differ significantly from the conditions in Haiti, and complex domestic and international political considerations may ultimately weigh heavily in determining the role of U.S. and UN peacekeepers in future operations. Chief among these are the political interests of the United States and other UN member states.

We provided a draft of this report to the Departments of Defense and State and the United Nations for their comment. They provided technical corrections, which we incorporated into the report as appropriate, but they had no further comments.

We are sending copies of this report to the Secretaries of Defense and State and the Secretary-General of the United Nations. We will also make copies available to others on request. In addition, it will be available at no cost on our Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-8979 or christoffj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO contacts and staff acknowledgments are listed in appendix II.

To compare the cost of a specific United Nations (UN) mission with the cost that the United States would have incurred had an operation been deemed in the U.S. national interest and undertaken without UN involvement, we obtained and analyzed cost data from the UN and the U.S. government. To determine the UN costs for peacekeeping operations, we analyzed the 2004-2005 budget and supporting documents for the UN Stabilization Mission to Haiti (MINUSTAH). We spoke with officials of the UN Department of Peacekeeping Operations and other UN departments, offices, and agencies at UN headquarters in New York about the assumptions, cost factors and ratios, and cost estimation methods used to generate the budget.
We used MINUSTAH as our case study because it illustrates the various categories of cost for a contemporary mission located in a country where the United States has an expressed national interest. Additionally, we believe this case provides a strong basis for estimating costs, given the long history of U.S. and UN military intervention in Haiti. We chose the period May 1, 2004 to June 30, 2005 for analysis because it reflected the first approved mission budget and incorporated the initial start-up costs. According to UN officials, the budget provides a reasonable estimate of costs, though actual expenses may vary from the budget. We also discussed with UN officials the methodology for determining the U.S. assessment for MINUSTAH, which was 27.1 percent of the mission budget. We did not include peacekeeping support costs, which are indirect costs allocated to the mission for overhead and administrative expenses incurred outside of Haiti (at UN headquarters and the UN Logistics Base in Brindisi, Italy), as the U.S. government does not allocate corresponding overhead and administrative costs to individual operations in a comparable way.

To estimate the military costs of a unilateral U.S. operation, we developed a comparable U.S. operational scenario based on the MINUSTAH budget and supporting documents, assuming deployment of the same number of military, civilian, and police peacekeeping personnel and aircraft in Haiti over a similar period of time (14 months). To devise the military portion of the scenario, we interviewed DOD officials and contractor staff involved in developing cost estimates for U.S. contingency operations.
The Department of Defense (DOD) Office of the Comptroller and its contractor, the Institute for Defense Analyses (IDA), generated cost estimates for the military components of this scenario using DOD’s Contingency Operations Support Tool (COST), since DOD financial management regulations designate COST as the department’s common cost-estimating platform. The cost estimate DOD provided included only the incremental costs of the operation—those directly attributable to the operation that would not be incurred if the operation did not take place. We based the scenario, and hence the cost estimate, on the following assumptions, which correspond closely with MINUSTAH budget assumptions and actual UN personnel deployments.

Military contingents: 6,594 total personnel, divided as follows: hospital units, 500 personnel; military police, 820 personnel; light infantry, 5,074 personnel, less the number of aviation support personnel for 8 UH-60 Black Hawk and 10 CH-47 Chinook helicopters.

Type of military personnel: 85 percent active duty, 15 percent reserve.

Theater of operations: Haiti.

Operation dates: orders provided April 30, 2004; costs end June 30, 2005.

Deployment schedule: gradual deployment to theater over a 180-day period; 30-day pre-deployment and deployment phase for active duty units and 60 days for reserve units; 6-month rotation period for all units; 7-day re-deployment for all units.

Operational tempo: level 1.5 for pre-deployment, deployment, and sustainment.

Housing: construction of troop housing equivalent to semi-rigid soft-wall dormitory tents.

Transportation: departure from Columbus, Georgia, to Port-au-Prince, Haiti; personnel deployed and rotated by commercial air; all equipment shipped by sea.

We obtained input on the scenario design from DOD’s Joint Chiefs of Staff, who validated it as reasonable. However, the military component of the scenario and the corresponding cost estimate have some limitations. An actual U.S.
military plan may differ significantly from the UN plan, due to differences between U.S. and UN military infrastructures in operations, structure, doctrine, and circumstances at the time of the operation. Additionally, we did not include reconstitution—the cost of returning equipment to usable standards after an operation—in our cost estimate, since the UN does not include this cost in its peacekeeping mission budgets, and we assumed that reconstitution would occur after the initial budget cycle on which our comparison is based. Further, some cost factors used in COST, such as some pre-deployment costs and transportation for certain supplies and mail, are based on various contingency operations, such as Operation Iraqi Freedom, and may not be representative of costs in Haiti.

To estimate civilian police costs, we obtained and analyzed data from the Department of State’s Bureau for International Narcotics and Law Enforcement Affairs on actual contract costs for providing civilian police to support UN missions. As these contracts do not include the costs for daily subsistence and transportation, we calculated these additional costs based on the U.S. government meals and incidental expense rate for Port-au-Prince and published contract airfare schedules. We applied the average costs per officer to the total number of civilian police officers included in the MINUSTAH budget. Costs for formed police units were not calculated in this manner, as we assumed that such personnel would be provided as military police in the military portion of the operation and are included in that estimate.

To estimate U.S.
civilian personnel costs for the operation, we obtained and analyzed data from the Department of State to determine the average annual cost of a foreign service officer in Haiti during fiscal year 2005, including salary and benefits, office furnishings, housing, residential furnishings, post differential, airfare, shipping, rest and recuperation, danger pay, cost of living adjustments, educational allowance for one child, and miscellaneous expenses. We applied this average cost to the number of non-administrative international staff included in the MINUSTAH budget. (We subtracted several senior executive positions from this number, as the ambassador, the U.S. Agency for International Development mission director, and other senior U.S. officials already posted to Haiti would likely perform these functions.)

To estimate the cost of locally employed national staff, we obtained staffing information for the U.S. embassy in Port-au-Prince for fiscal years 2004 and 2005 from the Department of State and calculated the average annual salary for locally employed national staff in Haiti. We applied this figure to the number of non-administrative national staff included in the MINUSTAH budget. We calculated benefits for this staff at 27.6 percent of salaries, per information on these costs provided by the Department of State.

To estimate civilian facilities and administrative costs, we obtained and analyzed data provided by the Department of State and the U.S. embassy in Port-au-Prince. The department’s Capital Security Cost-Sharing Program requires agencies posting staff overseas to pay fees into a cost-sharing pool that funds construction of secure embassies and consulates. We used data on these fees to calculate the total cost-sharing fee for the civilian staff in our U.S. operational scenario for Haiti.
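The locally employed staff calculation described above reduces to a simple formula: headcount times average salary, plus a 27.6 percent benefits load. A sketch with hypothetical inputs, since the report does not publish the underlying salary and headcount figures:

```python
# Sketch of the locally employed national staff cost method (illustrative).
# The 27.6 percent benefits rate comes from the report; the headcount and
# salary below are hypothetical placeholders, not report data.
BENEFITS_RATE = 0.276  # benefits as a share of salaries (per State data)

def national_staff_cost(staff_count: int, avg_annual_salary: float) -> float:
    """Annual cost of locally employed staff: salaries plus benefits."""
    salaries = staff_count * avg_annual_salary
    return salaries * (1 + BENEFITS_RATE)

# Hypothetical example: 100 staff at an average annual salary of $10,000.
print(round(national_staff_cost(100, 10_000)))  # 1276000
```

The same pattern, an average unit cost applied to a MINUSTAH-budget headcount, underlies the report's estimates for civilian police, foreign service officers, and administrative support.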
To determine administrative support costs, we obtained and analyzed cost data from the Department of State’s International Cooperative Administrative Support Services program for the Port-au-Prince embassy for fiscal year 2004. We calculated the average administrative cost per non-administrative foreign service officer and applied this amount to the total number of non-administrative civilian personnel in the MINUSTAH budget.

To estimate the cost of deploying civilian volunteers, we obtained and analyzed data from two U.S.-based nongovernmental organizations that contract with the U.S. government to provide volunteers for development and humanitarian activities overseas. These organizations provided cost estimates for deploying 153 volunteers in Haiti for 14 months, which corresponds to the parameters of the MINUSTAH budget for volunteers. Our cost estimate includes the average of these two estimates.

For all of the cost data used in these estimates, we obtained and analyzed supporting information or discussed the data source with the corresponding officials and determined that the data were sufficiently reliable for the purposes of this report.

To analyze factors that could substantially affect the estimated costs of the U.S. operation, we developed alternative scenarios and cost estimates, varying one major assumption for each scenario. We identified the assumptions to vary through discussions with DOD and Institute for Defense Analyses officials, who identified the factors they believed, based on their experience, would have the most influence on the cost estimate for the operation’s military component. The three variations we selected were (1) an all-reserve force, (2) deployment of all troops in Haiti within the first 60 days of the operation, and (3) an operational tempo of 2.
DOD generated alternative cost estimates for each scenario, using COST, and we compared these with the base estimate to identify and explain the major differences associated with each alternative scenario.

To identify and assess the strengths of the United States and the UN in leading peacekeeping operations in Haiti, we obtained and analyzed UN reports and evaluations relating to MINUSTAH and information on past U.S.-led operations in Haiti. We interviewed officials from DOD, the Department of State, and the UN, as well as peacekeeping experts from the Stimson Center in Washington, D.C., to discuss their views on factors that contribute to successful peacekeeping operations. We also reviewed published reports from various organizations relating to the effectiveness of UN and U.S. peacekeeping operations. We conducted our review from June 2005 through February 2006 in accordance with generally accepted government auditing standards.

Key contributors to this report include Tetsuo Miyabara (Assistant Director), James Michels, Charles Perdue, Kendall Schaefer, Suzanne Sapp, Grace Lui, Lynn Cothern, Joseph Carney, and Sharron Candon.
The UN employs about 85,000 military and civilian personnel in peacekeeping operations in 16 countries. The United States has provided about $1 billion annually to support UN peacekeeping operations. In addition, the United States has led and participated in many such operations. UN reports and congressional hearings have raised concerns about accountability for UN peacekeeping operations and the need for reforms. We were asked to provide information relating to the cost and relative strengths of UN and U.S. peacekeeping. In particular, we have (1) compared the cost of the ongoing UN Stabilization Mission in Haiti with the cost that the United States would have incurred had an operation been deemed in the U.S. national interest and undertaken without UN involvement; (2) analyzed factors that could materially affect the estimated costs of a U.S. operation; and (3) identified the strengths of the United States and the UN for leading the operation. We developed our cost estimate of a U.S.-led operation using cost models from the Departments of Defense and State. The estimate is based on various military assumptions, such as the use of primarily active duty troops. It includes only those costs directly attributable to the operation that would not otherwise be incurred. We estimate that it would cost the United States about twice as much as the United Nations (UN) to conduct a peacekeeping operation similar to the current UN Stabilization Mission in Haiti (designated "MINUSTAH"). The UN budgeted $428 million for the first 14 months of this mission. A U.S. operation in Haiti of the same size and duration would cost an estimated $876 million, far exceeding the U.S. contribution for MINUSTAH of $116 million. Virtually all of the cost difference is attributable to (1) civilian police, (2) military pay and support, and (3) facilities, and reflects high U.S. standards for police training, troop welfare, and security. 
Various military and nonmilitary factors can substantially affect the estimated costs of a U.S. operation. We analyzed three military factors: the mix of reserve and active duty troops, the rate of deployment, and the operational tempo. Deploying an all-reserve force would increase the cost estimate by $477 million, since reservists are paid full salaries only when activated, making their pay a direct cost of the operation. Deploying troops at a faster rate than the UN--within the first 60 days instead of 180--would cost an additional $60 million. Conducting the operation at a higher tempo--with more intensive use of vehicles and equipment--would increase estimated costs by $23 million. In addition to military considerations, including nation-building and development assistance activities in the scope of the operation would increase the cost significantly. Official donors, including the United States, distributed $382 million for these activities during the first year of MINUSTAH. Cost is not the sole factor in determining whether the United States or the UN should lead an operation, and each offers strengths for this responsibility. U.S.-led operations in Haiti between 1994 and 2004 benefited from a vast military infrastructure, which provided strong communications, command and control, readiness to deploy, tactical intelligence, and public information. The UN's strengths include multinational participation, extensive peacekeeping experience, and an existing structure for coordinating nation-building activities. Complex political considerations are likely to influence decisions about the role of the United States and the UN in peacekeeping.